# vagrant-k8s

Scripts to create a libvirt lab with Vagrant and prepare everything needed for a Kubernetes (k8s) deployment with Kargo.
## Requirements

- libvirt
- vagrant
- vagrant-libvirt plugin (`vagrant plugin install vagrant-libvirt`)
- `$USER` should be able to connect to libvirt (test with `virsh list --all`)
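The checks above can be scripted. A minimal preflight sketch; the `have` and `check_requirements` helpers are illustrative, not part of this repo:

```sh
# Illustrative preflight check for the requirements above (helpers are ours).
have() { command -v "$1" >/dev/null 2>&1; }

check_requirements() {
  rc=0
  have virsh   || { echo "libvirt (virsh) not found"; rc=1; }
  have vagrant || { echo "vagrant not found"; rc=1; }
  # The plugin check only makes sense when vagrant itself is present
  if have vagrant; then
    vagrant plugin list | grep -q vagrant-libvirt \
      || { echo "vagrant-libvirt plugin missing"; rc=1; }
  fi
  # Verify $USER can actually talk to libvirt
  if have virsh; then
    virsh list --all >/dev/null 2>&1 \
      || { echo "no libvirt access for $USER"; rc=1; }
  fi
  return $rc
}

check_requirements && echo "all requirements satisfied" || echo "some requirements missing"
```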
## Vagrant lab preparation

- Change the default IP pool for vagrant networks if you want:

```sh
export VAGRANT_POOL="10.100.0.0/16"
```

- Clone this repo:

```sh
git clone https://github.com/adidenko/vagrant-k8s
cd vagrant-k8s
```

- Prepare the virtual lab:

```sh
vagrant up
```
## Deployment on a lab

- Log in to the master node and sudo to root:

```sh
vagrant ssh $USER-k8s-00
sudo su -
```

- Clone this repo:

```sh
git clone https://github.com/adidenko/vagrant-k8s ~/mcp
```

- Install required software and pull needed repos:

```sh
cd ~/mcp
./bootstrap-master.sh
```

- Check the `nodes` list and make sure you have SSH access to them:

```sh
cd ~/mcp
cat nodes
ansible all -m ping -i nodes_to_inv.py
```

- Deploy k8s using Kargo playbooks:

```sh
cd ~/mcp
./deploy-k8s.kargo.sh
```
- Deploy OpenStack CCP:

```sh
cd ~/mcp
# Build CCP images
ansible-playbook -i nodes_to_inv.py playbooks/ccp-build.yaml
# Deploy CCP
ansible-playbook -i nodes_to_inv.py playbooks/ccp-deploy.yaml
```

- Wait for the CCP deployment to complete:

```sh
# On the k8s master node
# Check CCP pods, all should become Running
kubectl --namespace=openstack get pods -o wide
# Check CCP jobs status, wait until all complete
kubectl --namespace=openstack get jobs
```
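Rather than re-running `kubectl get jobs` by hand, the wait can be scripted. A hedged sketch, assuming the `DESIRED`/`SUCCESSFUL` column layout that `kubectl get jobs` printed in the Kubernetes versions Kargo deployed at the time; the helper names are ours, not part of this repo:

```sh
# Illustrative helpers, not part of this repo.
# jobs_pending reads `kubectl get jobs` output on stdin and prints how many
# jobs have not yet reached SUCCESSFUL == DESIRED (columns 3 and 2).
jobs_pending() {
  awk 'NR > 1 && $2 != $3 { n++ } END { print n + 0 }'
}

# Poll every 30s until all CCP jobs in the openstack namespace are done.
wait_for_ccp_jobs() {
  while [ "$(kubectl --namespace=openstack get jobs | jobs_pending)" -gt 0 ]; do
    echo "CCP jobs still running, waiting..."
    sleep 30
  done
  echo "All CCP jobs completed."
}
```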
- Check Horizon:

```sh
# On the k8s master node check the nodePort of the Horizon service
HORIZON_PORT=$(kubectl --namespace=openstack get svc/horizon -o go-template='{{(index .spec.ports 0).nodePort}}')
echo $HORIZON_PORT
# Access Horizon via the nodePort
curl -i -s $ANY_K8S_NODE_IP:$HORIZON_PORT
```
## Working with kubernetes

- Log in to one of your kube-master nodes and run:

```sh
# List images in the registry
curl -s 127.0.0.1:31500/v2/_catalog | python -mjson.tool
# Check CCP jobs status
kubectl --namespace=openstack get jobs
# Check CCP pods
kubectl --namespace=openstack get pods -o wide
```
- Troubleshooting:

```sh
# Get logs from a pod
kubectl --namespace=openstack logs $POD_NAME
# Exec a command in a pod
kubectl --namespace=openstack exec $POD_NAME -- cat /etc/resolv.conf
kubectl --namespace=openstack exec $POD_NAME -- curl http://etcd-client:2379/health
# Run a container
docker run -t -i 127.0.0.1:31500/mcp/neutron-dhcp-agent /bin/bash
```
- Network checker:

```sh
cd ~/mcp
./deploy-netchecker.sh
# or in the openstack namespace
./deploy-netchecker.sh openstack
```
- CCP:

```sh
# Run bash in one of the containers
docker run -t -i 127.0.0.1:31500/mcp/nova-base /bin/bash
# Inside the container, export credentials
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://keystone:35357
# Run CLI commands
openstack service list
neutron agent-list
```