Deploy a Production Ready Kubernetes Cluster on bare metal or raw VMs - This is a clone of https://github.com/kubernetes-sigs/kubespray.git with a kitten twist.

vagrant-k8s

Scripts to create a libvirt lab with Vagrant and prepare the nodes for Kubernetes deployment with Kargo.

Requirements

  • libvirt
  • vagrant
  • vagrant-libvirt plugin (vagrant plugin install vagrant-libvirt)
  • $USER should be able to connect to libvirt (test with virsh list --all)
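The requirements above can be verified with a short pre-flight check before bringing the lab up. This is only a sketch: it reports what is present and changes nothing.

```shell
# Pre-flight sketch: report whether the required tools are present.
# Nothing here modifies the system; it only checks and reports.
missing=""
for cmd in virsh vagrant; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
    missing="$missing $cmd"
  fi
done
# The plugin and libvirt access can only be checked when
# vagrant and libvirt are actually installed.
if command -v vagrant >/dev/null 2>&1; then
  vagrant plugin list | grep -q vagrant-libvirt || echo "vagrant-libvirt plugin: MISSING"
fi
if command -v virsh >/dev/null 2>&1; then
  virsh list --all >/dev/null 2>&1 && echo "libvirt access: OK" || echo "libvirt access: FAILED"
fi
```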

Vagrant lab preparation

  • Optionally change the default IP pool for Vagrant networks:
export VAGRANT_POOL="10.100.0.0/16"
  • Clone this repo
git clone https://github.com/adidenko/vagrant-k8s
cd vagrant-k8s
  • Prepare the virtual lab:
vagrant up
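Once vagrant up completes, a quick way to confirm the VMs came up is vagrant status, which lists every machine defined in the Vagrantfile and its state. The sketch below is guarded so it degrades gracefully when run outside the lab directory or on a host without Vagrant.

```shell
# Sketch: verify the lab after "vagrant up" (guarded so it only
# queries Vagrant where the tool is available).
lab_checked=0
if command -v vagrant >/dev/null 2>&1; then
  # Lists every VM defined in the Vagrantfile and its current state.
  vagrant status
else
  echo "vagrant not available here; run 'vagrant status' in the repo directory"
fi
lab_checked=1
```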

Deployment on a lab

  • Log in to the master node and switch to root:
vagrant ssh $USER-k8s-01
sudo su -
  • Clone this repo
git clone https://github.com/adidenko/vagrant-k8s ~/mcp
  • Install the required software and pull the needed repos. If you are not running on the Vagrant lab, modify the script: you will need to create the nodes list manually and clone the microservices and microservices-repos repositories (see ccp-pull.sh for details)
cd ~/mcp
./bootstrap-master.sh
  • Check the nodes list and make sure you have SSH access to all of them
cd ~/mcp
cat nodes
ansible all -m ping -i nodes_to_inv.py
  • Deploy k8s using kargo playbooks
cd ~/mcp
./deploy-k8s.kargo.sh
  • Deploy OpenStack CCP:
cd ~/mcp
./deploy-ccp.sh
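A note on the inventory used above: nodes_to_inv.py is an Ansible dynamic inventory script. By Ansible's convention, such a script prints a JSON inventory when invoked with --list (and per-host variables for --host &lt;name&gt;). The sketch below writes and validates a minimal example of that JSON shape; the group and host names are illustrative, not taken from the actual script.

```shell
# Minimal example of the JSON an Ansible dynamic inventory script
# emits for --list. Host and group names below are illustrative only.
cat > /tmp/example-inventory.json <<'EOF'
{
  "all": {
    "hosts": ["node1", "node2", "node3"]
  },
  "_meta": {
    "hostvars": {
      "node1": { "ansible_ssh_host": "10.100.0.2" }
    }
  }
}
EOF
# Validate that the file is well-formed JSON (python3 with a
# fallback to python, depending on what the host provides).
if command -v python3 >/dev/null 2>&1; then PY=python3; else PY=python; fi
"$PY" -m json.tool < /tmp/example-inventory.json >/dev/null && echo "inventory JSON: valid"
```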

Working with kubernetes

  • Log in to one of your kube-master nodes (see /root/kargo/inventory/inventory.cfg on the master node) and run:
# List images in registry
curl -s 127.0.0.1:31500/v2/_catalog | python -mjson.tool

# Check CCP jobs status
kubectl --namespace=openstack get jobs

# Check CCP pods
kubectl --namespace=openstack get pods -o wide
  • Troubleshooting
# Get logs from a pod
kubectl --namespace=openstack logs $POD_NAME

# Execute a command in a pod
kubectl --namespace=openstack exec $POD_NAME -- cat /etc/resolv.conf
kubectl --namespace=openstack exec $POD_NAME -- curl http://etcd-client:2379/health

# Run a container
docker run -t -i 127.0.0.1:31500/mcp/neutron-dhcp-agent /bin/bash
  • Network checker
cd ~/mcp
./deploy-netchecker.sh
# or in the openstack namespace
./deploy-netchecker.sh openstack
  • CCP
# Run bash in one of the containers
docker run -t -i 127.0.0.1:31500/mcp/nova-base /bin/bash

# Inside the container, export credentials
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://keystone:35357

# Run CLI commands
openstack service list
neutron agent-list
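The local registry at 127.0.0.1:31500 used throughout these examples speaks the Docker Registry HTTP API v2, so besides the _catalog query shown earlier you can also list the tags of any image it reports. A sketch, guarded so it only queries when the registry is actually reachable; the image name is one from the examples above.

```shell
# List tags for an image via the Docker Registry HTTP API v2.
# Endpoint format: /v2/<image-name>/tags/list
REGISTRY=127.0.0.1:31500
IMAGE=mcp/neutron-dhcp-agent   # illustrative; pick any name from _catalog
if curl -s --connect-timeout 2 "http://$REGISTRY/v2/" >/dev/null 2>&1; then
  curl -s "http://$REGISTRY/v2/$IMAGE/tags/list" | python -mjson.tool
else
  echo "registry $REGISTRY not reachable from here"
fi
registry_example_done=1
```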