[![Build Status](https://travis-ci.org/kubespray/setup-kubernetes.svg)](https://travis-ci.org/kubespray/setup-kubernetes)
kubernetes-ansible
========
This project allows you to:
- Install and configure a **Multi-Master/HA Kubernetes** cluster.
- Choose the **network plugin** to be used within the cluster.
- Use a **set of roles** to install applications on the k8s cluster.
- Create new roles for apps with a **flexible method**.
Linux distributions tested:
* **Debian** Wheezy, Jessie
* **Ubuntu** 14.10, 15.04, 15.10
* **Fedora** 23
* **CentOS** 7 (Currently with flannel only)
### Requirements
* The target servers must have **access to the Internet** in order to pull Docker images.
* The **firewalls are not managed** by these playbooks; you'll need to implement your own rules as you usually do. To avoid any issues during deployment, you should disable your firewall.
* **Copy your ssh keys** to all the servers that are part of your inventory (see the example after this list).
* **Ansible v2.x and python-netaddr**
* Basic knowledge of Ansible. Please refer to the [Ansible documentation](http://www.ansible.com/how-ansible-works)
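
A minimal sketch of preparing those requirements, assuming a Debian/Ubuntu control machine, root SSH access, and the example hostnames used in the inventories below:
```
# copy your SSH key to every server in the inventory and disable its firewall
# (ufw on Debian/Ubuntu, firewalld on Fedora/CentOS); hostnames are the example ones
for node in node1 node2 node3 node4 node5 node6; do
  ssh-copy-id root@$node
  ssh root@$node 'ufw disable || { systemctl stop firewalld; systemctl disable firewalld; }'
done

# install Ansible v2.x and python-netaddr on the machine running ansible-playbook
pip install 'ansible>=2.0.0' netaddr
```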
### Components
* [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.1.7
* [etcd](https://github.com/coreos/etcd/releases) v2.2.4
* [calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.16.0
* [flanneld](https://github.com/coreos/flannel/releases) v0.5.5
* [weave](http://weave.works/) v1.4.4
* [docker](https://www.docker.com/) v1.9.1

Quickstart
-------------------------
The following steps will quickly set up a Kubernetes cluster with the default configuration.
These defaults are fine for test purposes.

Edit the inventory according to the number of servers
```
[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4
node5
node6

[k8s-cluster:children]
kube-node
kube-master
```
Run the playbook
```
ansible-playbook -i inventory/inventory.cfg cluster.yml -u root
```
You can jump directly to "*Available apps, installation procedure*"
Ansible
-------------------------
### Variables
The main variables to change are located in the file ```inventory/group_vars/all.yml```.
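
For illustration, a hedged excerpt; the variable names below are the ones referenced later in this document, but their default values and exact placement in ```inventory/group_vars/all.yml``` may differ in your copy:
```
## illustrative excerpt -- values are examples, not authoritative defaults
kube_network_plugin: flannel        # flannel, calico or weave (see "Network Plugin")
# peer_with_router: false           # Calico BGP peering with border routers
# apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"   # external apiserver LB
```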
### Inventory
Below is an example of an inventory.
```
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4
node5
node6

[k8s-cluster:children]
kube-node
kube-master
```
### Playbook
```
---
- hosts: k8s-cluster
  roles:
    - { role: adduser, tags: adduser }
    - { role: download, tags: download }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: etcd, tags: etcd }
    - { role: docker, tags: docker }
    - { role: kubernetes/node, tags: node }
    - { role: network_plugin, tags: network }
    - { role: dnsmasq, tags: dnsmasq }

- hosts: kube-master
  roles:
    - { role: kubernetes/master, tags: master }
```
### Run
It is possible to define variables for different environments.
For instance, to deploy the cluster on the 'dev' environment, run the following command.
```
ansible-playbook -i inventory/dev/inventory.cfg cluster.yml -u root
```
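The layout this implies is a per-environment inventory directory; a hedged sketch (the 'dev' paths are inferred from the command above, and whether 'dev' carries its own group_vars is an assumption):
```
inventory/
├── group_vars/
│   └── all.yml          # default variables (see "Variables" above)
├── inventory.cfg        # default inventory
└── dev/
    ├── inventory.cfg    # hosts for the 'dev' environment
    └── group_vars/
        └── all.yml      # variable overrides for 'dev'
```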
Kubernetes
-------------------------
### Multi master notes
* You can choose where to install the master components. If you want your master node to act both as a master (apiserver, scheduler, controller-manager) and as a node (i.e. accept workloads and run pods), the server must be present in both the 'kube-master' and 'kube-node' groups.

* For safety reasons, you should have at least two master nodes and three etcd servers.
* Kube-proxy doesn't support multiple apiservers on startup ([Issue 18174](https://github.com/kubernetes/kubernetes/issues/18174)), so an external load balancer needs to be configured. In order to do so, set the variables '**loadbalancer_apiserver**' and '**apiserver_loadbalancer_domain_name**' (see the sketch below).
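
A hedged sketch of what those variables might look like in ```inventory/group_vars/all.yml```; the domain, address and port are placeholders, and the exact structure of **loadbalancer_apiserver** may differ in your version:
```
## illustrative values only -- adjust to your external load balancer
apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"
loadbalancer_apiserver:
  address: 10.10.10.100
  port: 443
```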
### Network Plugin
You can choose between 3 network plugins; only one may be used at a time.
* **flannel**: gre/vxlan (layer 2) networking. ([official docs](https://github.com/coreos/flannel))
* **calico**: bgp (layer 3) networking. ([official docs](http://docs.projectcalico.org/en/0.13/))
* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. ([official docs](http://weave.works/docs/))
The choice is defined with the variable **kube_network_plugin**.
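
For example, to switch to Calico, a one-line change in the variables file described above (assuming the value simply matches the plugin name):
```
kube_network_plugin: calico
```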
### Check cluster status
#### Kubernetes components
* Check the status of the processes
```
systemctl status kubelet
```
* Check the logs
```
journalctl -ae -u kubelet
```
* Check the NAT rules
```
iptables -nLv -t nat
```
For the master nodes, check the Docker logs of the apiserver container
```
docker logs [apiserver docker id]
```
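For example, a hedged way to locate that container and follow its logs (the grep pattern is an assumption about the container/image name):
```
# find the apiserver container id (grep pattern is an assumption)
docker ps | grep -i apiserver
# then follow its logs
docker logs -f [apiserver docker id]
```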
### Available apps, installation procedure
There are two ways of installing new apps
#### Ansible galaxy
Additional apps can be installed with ```ansible-galaxy```.
You'll need to edit the file '*requirements.yml*' in order to choose the needed apps.
The list of available apps can be found [here](https://github.com/ansibl8s).
For instance, it is **strongly recommended** to install a DNS server which resolves Kubernetes service names.
In order to use this role, you'll need the following entries in the file '*requirements.yml*'.
Please refer to the [k8s-kubedns readme](https://github.com/ansibl8s/k8s-kubedns) for additional info.
```
- src: https://github.com/ansibl8s/k8s-common.git
  path: roles/apps
  # version: v1.0

- src: https://github.com/ansibl8s/k8s-kubedns.git
  path: roles/apps
  # version: v1.0
```
**Note**: the role common is required by all the apps and provides the tasks and libraries needed.
First, empty the apps directory
```
rm -rf roles/apps/*
```
Then download the roles with ansible-galaxy
```
ansible-galaxy install -r requirements.yml
```
Finally update the playbook ```apps.yml``` with the chosen roles, and run it
```
...
- hosts: kube-master
  roles:
    - { role: apps/k8s-kubedns, tags: ['kubedns', 'apps'] }
...
```
```
ansible-playbook -i inventory/inventory.cfg apps.yml -u root
```
#### Git submodules
Alternatively, the roles can be installed as git submodules.
That way it is easier to make changes and commit them.
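
A minimal sketch of that approach, assuming the same ansibl8s repositories and the ```roles/apps``` layout used above:
```
# add the apps as submodules under roles/apps (target paths are an assumption)
git submodule add https://github.com/ansibl8s/k8s-common.git roles/apps/k8s-common
git submodule add https://github.com/ansibl8s/k8s-kubedns.git roles/apps/k8s-kubedns
git submodule update --init
```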
### Networking
#### Calico
Check if the calico-node container is running
```
docker ps | grep calico
```
The **calicoctl** command allows you to check the status of the network workloads.
* Check the status of Calico nodes
```
calicoctl status
```
* Show the configured network subnet for containers
```
calicoctl pool show
```
* Show the workloads (IP addresses of containers and where they are located)
```
calicoctl endpoint show --detail
```
##### Optional: BGP peering with border routers
In some cases you may want to route the pods' subnet so that NAT is not needed on the nodes,
for instance if you have a cluster spread over different locations and you want your pods to talk to each other no matter where they are located.
The following variables need to be set:
**peer_with_router** enables peering with the border router of the datacenter (default value: false).
You'll also need to edit the inventory and add a hostvar **local_as** per node.
```
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
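To enable the peering itself, a hedged sketch of the corresponding group variable (whether it belongs in ```inventory/group_vars/all.yml``` may differ in your setup):
```
## illustrative only
peer_with_router: true
```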
#### Flannel
* The flannel configuration file should have been created here
```
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.0.0/18
FLANNEL_SUBNET=10.233.16.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
* Check if the network interface has been created
```
ip a show dev flannel.1
4: flannel.1: < BROADCAST , MULTICAST , UP , LOWER_UP > mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
inet 10.233.16.0/18 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e0f3:a7ff:fe0f:bfcb/64 scope link
valid_lft forever preferred_lft forever
```
* Docker must be configured with a bridge ip in the flannel subnet.
```
ps aux | grep docker
root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
```
* Try to run a container and check its ip address
```
kubectl run test --image=busybox --command -- tail -f /dev/null
replicationcontroller "test" created
kubectl describe po test-34ozs | grep ^IP
IP: 10.233.16.2
```
```
kubectl exec test-34ozs -- ip a show dev eth0
8: eth0@if9: < BROADCAST , MULTICAST , UP , LOWER_UP , M-DOWN > mtu 1450 qdisc noqueue
link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff
inet 10.233.16.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fee9:2b03/64 scope link tentative flags 08
valid_lft forever preferred_lft forever
```
Congrats! Now you can walk through [kubernetes basics](http://kubernetes.io/v1.1/basicstutorials.html).