kubernetes-ansible
========
Install and configure a kubernetes cluster including network plugin.

### Requirements

Tested on **Debian Jessie** and **Ubuntu** (14.10, 15.04, 15.10).

* Ansible v1.9.x
* The target servers must have access to the Internet in order to pull docker images.
* Firewalls are not managed: you'll need to implement your own rules as usual.
### Components

* [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.1.3
* [etcd](https://github.com/coreos/etcd/releases) v2.2.2
* [calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.13.0
* [flanneld](https://github.com/coreos/flannel/releases) v0.5.5
* [docker](https://www.docker.com/) v1.9.1

Quickstart
-------------------------
The following steps will quickly set up a kubernetes cluster with default configuration.
These defaults are good for test purposes.

Edit the inventory according to the number of servers:
```
[downloader]
10.115.99.1

[kube-master]
10.115.99.31

[etcd]
10.115.99.31
10.115.99.32
10.115.99.33

[kube-node]
10.115.99.32
10.115.99.33

[k8s-cluster:children]
kube-node
kube-master
```
Run the playbook:
```
ansible-playbook -i inventory/inventory.cfg cluster.yml -u root
```

You can jump directly to "*Available apps, installation procedure*".

Ansible
-------------------------
### Download binaries
A role allows downloading the required binaries. They will be stored in a directory defined by the variable
**'local_release_dir'** (by default /tmp).
Please ensure that you have enough disk space there (about **300M**).

**Note**: Whenever you need to change the version of a software component, you'll have to erase the content of this directory.
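For instance, assuming you want the binaries stored on a bigger partition than /tmp, the variable can be overridden in the group vars (the path below is just an example):

```yaml
# Example override -- the path is a placeholder, pick any directory
# with ~300M free; remember to wipe it when bumping component versions.
local_release_dir: /var/lib/kube-release
```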

### Variables
The main variables to change are located in the file ```inventory/group_vars/all.yml```.

### Inventory
Below is an example of an inventory.
Note: the bgp vars **'local_as'** and **'peers'** are not mandatory if the var **'peer_with_router'** is set to false.
By default this variable is set to false, therefore all the nodes are configured in **'node-mesh'** mode.
In node-mesh mode each node peers with all the other nodes in order to exchange routes.
```
[downloader]
node1 ansible_ssh_host=10.99.0.26

[kube-master]
node1 ansible_ssh_host=10.99.0.26
node2 ansible_ssh_host=10.99.0.27

[etcd]
node1 ansible_ssh_host=10.99.0.26
node2 ansible_ssh_host=10.99.0.27
node3 ansible_ssh_host=10.99.0.4

[kube-node]
node2 ansible_ssh_host=10.99.0.27
node3 ansible_ssh_host=10.99.0.4
node4 ansible_ssh_host=10.99.0.5
node5 ansible_ssh_host=10.99.0.36
node6 ansible_ssh_host=10.99.0.37

[paris]
node1 ansible_ssh_host=10.99.0.26
node3 ansible_ssh_host=10.99.0.4 local_as=xxxxxxxx
node4 ansible_ssh_host=10.99.0.5 local_as=xxxxxxxx

[new-york]
node2 ansible_ssh_host=10.99.0.27
node5 ansible_ssh_host=10.99.0.36 local_as=xxxxxxxx
node6 ansible_ssh_host=10.99.0.37 local_as=xxxxxxxx

[k8s-cluster:children]
kube-node
kube-master
```
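If **'peer_with_router'** is enabled for groups like the ones above, the bgp vars end up in the matching group vars file. A sketch of what that might look like (the AS numbers are placeholders and the exact structure of the peers list is an assumption here; check the calico role's defaults before using it):

```yaml
# group_vars/paris.yml -- hypothetical sketch, not the role's canonical format
peer_with_router: true    # disable node-mesh, peer with the local routers instead
local_as: 65100           # placeholder AS number shared by the nodes of this site
peers:                    # routers the nodes should establish bgp sessions with
  - router_id: "10.99.0.2"
    as: 65000             # placeholder AS number of the router
```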

### Playbook
```
---
- hosts: downloader
  sudo: no
  roles:
    - { role: download, tags: download }

- hosts: k8s-cluster
  roles:
    - { role: etcd, tags: etcd }
    - { role: docker, tags: docker }
    - { role: dnsmasq, tags: dnsmasq }
    - { role: network_plugin, tags: ['calico', 'flannel', 'network'] }

- hosts: kube-master
  roles:
    - { role: kubernetes/master, tags: master }

- hosts: kube-node
  roles:
    - { role: kubernetes/node, tags: node }
```
### Run
It is possible to define variables for different environments.
For instance, in order to deploy the cluster on the 'dev' environment, run the following command:
```
ansible-playbook -i inventory/dev/inventory.cfg cluster.yml -u root
```
Kubernetes
-------------------------
### Multi master notes
* You can choose where to install the master components. If you want your master node to act both as master (api, scheduler, controller) and node (e.g. accept workloads, create pods, ...),
the server address has to be present in both groups 'kube-master' and 'kube-node'.
* Almost all kubernetes components run in pods, except *kubelet*. These pods are managed by the kubelet, which ensures they're always running.
* For safety reasons, you should have at least two master nodes and three etcd servers.
* Kube-proxy doesn't support multiple apiservers on startup ([Issue 18174](https://github.com/kubernetes/kubernetes/issues/18174)). An external loadbalancer needs to be configured.
In order to do so, the variables '**loadbalancer_apiserver**' and '**apiserver_loadbalancer_domain_name**' have to be set.
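Those two variables might be set like this (the address and port are placeholders, and the exact shape of **'loadbalancer_apiserver'** is an assumption to be checked against the group vars defaults):

```yaml
# Hypothetical values -- point them at your own external loadbalancer
apiserver_loadbalancer_domain_name: "lb-apiserver.example.local"
loadbalancer_apiserver:
  address: 10.99.0.100
  port: 8383
```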
### Network Overlay
You can choose between 2 network plugins. Only one must be chosen.

* **flannel**: gre/vxlan (layer 2) networking. ([official docs](https://github.com/coreos/flannel))
* **calico**: bgp (layer 3) networking. ([official docs](http://docs.projectcalico.org/en/0.13/))

The choice is defined with the variable '**kube_network_plugin**'.
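For instance, to pick calico (one of the two values above), set it in ```inventory/group_vars/all.yml```:

```yaml
# Must be either 'calico' or 'flannel'
kube_network_plugin: calico
```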
### Expose a service
There are several loadbalancing solutions.
The ones I found suitable for kubernetes are [Vulcand](http://vulcand.io/) and [Haproxy](http://www.haproxy.org/).

My cluster is working with haproxy, and kubernetes services are configured with the loadbalancing type '**nodePort**':
each node opens the same tcp port and forwards the traffic to the target pod, wherever it is located.
Then Haproxy can be configured to query the kubernetes api in order to loadbalance on the proper tcp port on the nodes.
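To illustrate, a kubernetes service of type NodePort might be declared like this (names and port numbers are placeholders):

```yaml
# Hypothetical service definition -- every node will open tcp port 30080
# and forward to a 'myapp' pod, wherever it runs.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp          # pods carrying this label receive the traffic
  ports:
    - port: 80          # port of the service inside the cluster
      targetPort: 8080  # port exposed by the container
      nodePort: 30080   # port opened on every node, for haproxy to target
```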

Please refer to the kubernetes documentation on [Services](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md).

### Check cluster status
#### Kubernetes components
* Check the status of the processes
```
systemctl status kubelet
```
* Check the logs
```
journalctl -ae -u kubelet
```
* Check the NAT rules
```
iptables -nLv -t nat
```

For the master nodes, you'll have to check the docker logs of the apiserver
```
docker logs [apiserver docker id]
```
### Available apps, installation procedure
There are two ways of installing new apps.
#### Ansible galaxy
Additional apps can be installed with ```ansible-galaxy```.
You'll need to edit the file '*requirements.yml*' in order to choose the needed apps.
The list of available apps can be found [here](https://github.com/ansibl8s).
For instance it is **strongly recommended** to install a dns server which resolves kubernetes service names.
In order to use this role you'll need the following entries in the file '*requirements.yml*'.
Please refer to the [k8s-kubedns readme](https://github.com/ansibl8s/k8s-kubedns) for additional info.
```
- src: https://github.com/ansibl8s/k8s-common.git
  path: roles/apps
  # version: v1.0

- src: https://github.com/ansibl8s/k8s-kubedns.git
  path: roles/apps
  # version: v1.0
```

**Note**: the common role is required by all the apps and provides the needed tasks and libraries.

Then empty the apps directory
```
rm -rf roles/apps/*
```
Then download the roles with ansible-galaxy
```
ansible-galaxy install -r requirements.yml
```
#### Git submodules
Alternatively the roles can be installed as git submodules.
This way is easier if you want to make some changes and commit them.
You can list the available submodules with the following command:
```
grep path .gitmodules | sed 's/.*= //'
```
In order to install the dns addon, you'll need to follow these steps:
```
git submodule init roles/apps/k8s-common roles/apps/k8s-kubedns
git submodule update
```
Finally update the playbook ```apps.yml``` with the chosen roles, and run it
```
...
- hosts: kube-master
  roles:
    - { role: apps/k8s-kubedns, tags: ['kubedns', 'apps'] }
...
```
```
ansible-playbook -i environments/dev/inventory apps.yml -u root
```
#### Calico networking
Check if the calico-node container is running
```
docker ps | grep calico
```
The **calicoctl** command allows checking the status of the network workloads.
* Check the status of Calico nodes
```
calicoctl status
```
* Show the configured network subnet for containers
```
calicoctl pool show
```
* Show the workloads (ip addresses of containers and where they are located)
```
calicoctl endpoint show --detail
```
#### Flannel networking
Congrats! Now you can walk through [kubernetes basics](http://kubernetes.io/v1.1/basicstutorials.html).