# Ansible
## Installing Ansible
Kubespray supports multiple ansible versions and ships different `requirements.txt` files for them.
Depending on your available python version you may be limited in choosing which ansible version to use.
It is recommended to deploy the ansible version used by kubespray into a python virtual environment.
```ShellSession
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
ANSIBLE_VERSION=2.12
virtualenv --python=$(which python3) $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements-$ANSIBLE_VERSION.txt
test -f requirements-$ANSIBLE_VERSION.yml && \
  ansible-galaxy role install -r requirements-$ANSIBLE_VERSION.yml && \
  ansible-galaxy collection install -r requirements-$ANSIBLE_VERSION.yml
```
### Ansible Python Compatibility
Based on the table below and the python version available on your ansible host, you should choose the appropriate ansible version to use with kubespray.
| Ansible Version | Python Version |
| --------------- | -------------- |
| 2.9 | 2.7,3.5-3.8 |
| 2.10 | 2.7,3.5-3.8 |
| 2.11 | 2.7,3.5-3.9 |
| 2.12 | 3.8-3.10 |
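
To see where your host falls in the table above, you can query its Python version before picking a requirements file; a minimal sketch:

```ShellSession
# Print this host's Python major.minor version; match it against the
# compatibility table above to pick an ansible version and requirements file.
PYVER=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
echo "Detected Python ${PYVER}"
```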
## Inventory
The inventory is composed of 3 groups:
* **kube_node** : list of kubernetes nodes where the pods will run.
* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purpose.
Note: do not modify the children of _k8s_cluster_, like putting
the _etcd_ group into _k8s_cluster_, unless you are certain you
want to do that and the group is fully contained in the latter:
```ShellSession
etcd ⊂ k8s_cluster => kube_node ∩ etcd = etcd
```
When _kube_node_ contains _etcd_, you define your etcd cluster to also be schedulable for Kubernetes workloads.
If you want it standalone, make sure those groups do not intersect.
If you want the server to act both as control-plane and node, the server must be defined
in both the _kube_control_plane_ and _kube_node_ groups. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.
There are also two special groups:
* **calico_rr** : explained for [advanced Calico networking cases](/docs/calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6

[k8s_cluster:children]
kube_node
kube_control_plane
```
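
To confirm that the groups in your inventory resolve the way you expect, Ansible's built-in `ansible-inventory` command can print the resolved group tree without touching any host (the inventory path below is illustrative):

```ShellSession
# Show the resolved group/host tree of the inventory; makes no changes.
ansible-inventory -i inventory/mycluster/hosts.ini --graph
```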
## Group vars and overriding variables precedence
The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override them, one should use
the `-e` runtime flags (the simplest way) or other layers described in the docs.
Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
Layer | Comment
------|--------
**role defaults** | provides best UX to override things for Kubespray deployments
inventory vars | Unused
**inventory group_vars** | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unused
playbook host_vars | Unused
**host facts** | Kubespray overrides for internal roles' logic, like state flags
play vars | Unused
play vars_prompt | Unused
play vars_files | Unused
registered vars | Unused
set_facts | Kubespray overrides those, for some places
**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
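
For example, overriding a single setting for one run is done with extra vars, which always take precedence over every other layer (the variable name and value below are purely illustrative):

```ShellSession
# Override a group_vars setting for this run only.
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e kube_version=v1.24.0

# Or collect several overrides in a YAML file and pass it with '@'.
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @my-overrides.yml
```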
## Ansible tags
The following tags are defined in playbooks:
| Tag name | Used for
|--------------------------------|---------
| annotate | Create kube-router annotation
| apps | K8s apps definitions
| asserts | Check tasks for download role
| aws-ebs-csi-driver | Configuring csi driver: aws-ebs
| azure-csi-driver | Configuring csi driver: azure
| bastion | Setup ssh config for bastion
| bootstrap-os | Anything related to host OS configuration
| calico | Network plugin Calico
| calico_rr | Configuring Calico route reflector
| canal | Network plugin Canal
| cephfs-provisioner | Configuring CephFS
| cert-manager | Configuring certificate manager for K8s
| cilium | Network plugin Cilium
| cinder-csi-driver | Configuring csi driver: cinder
| client | Kubernetes clients role
| cloud-provider | Cloud-provider related tasks
| cluster-roles | Configuring cluster wide application (psp ...)
| cni | CNI plugins for Network Plugins
| containerd | Configuring containerd engine runtime for hosts
| container_engine_accelerator | Enable nvidia accelerator for runtimes
| container-engine | Configuring container engines
| container-runtimes | Configuring container runtimes
| coredns | Configuring coredns deployment
| crio | Configuring crio container engine for hosts
| crun | Configuring crun runtime
| csi-driver | Configuring csi driver
| dashboard | Installing and configuring the Kubernetes Dashboard
| dns | Remove dns entries when resetting
| docker | Configuring docker engine runtime for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
| etcd-secrets | Configuring etcd certs/keys
| etchosts | Configuring /etc/hosts entries for hosts
| external-cloud-controller | Configure cloud controllers
| external-openstack | Cloud controller : openstack
| external-provisioner | Configure external provisioners
| external-vsphere | Cloud controller : vsphere
| facts | Gathering facts and misc check results
| files | Remove files when resetting
| flannel | Network plugin flannel
| gce | Cloud-provider GCP
| gcp-pd-csi-driver | Configuring csi driver: gcp-pd
| gvisor | Configuring gvisor runtime
| helm | Installing and configuring Helm
| ingress-controller | Configure ingress controllers
| ingress_alb | AWS ALB Ingress Controller
| init | Windows kubernetes init nodes
| iptables                       | Flush and clear iptables rules when resetting
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| k8s-gen-tokens | Configuring K8s tokens
| kata-containers | Configuring kata-containers runtime
| krew | Install and manage krew
| kubeadm | Roles linked to kubeadm tasks
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kube-vip | Installing and configuring kube-vip
| kubectl | Installing kubectl and bash completion
| kubelet | Configuring kubelet service
| kube-ovn | Network plugin kube-ovn
| kube-router | Network plugin kube-router
| kube-proxy | Configuring static pod kube-proxy
| localhost | Special steps for the localhost (ansible runner)
| local-path-provisioner | Configure External provisioner: local-path
| local-volume-provisioner | Configure External provisioner: local-volume
| macvlan | Network plugin macvlan
| master | Configuring K8s master node role
| metallb | Installing and configuring metallb
| metrics_server | Configuring metrics_server
| netchecker | Installing netchecker K8s app
| network | Configuring networking plugins for K8s
| mounts                         | Unmount kubelet dirs when resetting
| multus | Network plugin multus
| nginx | Configuring LB for kube-apiserver instances
| node | Configuring K8s minion (compute) node role
| nodelocaldns | Configuring nodelocaldns daemonset
| node-label | Tasks linked to labeling of nodes
| node-webhook                   | Tasks linked to webhook (granting access to resources)
| nvidia_gpu | Enable nvidia accelerator for runtimes
| oci | Cloud provider: oci
| persistent_volumes | Configure csi volumes
| persistent_volumes_aws_ebs_csi | Configuring csi driver: aws-ebs
| persistent_volumes_cinder_csi | Configuring csi driver: cinder
| persistent_volumes_gcp_pd_csi | Configuring csi driver: gcp-pd
| persistent_volumes_openstack | Configuring csi driver: openstack
| policy-controller | Configuring Calico policy controller
| post-remove | Tasks running post-remove operation
| post-upgrade | Tasks running post-upgrade operation
| pre-remove | Tasks running pre-remove operation
| pre-upgrade | Tasks running pre-upgrade operation
| preinstall | Preliminary configuration steps
| registry | Configuring local docker registry
| reset                          | Tasks run during the node reset
| resolvconf | Configuring /etc/resolv.conf for hosts/apps
| rbd-provisioner                | Configure External provisioner: rbd
| services | Remove services (etcd, kubelet etc...) when resetting
| snapshot | Enabling csi snapshot
| snapshot-controller | Configuring csi snapshot controller
| upgrade                        | Upgrading, e.g. container images/binaries
| upload | Distributing images/binaries across hosts
| vsphere-csi-driver | Configuring csi driver: vsphere
| weave | Network plugin Weave
| win_nodes | Running windows specific tasks
| youki | Configuring youki runtime
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
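
You can also ask `ansible-playbook` itself which tags a play defines, without running anything:

```ShellSession
# Print all tags attached to the plays/tasks in cluster.yml; no host is touched.
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --list-tags
```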
## Example commands
Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related components, and without trying to upload the containers to the K8s cluster nodes:
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
```
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
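
A safe way to check what a tag filter will actually select is `--list-tasks`, which prints the matching tasks without executing anything:

```ShellSession
# Dry-check a tag filter: list the tasks that would run, then exit.
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
    --tags preinstall,facts --skip-tags=download,bootstrap-os --list-tasks
```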
## Bastion host
If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
```ini
[bastion]
bastion ansible_host=x.x.x.x
```
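
For reference, what the bastion entry achieves is equivalent to plain Ansible's jump-host mechanism via SSH `ProxyCommand`. A hedged sketch using Ansible's generic `ansible_ssh_common_args` variable (the user and IP are placeholders; Kubespray derives the real configuration from the `bastion` group for you, so this is illustration only):

```ini
# Generic Ansible jump-host mechanism, shown for illustration only --
# Kubespray sets this up from the bastion entry above.
[k8s_cluster:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@x.x.x.x"'
```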
For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
## Mitogen
Mitogen support is deprecated, please see the [mitogen related docs](/docs/mitogen.md) for usage and reasons for deprecation.
## Beyond ansible 2.9
The Ansible project has decided, in order to ease their maintenance burden, to split into
two projects which are now joined under the Ansible umbrella.
Ansible-base (the 2.10.x branch) contains just the ansible language implementation, while the
ansible modules that were previously bundled into a single repository are part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.
**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre 2.10) version and install the new one.
```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
```