# Kubernetes on OpenStack with Terraform

Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
OpenStack.

## Status

This will install a Kubernetes cluster on an OpenStack cloud. It should work on
most modern installs of OpenStack that support the basic services.
## Approach

The Terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [Python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by the main Ansible
script to actually install Kubernetes and stand up the cluster.
### Networking

The configuration includes creating a private subnet with a router to the
external network. It will allocate floating IPs from a pool and assign them to
the hosts where that makes sense. You have the option of creating bastion
hosts inside the private subnet to access the nodes there.
### Kubernetes Nodes

You can create many different Kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.

- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes

Note that the Ansible script will report an invalid configuration if you wind
up with an even number of etcd instances, since etcd clusters should have an
odd number of members.
### GlusterFS

The Terraform configuration supports provisioning of an optional GlusterFS
shared file system backed by a separate set of VMs. To enable this, you need to
specify:

- the number of Gluster hosts
- the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- other properties related to provisioning the hosts

Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or Red Hat based images;
Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it
through binaries available in hyperkube v1.4.3_coreos.0 or later.
## Requirements

- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating IP pool created
- you have security groups enabled
- you have a key pair generated that can be used to secure the new hosts
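
One way to sanity-check these prerequisites, assuming the `openstack` CLI is installed and your credentials are sourced (exact output varies with your cloud and client version):

```bash
# Check that a suitable OS image is loaded into Glance
$ openstack image list

# Floating IPs are allocated from an external network; list those
$ openstack network list --external

# Check that your key pair exists
$ openstack keypair list

# Check that security groups are available
$ openstack security group list
```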
## Module Architecture

The configuration is divided into three modules:

- Network
- IPs
- Compute

The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by a quota or if you have
any external references to the floating IPs (e.g. DNS) that would otherwise
have to be updated.

You can force the use of existing floating IPs by setting the compute variables
in `kubespray.tf`, for example:

```
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```
## Terraform

Terraform will be used to provision all of the OpenStack resources. It is also
used to deploy and provision the software requirements.

### Prep

#### OpenStack
Ensure your OpenStack **Identity v2** credentials are loaded in environment
variables. This can be done by downloading a credentials `.rc` file from your
OpenStack dashboard and sourcing it:

```bash
$ source ~/.stackrc
```
Ensure that you have your OpenStack credentials loaded into Terraform
environment variables, likely via a command similar to:

```bash
$ echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_tenant=${OS_TENANT_NAME} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```
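You can quickly verify that the variables were exported, for example:

```bash
$ env | grep ^TF_VAR_
```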
### Terraform Variables

The construction of the cluster is driven by values found in
[variables.tf](variables.tf).

The best way to set these values is to create a file in the project's root
directory called something like `my-terraform-vars.tfvars`. Many of the
variables are obvious. Here is a summary of some of the more interesting
ones:

|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make them easier to track. For example, the first compute resource will be named `example-kubernetes-1`. |
|`network_name` | The name to be given to the internal network that will be generated |
|`dns_nameservers` | An array of DNS name server addresses to be used by hosts in the internal subnet |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`, `flavor_k8s_node`, `flavor_etcd`, `flavor_bastion`, `flavor_gfs_node` | Flavor depends on your OpenStack installation; you can get available flavor IDs through `nova flavor-list` |
|`image`, `image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into Glance. |
|`ssh_user`, `ssh_user_gfs` | The username to SSH into the image with. This usually depends on the image you have selected. |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses. |
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses. |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Number of Kubernetes worker nodes. These can be provisioned with or without floating IP addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one. |
|`number_of_gfs_nodes_no_floating_ip` | Number of Gluster servers to provision |
|`gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
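
For illustration, a minimal `my-terraform-vars.tfvars` might look like the sketch below. The bracketed values are placeholders for cloud-specific names and IDs, and the counts are only an example topology:

```
cluster_name = "example"
network_name = "example-network"
dns_nameservers = ["8.8.8.8"]
floatingip_pool = "[floating-ip-pool-name]"
external_net = "[external-net-uuid]"
public_key_path = "~/.ssh/id_rsa.pub"
image = "[glance-image-name]"
ssh_user = "ubuntu"
number_of_k8s_masters = 1
number_of_k8s_nodes = 2
number_of_bastions = 0
flavor_k8s_master = "[flavor-id]"
flavor_k8s_node = "[flavor-id]"
```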
## Initializing Terraform

Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished with the command:

```bash
$ terraform init contrib/terraform/openstack
```
## Provisioning Cluster with Terraform

You can apply the Terraform config to your cluster with the following command
issued from the project's root directory:

```bash
$ terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an SSH command for Ansible
to use to access your machines by tunneling through the bastion's IP address.
If you want to handle the SSH tunneling to these machines manually, please
delete or move that file. If you want to use it, just leave it there, as
Ansible will pick it up automatically.
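
If you do tunnel manually, a connection through the bastion might look like the following sketch, where the user and IPs come from your own configuration:

```bash
# ProxyCommand forwards the connection to the private node through the bastion
$ ssh -o ProxyCommand="ssh -W %h:%p [ssh-user]@[bastion-floating-ip]" [ssh-user]@[node-internal-ip]
```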
## Destroying Cluster with Terraform

You can destroy a config deployed to your cluster with the following command
issued from the project's root directory:

```bash
$ terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
## Debugging Cluster Provisioning

You can enable debugging output from Terraform by setting `OS_DEBUG` to 1 and
`TF_LOG` to `DEBUG` before running the Terraform command.
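
For example:

```bash
$ OS_DEBUG=1 TF_LOG=DEBUG terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```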
# Running the Ansible Script

Ensure your local ssh-agent is running and your SSH key has been added. This
step is required by the Terraform provisioner:

```bash
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```

Make sure you can connect to the hosts:

```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-etcd-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
If you are deploying a system that needs bootstrapping, like Container Linux by
CoreOS, these hosts might report a state of `FAILED` because Container Linux by
CoreOS does not ship with Python. As long as the state is not `UNREACHABLE`,
this is fine.

If a host fails, try to connect manually via SSH; it could be something as
simple as a stale host key.
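
A stale host key can be cleared from your known hosts file, for example:

```bash
$ ssh-keygen -R [host-floating-ip]
```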
## Configure Cluster variables

Edit `inventory/group_vars/all.yml`:

- Set variable **bootstrap_os** according to the selected image:

```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- Set **bin_dir**:

```
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```
- Set **cloud_provider**:

```
cloud_provider: openstack
```
Edit `inventory/group_vars/k8s-cluster.yml`:

- Set variable **kube_network_plugin** according to the selected networking:

```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider set up appropriate routing
kube_network_plugin: flannel
```
> Flannel works out of the box.
>
> Calico requires allowing the service and pod subnets on the corresponding
> OpenStack Neutron ports.
- Set variable **resolvconf_mode**:

```
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
```
For Calico, configure the OpenStack Neutron ports as described in
[OpenStack](/docs/openstack.md).
## Deploy Kubernetes

```bash
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
```
## Set up local kubectl

1. Install kubectl on your workstation:
   [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
2. Add a route to the internal IP of a master node (if needed):

```
sudo route add [master-internal-ip] gw [router-ip]
```

or

```
sudo route add -net [internal-subnet]/24 gw [router-ip]
```
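On distributions where the legacy `route` command is unavailable, the `ip` tool can be used instead, for example:

```bash
sudo ip route add [internal-subnet]/24 via [router-ip]
```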
3. List the Kubernetes certificates & keys:

```
ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```

4. Get the admin's certificates & key:

```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Configure kubectl:

```
kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
    --certificate-authority=ca.pem

kubectl config set-credentials default-admin \
    --certificate-authority=ca.pem \
    --client-key=admin-key.pem \
    --client-certificate=admin.pem

kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
```

6. Check it:

```
kubectl version
```
If you are using floating IP addresses then you may get this error:

```
Unable to connect to the server: x509: certificate is valid for 10.0.0.6, 10.0.0.6, 10.233.0.1, 127.0.0.1, not 132.249.238.25
```

You can tell kubectl to ignore this condition by adding the
`--insecure-skip-tls-verify` option.
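
For example:

```bash
kubectl --insecure-skip-tls-verify version
```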
## GlusterFS

GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.

Basically you will install GlusterFS as:

```bash
$ ansible-playbook --become -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
# What's next

[Start Hello Kubernetes Service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/)