Merge pull request #2378 from dleske/reorg-inventory-for-opst

Update OpenStack contrib to use per-cluster inventory layout
Spencer Smith 2018-03-09 15:21:21 -05:00 committed by GitHub
commit 2132ec0269
4 changed files with 177 additions and 101 deletions


@@ -17,32 +17,33 @@ to actually install kubernetes and stand up the cluster.
### Networking
The configuration includes creating a private subnet with a router to the
external net. It will allocate floating IPs from a pool and assign them to the
hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to reach nodes without one.
### Kubernetes Nodes
You can create many different Kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.
- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes
Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances, since etcd requires an odd number of members.
### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:
- the number of Gluster hosts (minimum 2)
- the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- other properties related to provisioning the hosts
Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian- or Red Hat-based images.
Container Linux by CoreOS cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.
@@ -50,9 +51,9 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating IP pool created
- you have security groups enabled
- you have a pair of keys generated that can be used to secure the new hosts
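The last prerequisite can be satisfied in several ways; as a sketch, one way is to generate a fresh key pair locally and register the public half with OpenStack (the file and key names below are examples, not mandated by this playbook):

```shell
# Generate an SSH key pair for the new hosts (paths/names are examples)
ssh-keygen -t ed25519 -N "" -f ./cluster_key

# Register the public key with OpenStack so instances can boot with it,
# e.g.: openstack keypair create --public-key ./cluster_key.pub kube-key
```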
## Module Architecture
@@ -67,7 +68,7 @@ any external references to the floating IP (e.g. DNS) that would otherwise have
to be updated.
You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as follows:
```
k8s_master_fips = ["151.101.129.67"]
@@ -75,26 +76,38 @@ k8s_node_fips = ["151.101.129.68"]
```
## Terraform
Terraform will be used to provision all of the OpenStack resources, installing base software as appropriate.
### Configuration
#### Inventory files
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
$ cd inventory/$CLUSTER
$ ln -s ../../contrib/terraform/openstack/hosts
```
This will be the base for subsequent Terraform commands.
#### OpenStack access and credentials
No provider variables are hardcoded inside `variables.tf` because Terraform
supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method
(using `clouds.yaml`), and different OpenStack environments may support
Identity API version 2 or 3.
These are examples and may vary depending on your OpenStack cloud provider;
for an exhaustive list of ways to authenticate to OpenStack with Terraform,
please read the [OpenStack provider documentation](https://www.terraform.io/docs/providers/openstack/).
##### Declarative method (recommended)
The recommended authentication method is to describe credentials in a YAML file `clouds.yaml` that can be stored in:
* the current directory
* `~/.config/openstack`
* `/etc/openstack`
@@ -122,10 +135,11 @@ the one you want to use with the environment variable `OS_CLOUD`:
export OS_CLOUD=mycloud
```
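For illustration, a minimal `clouds.yaml` defining the `mycloud` entry used above might look like the following (all values are placeholders for your own cloud's endpoints and credentials):

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: myuser
      password: mypassword
      project_name: myproject
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```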
##### Openrc method (deprecated)
When using classic environment variables, Terraform uses default `OS_*`
environment variables. A script suitable for your environment may be available
from Horizon under *Project* -> *Compute* -> *Access & Security* -> *API Access*.
With Identity v2:
@@ -180,14 +194,11 @@ unset OS_PROJECT_DOMAIN_ID
set OS_PROJECT_DOMAIN_NAME=Default
```
#### Cluster variables
The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|Variable | Description |
|---------|-------------|
@@ -208,9 +219,9 @@ ones:
|`number_of_gfs_nodes_no_floating_ip` | Number of Gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
#### Terraform state files
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually). To prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/openstack` directory:
@@ -221,49 +232,61 @@ or manually), to prevent you from pushing them accidentally they are in a
You can still add them manually if you want to.
### Initialization
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
$ cd inventory/$CLUSTER
$ terraform init ../../contrib/terraform/openstack
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
$ terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack
```
If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an SSH command for Ansible to
be able to access your machines tunneling through the bastion's IP address. If
you want to manually handle the SSH tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as Ansible will
pick it up automatically.
### Destroying cluster
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
$ terraform destroy -var-file=cluster.tf ../../contrib/terraform/openstack
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:
* remove the destroyed cluster's SSH host keys from your `~/.ssh/known_hosts` file
* clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
### Debugging
You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the Terraform command.
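For example, in the same shell session where you will run Terraform:

```shell
# Enable verbose logging from the OpenStack APIs and Terraform core
export OS_DEBUG=1
export TF_LOG=DEBUG
```

Remember to unset `TF_LOG` afterwards; the output is extremely verbose.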
### Terraform output
Terraform can output values that are useful for configuring Neutron/Octavia LBaaS or Cinder persistent volume provisioning as part of your Kubernetes deployment:
- `private_subnet_id`: the subnet where your instances are running; use it for `openstack_lbaas_subnet_id`
- `floating_network_id`: the network where the floating IPs are provisioned; use it for `openstack_lbaas_floating_network_id`
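Assuming the apply has completed, these values can be read back from the Terraform state with `terraform output` (run from the cluster's inventory directory; the output names are as listed above):

```ShellSession
$ terraform output private_subnet_id
$ terraform output floating_network_id
```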
## Ansible
### Node access
#### SSH
Ensure your local ssh-agent is running and your SSH key has been added. This
step is required by the Terraform provisioner:
@@ -272,11 +295,22 @@ $ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
#### Bastion host
If you are not using a bastion host, but not all of your nodes have floating IPs, create a file `inventory/$CLUSTER/group_vars/no-floating.yml` with the following content. Use one of your nodes with a floating IP (this should have been output at the end of the Terraform step) and the appropriate user for that OS, or if you have another jump host, use that.
```
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p -q USER@MASTER_IP"'
```
#### Test access
Make sure you can connect to the hosts. Note that Container Linux by CoreOS will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.
```
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
@@ -291,21 +325,17 @@ example-k8s-master-1 | SUCCESS => {
}
```
If it fails, try to connect manually via SSH. It could be something as simple as a stale host key.
### Configure cluster variables
Edit `inventory/$CLUSTER/group_vars/all.yml`:
- Set variable **bootstrap_os** appropriately for your desired image:
```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```
- **bin_dir**:
```
# Directory where the binaries will be installed
# Default:
@@ -313,20 +343,19 @@ bootstrap_os: coreos
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```
- and **cloud_provider**:
```
cloud_provider: openstack
```
Edit `inventory/$CLUSTER/group_vars/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
  - **flannel** works out-of-the-box
  - **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```
- Set variable **resolvconf_mode**
```
# Can be docker_dns, host_resolvconf or none
@@ -336,18 +365,19 @@ kube_network_plugin: flannel
resolvconf_mode: host_resolvconf
```
### Deploy Kubernetes
```
$ ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```
This will take some time as there are many tasks to run.
## Kubernetes
### Set up kubectl
1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation
2. Add a route to the internal IP of a master node (if needed):
```
sudo route add [master-internal-ip] gw [router-ip]
```
@@ -355,28 +385,28 @@ or
```
sudo route add -net [internal-subnet]/24 gw [router-ip]
```
3. List Kubernetes certificates & keys:
```
ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```
4. Get `admin`'s certificates and keys:
```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```
5. Configure kubectl:
```ShellSession
$ kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
    --certificate-authority=ca.pem
$ kubectl config set-credentials default-admin \
    --certificate-authority=ca.pem \
    --client-key=admin-key.pem \
    --client-certificate=admin.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
```
6. Check it:
```
@@ -393,14 +423,15 @@ You can tell kubectl to ignore this condition by adding the
## GlusterFS
GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.
Basically, you will install Gluster as follows:
```ShellSession
$ ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
## What's next
Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).


@@ -1 +0,0 @@
../../../inventory/group_vars


@@ -0,0 +1,45 @@
# your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs"
# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"
# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"
# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"
# 0|1 bastion nodes
number_of_bastions = 0
#flavor_bastion = "<UUID>"
# standalone etcds
number_of_etcd = 0
# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
# nodes
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 4
#flavor_k8s_node = "<UUID>"
# GlusterFS
# either 0 or more than one
#number_of_gfs_nodes_no_floating_ip = 0
#gfs_volume_size_in_gb = 150
# Container Linux does not support GlusterFS
#image_gfs = "<image name>"
# May be different from other nodes
#ssh_user_gfs = "ubuntu"
#flavor_gfs_node = "<UUID>"
# networking
network_name = "<network>"
external_net = "<UUID>"
floatingip_pool = "<pool>"


@@ -0,0 +1 @@
../../../../inventory/sample/group_vars