Fix markdown failures on contrib/terraform (#7082)
This fixes markdown failures on contrib/terraform.
This commit is contained in: parent bbab1013c5, commit dc86b2063a
4 changed files with 187 additions and 122 deletions
@ -67,7 +67,7 @@ markdownlint:
    - npm install -g markdownlint-cli@0.22.0
  script:
    # TODO: Remove "grep -v" part to enable markdownlint for all md files
    - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles) --ignore docs/_sidebar.md --ignore contrib/dind/README.md

ci-matrix:
  stage: unit-tests
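The file-selection part of that markdownlint command can be sanity-checked on its own; a sketch against a throwaway tree (the paths below are illustrative, not part of the repository):

```shell
# Recreate a miniature tree and list the .md files the find/grep
# pipeline would hand to markdownlint.
mkdir -p /tmp/mdlint-demo/docs /tmp/mdlint-demo/roles /tmp/mdlint-demo/.github
touch /tmp/mdlint-demo/README.md /tmp/mdlint-demo/docs/a.md \
      /tmp/mdlint-demo/roles/b.md /tmp/mdlint-demo/.github/c.md
cd /tmp/mdlint-demo
# Same filters as the CI job: drop .github and roles from the candidates.
find . -name "*.md" | grep -v .github | grep -v roles | sort
# prints:
# ./README.md
# ./docs/a.md
```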
@ -1,40 +1,44 @@
# Kubernetes on AWS with Terraform

## Overview

This project will create:

- VPC with Public and Private Subnets in # Availability Zones
- Bastion Hosts and NAT Gateways in the Public Subnet
- A dynamic number of masters, etcd, and worker nodes in the Private Subnet
  - evenly distributed over the # of Availability Zones
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet

## Requirements

- Terraform 0.12.0 or newer

## How to Use

- Export the variables for your AWS credentials or edit `credentials.tfvars`:

```commandline
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```

- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use Ubuntu 18.04 LTS (Bionic) as the base image. If you want to change this behaviour, see the note "Using other distrib than Ubuntu" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply`, depending on whether you exported your AWS credentials

Example:

```commandline
terraform apply -var-file=credentials.tfvars
```

- Terraform automatically creates an Ansible inventory file called `hosts` with the created infrastructure in the directory `inventory`

- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh through the bastion host, use the generated ssh-bastion.conf.
  Ansible automatically detects the bastion and changes ssh_args.

```commandline
ssh -F ./ssh-bastion.conf user@$ip
```
@ -42,9 +46,11 @@ ssh -F ./ssh-bastion.conf user@$ip
- Once the infrastructure is created, you can run the kubespray playbooks and supply inventory/hosts with the `-i` flag.

Example (this one assumes you are using Ubuntu):

```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```

***Using other distrib than Ubuntu***

If you want to use another distribution than Ubuntu 18.04 (Bionic) LTS, you can modify the search filters of the 'data "aws_ami" "distro"' in variables.tf.
@ -52,7 +58,7 @@ For example, to use:
- Debian Jessie, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true
@ -72,7 +78,7 @@ data "aws_ami" "distro" {
- Ubuntu 16.04, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true
@ -92,7 +98,7 @@ data "aws_ami" "distro" {
- Centos 7, replace 'data "aws_ami" "distro"' in variables.tf with

```ini
data "aws_ami" "distro" {
  most_recent = true
@ -114,7 +120,7 @@ data "aws_ami" "distro" {
You can use the following set of commands to get the kubeconfig file from your newly created cluster. Before running the commands, make sure you are in the project's root folder.

```commandline
# Get the controller's IP address.
CONTROLLER_HOST_NAME=$(cat ./inventory/hosts | grep "\[kube-master\]" -A 1 | tail -n 1)
CONTROLLER_IP=$(cat ./inventory/hosts | grep $CONTROLLER_HOST_NAME | grep ansible_host | cut -d'=' -f2)
@ -134,22 +140,23 @@ sed -i "s^server:.*^server: https://$LB_HOST:6443^" ~/.kube/config
kubectl get nodes
```
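The `grep`/`cut` pipeline above can be exercised against a tiny mock inventory; the file contents, host names, and IP below are purely illustrative, not output of the real scripts:

```shell
# Build a mock inventory in the format the grep/cut pipeline expects.
cat > /tmp/hosts.demo <<'EOF'
[all]
ip-10-250-1-10.ec2.internal ansible_host=10.250.1.10
ip-10-250-2-11.ec2.internal ansible_host=10.250.2.11

[kube-master]
ip-10-250-1-10.ec2.internal
EOF

# Same extraction as in the snippet above, pointed at the mock file.
CONTROLLER_HOST_NAME=$(cat /tmp/hosts.demo | grep "\[kube-master\]" -A 1 | tail -n 1)
CONTROLLER_IP=$(cat /tmp/hosts.demo | grep $CONTROLLER_HOST_NAME | grep ansible_host | cut -d'=' -f2)
echo "$CONTROLLER_IP"
# prints: 10.250.1.10
```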

## Troubleshooting

### Remaining AWS IAM Instance Profile

If the cluster was destroyed without using Terraform it is possible that
the AWS IAM Instance Profiles still remain. To delete them you can use
the `AWS CLI` with the following command:

```commandline
aws iam delete-instance-profile --region <region_name> --instance-profile-name <profile_name>
```

### Ansible Inventory doesn't get created

It could happen that Terraform doesn't create an Ansible inventory file automatically. If this is the case, copy the output after `inventory=`, create a file named `hosts` in the directory `inventory`, and paste the inventory into the file.

## Architecture

Pictured is an AWS infrastructure created with this Terraform project distributed over two Availability Zones.
@ -9,6 +9,7 @@ This will install a Kubernetes cluster on an OpenStack Cloud. It should work on
most modern installs of OpenStack that support the basic services.

### Known compatible public clouds

- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
@ -23,8 +24,8 @@ most modern installs of OpenStack that support the basic services.
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)

## Approach

The terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
@ -32,6 +33,7 @@ file to generate a dynamic inventory that is consumed by the main ansible script
to actually install kubernetes and stand up the cluster.

### Networking

The configuration includes creating a private subnet with a router to the
external net. It will allocate floating IPs from a pool and assign them to the
hosts where that makes sense. You have the option of creating bastion hosts
@ -39,19 +41,23 @@ inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without.

#### Using an existing router

It is possible to use an existing router instead of creating one. To use an
existing router, set the `router_id` variable to the UUID of the router you wish
to use.

For example:

```ShellSession
router_id = "00c542e7-6f46-4535-ae95-984c7f0391a3"
```

### Kubernetes Nodes

You can create many different kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.

- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
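A hedged sketch of how these counts might be set in your tfvars file (the values are illustrative; consult [variables.tf](variables.tf) for the authoritative variable names):

```ini
number_of_k8s_masters = 3
number_of_k8s_masters_no_etcd = 0
number_of_etcd = 0
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 0
```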
@ -64,10 +70,12 @@ master nodes with etcd replicas. As an example, if you have three master nodes w
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.

### GlusterFS shared file system

The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:

- the number of Gluster hosts (minimum 2)
- the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- other properties related to provisioning the hosts
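As a sketch, the corresponding tfvars entries could look like the following (the variable names are assumed from Kubespray's sample inventory, not confirmed by this excerpt):

```ini
number_of_gfs_nodes_no_floating_ip = 2
gfs_volume_size_in_gb = 50
```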
@ -87,7 +95,9 @@ binaries available on hyperkube v1.4.3_coreos.0 or higher.
- you have a pair of keys generated that can be used to secure the new hosts

## Module Architecture

The configuration is divided into three modules:

- Network
- IPs
- Compute
@ -100,12 +110,13 @@ to be updated.
You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as follows:

```ini
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```

## Terraform

Terraform will be used to provision all of the OpenStack resources with base software as appropriate.

### Configuration
@ -115,10 +126,10 @@ Terraform will be used to provision all of the OpenStack resources with base sof
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):

```ShellSession
cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/openstack/hosts
ln -s ../../contrib
```

This will be the base for subsequent Terraform commands.
@ -138,13 +149,13 @@ please read the [OpenStack provider documentation](https://www.terraform.io/docs
The recommended authentication method is to describe credentials in a YAML file `clouds.yaml` that can be stored in:

- the current directory
- `~/.config/openstack`
- `/etc/openstack`

`clouds.yaml`:

```yaml
clouds:
  mycloud:
    auth:
@ -162,7 +173,7 @@ clouds:
If you have multiple clouds defined in your `clouds.yaml` file you can choose
the one you want to use with the environment variable `OS_CLOUD`:

```ShellSession
export OS_CLOUD=mycloud
```
@ -174,7 +185,7 @@ from Horizon under *Project* -> *Compute* -> *Access & Security* -> *API Access*
With identity v2:

```ShellSession
source openrc

env | grep OS
@ -191,7 +202,7 @@ OS_IDENTITY_API_VERSION=2
With identity v3:

```ShellSession
source openrc

env | grep OS
@ -208,24 +219,24 @@ OS_IDENTITY_API_VERSION=3
OS_USER_DOMAIN_NAME=Default
```

Terraform does not support a mix of DomainName and DomainID, choose one or the other:

- provider.openstack: You must provide exactly one of DomainID or DomainName to authenticate by Username

```ShellSession
unset OS_USER_DOMAIN_NAME
export OS_USER_DOMAIN_ID=default
```

or

```ShellSession
unset OS_PROJECT_DOMAIN_ID
export OS_PROJECT_DOMAIN_NAME=Default
```

#### Cluster variables

The construction of the cluster is driven by values found in
[variables.tf](variables.tf).
@ -269,13 +280,15 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`k8s_nodes` | Map containing worker node definition, see explanation below |

##### k8s_nodes

Allows a custom definition of worker nodes, giving the operator full control over individual node flavor and
availability zone placement. To enable this mode, set the `number_of_k8s_nodes` and
`number_of_k8s_nodes_no_floating_ip` variables to 0. Then define your desired worker node configuration
using the `k8s_nodes` variable.

For example:

```ini
k8s_nodes = {
  "1" = {
    "az" = "sto1"
@ -296,14 +309,16 @@ k8s_nodes = {
```

Would result in the same configuration as:

```ini
number_of_k8s_nodes = 3
flavor_k8s_node = "83d8b44a-26a0-4f02-a981-079446926445"
az_list = ["sto1", "sto2", "sto3"]
```

And:

```ini
k8s_nodes = {
  "ing-1" = {
    "az" = "sto1"
@ -357,7 +372,8 @@ Would result in three nodes in each availability zone each with their own separa
flavor and floating ip configuration.

The "schema":

```ini
k8s_nodes = {
  "key | node name suffix, must be unique" = {
    "az" = string
@ -366,6 +382,7 @@ k8s_nodes = {
  },
}
```

All values are required.

#### Terraform state files
@ -374,10 +391,10 @@ In the cluster's inventory folder, the following files might be created (either
or manually), to prevent you from pushing them accidentally they are in a
`.gitignore` file in the `terraform/openstack` directory:

- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`

You can still add them manually if you want to.
@ -387,17 +404,19 @@ Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:

```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/openstack
```

This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.

### Provisioning cluster

You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):

```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
```

If you chose to create a bastion host, this script will create
@ -408,18 +427,20 @@ or move that file. If you want to use this, just leave it there, as ansible will
pick it up automatically.

### Destroying cluster

You can destroy your new cluster with the following command issued from the cluster's inventory directory:

```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
```

If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

- remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
- clean up any temporary cache files: `rm /tmp/$CLUSTER-*`

### Debugging

You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the Terraform command.
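As a minimal sketch (the two variables are the ones named above):

```shell
# Turn on verbose OpenStack and Terraform logging for subsequent commands.
export OS_DEBUG=1
export TF_LOG=DEBUG
echo "$OS_DEBUG $TF_LOG"
# prints: 1 DEBUG
```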
@ -427,8 +448,8 @@ You can enable debugging output from Terraform by setting
Terraform can output values that are useful for configuring Neutron/Octavia LBaaS or Cinder persistent volume provisioning as part of your Kubernetes deployment:

- `private_subnet_id`: the subnet where your instances are running; used for `openstack_lbaas_subnet_id`
- `floating_network_id`: the network_id where the floating IPs are provisioned; used for `openstack_lbaas_floating_network_id`

## Ansible
@ -439,9 +460,9 @@ Terraform can output values that are useful for configure Neutron/Octavia LBaaS
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:

```ShellSession
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
```

If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
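One way to clear a single stale entry is `ssh-keygen -R`; a sketch against a throwaway file (the IP and key below are dummies, and the real target would be `~/.ssh/known_hosts`):

```shell
# Throwaway known_hosts with one entry for a node that no longer exists.
printf '10.0.10.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\n' \
  > /tmp/known_hosts.demo
# Remove every key stored for that host (a .old backup is written).
ssh-keygen -R 10.0.10.5 -f /tmp/known_hosts.demo
# The stale entry is gone.
grep -c '10.0.10.5' /tmp/known_hosts.demo || true
```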
@ -453,13 +474,13 @@ generated`.tfstate` file to generate a dynamic inventory recognizes
some variables within a "metadata" block, defined in a "resource"
block (example):

```ini
resource "openstack_compute_instance_v2" "example" {
  ...
  metadata {
    ssh_user = "ubuntu"
    prefer_ipv6 = true
    python_bin = "/usr/bin/python3"
  }
  ...
}
@ -474,8 +495,8 @@ instance should be preferred over IPv4.
Bastion access will be determined by:

- Your choice on the number of bastion hosts (set by the `number_of_bastions` terraform variable).
- The existence of nodes/masters with floating IPs (set by the `number_of_k8s_masters`, `number_of_k8s_nodes`, `number_of_k8s_masters_no_etcd` terraform variables).

If you have a bastion host, your ssh traffic will be routed directly through it. This is regardless of whether you have masters/nodes with a floating IP assigned.
If you don't have a bastion host, but at least one of your masters/nodes has a floating IP, then ssh traffic will be tunneled through one of these machines.
@ -486,7 +507,7 @@ So, either a bastion host, or at least master/node with a floating IP are requir
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.

```ShellSession
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
@ -507,44 +528,55 @@ If it fails try to connect manually via SSH. It could be something as simple as
|
||||||
### Configure cluster variables
|
### Configure cluster variables
|
||||||
|
|
||||||
Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
|
Edit `inventory/$CLUSTER/group_vars/all/all.yml`:
|
||||||
|
|
||||||
- **bin_dir**:

  ```yml
  # Directory where the binaries will be installed
  # Default:
  # bin_dir: /usr/local/bin
  # For Flatcar Container Linux by Kinvolk:
  bin_dir: /opt/bin
  ```
- and **cloud_provider**:

  ```yml
  cloud_provider: openstack
  ```
Edit `inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml`:
- Set variable **kube_network_plugin** to your desired networking plugin.
  - **flannel** works out-of-the-box
  - **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets

  ```yml
  # Choose network plugin (calico, weave or flannel)
  # Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
  kube_network_plugin: flannel
  ```
- Set variable **resolvconf_mode**

  ```yml
  # Can be docker_dns, host_resolvconf or none
  # Default:
  # resolvconf_mode: docker_dns
  # For Flatcar Container Linux by Kinvolk:
  resolvconf_mode: host_resolvconf
  ```
- Set the maximum number of attached Cinder volumes per host (default 256)

  ```yml
  node_volume_attach_limit: 26
  ```
### Deploy Kubernetes

```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```

This will take some time as there are many tasks to run.
## Kubernetes

### Set up kubectl

1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation
2. Add a route to the internal IP of a master node (if needed):
   ```ShellSession
   sudo route add [master-internal-ip] gw [router-ip]
   ```

   or

   ```ShellSession
   sudo route add -net [internal-subnet]/24 gw [router-ip]
   ```
3. List Kubernetes certificates & keys:

   ```ShellSession
   ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
   ```
4. Get `admin`'s certificates and keys:

   ```ShellSession
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1-key.pem > admin-key.pem
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1.pem > admin.pem
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
   ```
5. Configure kubectl:

   ```ShellSession
   $ kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
       --certificate-authority=ca.pem
   $ kubectl config set-credentials default-admin \
       --certificate-authority=ca.pem \
       --client-key=admin-key.pem \
       --client-certificate=admin.pem
   $ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
   $ kubectl config use-context default-system
   ```
6. Check it:

   ```ShellSession
   kubectl version
   ```
## GlusterFS

GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.

Basically you will install Gluster as

```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
## What's next

Try out your new Kubernetes cluster with the Hello Kubernetes service.
## Appendix

### Migration from `number_of_k8s_nodes*` to `k8s_nodes`

If you currently have a cluster defined using the `number_of_k8s_nodes*` variables and wish
to migrate to the `k8s_nodes` style you can do it like so:
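For reference, the target `k8s_nodes` structure is a map of per-node definitions. A sketch of what such a definition might look like (the attribute names are assumed from the OpenStack provider variables; the availability zone and flavor values are placeholders):

```hcl
k8s_nodes = {
  "1" = {
    "az"          = "nova"
    "flavor"      = "<flavor-id>"
    "floating_ip" = true
  }
  "2" = {
    "az"          = "nova"
    "flavor"      = "<flavor-id>"
    "floating_ip" = false
  }
}
```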
# Kubernetes on Packet with Terraform

Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on Packet.
|
This will install a Kubernetes cluster on Packet bare metal. It should work in all locations and on most server types.
|
||||||
|
|
||||||
## Approach

The Terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your Packet project.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by [cluster.yml](../../../cluster.yml)
to actually install Kubernetes with Kubespray.
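The core idea behind that dynamic inventory can be sketched in a few lines of Python. This is a toy version only: the real `terraform.py` supports many providers and attributes, and the `packet_device` state layout used here is simplified and hypothetical.

```python
import json

# Toy Terraform state with a single Packet device (simplified layout)
TFSTATE = """
{
  "resources": [
    {
      "type": "packet_device",
      "instances": [
        {
          "attributes": {
            "hostname": "example-k8s-master-1",
            "access_public_ipv4": "198.51.100.10"
          }
        }
      ]
    }
  ]
}
"""

def hosts_from_tfstate(tfstate):
    """Map device hostnames to their public IPs from a parsed .tfstate dict."""
    hosts = {}
    for resource in tfstate.get("resources", []):
        if resource.get("type") != "packet_device":
            continue
        for instance in resource.get("instances", []):
            attrs = instance.get("attributes", {})
            hosts[attrs["hostname"]] = attrs.get("access_public_ipv4")
    return hosts

state = json.loads(TFSTATE)
print(hosts_from_tfstate(state))  # -> {'example-k8s-master-1': '198.51.100.10'}
```

The real script additionally groups hosts (masters, etcd, nodes) and emits the JSON shape Ansible expects from a dynamic inventory.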
### Kubernetes Nodes

You can create many different Kubernetes topologies by setting the number of
different classes of hosts.

- Master nodes with etcd: `number_of_k8s_masters` variable
- Master nodes without etcd: `number_of_k8s_masters_no_etcd` variable
- Standalone etcd hosts: `number_of_etcd` variable
Generate an SSH key pair if you don't already have one:

```ShellSession
ssh-keygen -f ~/.ssh/id_rsa
```
## Terraform

Terraform will be used to provision all of the Packet resources with base software as appropriate.

### Configuration
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):

```ShellSession
cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/packet/hosts
```

This will be the base for subsequent Terraform commands.
Your Packet API key must be available in the `PACKET_AUTH_TOKEN` environment variable.
This key is typically stored outside of the code repo since it is considered secret.
If someone gets this key, they can start up or shut down hosts in your project!
For more information on how to generate an API key or find your project ID, please see
[API Integrations](https://support.packet.com/kb/articles/api-integrations).

The Packet Project ID associated with the key will be set later in `cluster.tfvars`.

For more information about the API, please see [Packet API](https://www.packet.com/developers/api/).
Example:

```ShellSession
export PACKET_AUTH_TOKEN="Example-API-Token"
```

Note that to deploy several clusters within the same project you need to use [terraform workspace](https://www.terraform.io/docs/state/workspaces.html#using-workspaces).
#### Cluster variables

The construction of the cluster is driven by values found in
[variables.tf](variables.tf).

This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:

- cluster_name = the name of the inventory directory created above as $CLUSTER
- packet_project_id = the Packet Project ID associated with the Packet API token above
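Put together, a minimal `cluster.tfvars` might start like this (both values are placeholders to replace with your own):

```hcl
# inventory/$CLUSTER/cluster.tfvars (illustrative values)
cluster_name      = "alpha"
packet_project_id = "<your-packet-project-id>"
```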
#### Enable localhost access

Kubespray will pull down a Kubernetes configuration file to access this cluster by enabling
`kubeconfig_localhost: true` in the Kubespray configuration.
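Assuming the sample inventory layout, that toggle lives in the cluster's group_vars (a sketch of the single relevant line, not the full file):

```yml
# inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml
kubeconfig_localhost: true
```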
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally they are in a
`.gitignore` file in the `terraform/packet` directory:
- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`

You can still add them manually if you want to.
Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/packet
```

This should finish fairly quickly, telling you that Terraform has successfully initialized and loaded the necessary modules.
### Provisioning cluster

You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):

```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
### Destroying cluster

You can destroy your new cluster with the following command issued from the cluster's inventory directory:

```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
```

If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

- Remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
- Clean up any temporary cache files: `rm /tmp/$CLUSTER-*`
### Debugging

You can enable debugging output from Terraform by setting `TF_LOG` to `DEBUG` before running the Terraform command.
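For example, for a one-off debug run you can set the variable just for that command:

```ShellSession
TF_LOG=DEBUG terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
```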
## Ansible
Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the terraform provisioner:
```ShellSession
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
```
If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
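One way to do that per host is `ssh-keygen`'s built-in removal option (the bracketed placeholder stands for each destroyed host's address, as elsewhere in this document):

```ShellSession
ssh-keygen -R [node-ip-or-hostname]
```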
Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, so long as the hosts are not `UNREACHABLE`.

```ShellSession
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
If it fails, try to connect manually via SSH.
### Deploy Kubernetes

```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```

This will take some time as there are many tasks to run.
### Set up kubectl

- [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the localhost.
- Verify that kubectl runs correctly:

  ```ShellSession
  kubectl version
  ```

- Verify that the Kubernetes configuration file has been copied over:

  ```ShellSession
  cat inventory/alpha/$CLUSTER/admin.conf
  ```

- Verify that all the nodes are running correctly:

  ```ShellSession
  kubectl version
  kubectl --kubeconfig=inventory/$CLUSTER/artifacts/admin.conf get nodes
  ```