# Kubernetes on Exoscale with Terraform

Provision a Kubernetes cluster on Exoscale using Terraform and Kubespray

## Overview

The setup looks like the following:

```text
                           Kubernetes cluster
                        +-----------------------+
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
| API server LB +---------> | |              |  |
|               |       |   | | Master/etcd  |  |
+---------------+       |   | | node(s)      |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        |           ^           |
                        |           |           |
                        |           v           |
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
|  Ingress LB   +---------> | |              |  |
|               |       |   | |    Worker    |  |
+---------------+       |   | |    node(s)   |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        +-----------------------+
```

## Requirements

- Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include `version` and remove all `versions.tf` files)
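
For Terraform 0.12, the version constraint moves into the provider block itself; a minimal sketch (the constraint shown is an assumption, copy the actual one from `versions.tf` before deleting that file):

```hcl
# Terraform 0.12 style: the provider version is pinned inline instead
# of in versions.tf. The constraint below is illustrative only.
provider "exoscale" {
  version = "~> 0.16"
}
```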

## Quickstart

NOTE: The following assumes you are at the root of the kubespray repo.

Copy the sample inventory for your cluster and the default Terraform variables:

```bash
CLUSTER=my-exoscale-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/exoscale/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```

Edit `default.tfvars` to match your setup. You MUST, at the very least, change `ssh_public_keys`.

```bash
# Ensure $EDITOR points to your favorite editor, e.g., vim, emacs, VS Code, etc.
$EDITOR default.tfvars
```
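
For example, the required values might be filled in like this (the zone and key below are placeholders, not working values):

```hcl
# default.tfvars (excerpt) -- illustrative values only
zone = "ch-gva-2"

ssh_public_keys = [
  "ssh-ed25519 AAAA... user@example.com",
]
```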

For authentication you can use the credentials file `~/.cloudstack.ini` or `./cloudstack.ini`. The file should look something like this:

```ini
[cloudstack]
key = <API key>
secret = <API secret>
```

Follow the Exoscale IAM Quick-start to learn how to generate API keys.

### Encrypted credentials

To have the credentials encrypted at rest, you can use sops and only decrypt the credentials at runtime.

```bash
cat << EOF > cloudstack.ini
[cloudstack]
key =
secret =
EOF
sops --encrypt --in-place --pgp <PGP key fingerprint> cloudstack.ini
sops cloudstack.ini
```

Run Terraform to create the infrastructure:

```bash
terraform init ../../contrib/terraform/exoscale
terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale
```

If your cloudstack credentials file is encrypted with sops, run the following instead:

```bash
terraform init ../../contrib/terraform/exoscale
sops exec-file -no-fifo cloudstack.ini 'CLOUDSTACK_CONFIG={} terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale'
```

You should now have an inventory file named `inventory.ini` that you can use with kubespray to set up a cluster. Run `terraform output` to see the IP addresses of the nodes, as well as the control-plane and data-plane load balancers.

It is a good idea to check that you have basic SSH connectivity to the nodes:

```bash
ansible -i inventory.ini -m ping all
```

Example of using this with the default sample inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```

## Teardown

Since the Kubernetes cluster cannot create any load balancers or disks itself, teardown is as simple as running `terraform destroy`:

```bash
terraform destroy -var-file default.tfvars ../../contrib/terraform/exoscale
```

## Variables

### Required

- `ssh_public_keys`: List of public SSH keys to install on all machines
- `zone`: The zone in which to run the cluster
- `machines`: Machines to provision. The key of each entry is used as the name of the machine
  - `node_type`: The role of this node (`master`|`worker`)
  - `size`: The instance size to use
  - `boot_disk`: The boot disk to use
    - `image_name`: Name of the image
    - `root_partition_size`: Size (in GB) of the root partition
    - `ceph_partition_size`: Size (in GB) of the partition for Rook to use as Ceph storage (set to 0 to disable)
    - `node_local_partition_size`: Size (in GB) of the partition for node-local storage (set to 0 to disable)
- `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
- `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
- `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the Kubernetes nodes on ports 30000-32767 (Kubernetes NodePorts)
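
As an illustration, a minimal `machines` map with one master and one worker might look like the sketch below (the machine names, size, and image name are assumptions; adapt them to your account and compare with `default.tfvars`):

```hcl
# Illustrative machines map -- two nodes, no extra storage partitions
machines = {
  "master-0" = {
    node_type = "master"
    size      = "Medium"
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit"
      root_partition_size       = 50
      ceph_partition_size       = 0   # 0 disables the Ceph partition
      node_local_partition_size = 0   # 0 disables node-local storage
    }
  }
  "worker-0" = {
    node_type = "worker"
    size      = "Large"
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit"
      root_partition_size       = 50
      ceph_partition_size       = 0
      node_local_partition_size = 0
    }
  }
}
```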

### Optional

- `prefix`: Prefix to use for all resources; must be unique across all clusters in the same project (defaults to `default`)

An example variables file can be found in `default.tfvars`.

## Known limitations

### Only single disk

Since Exoscale does not support attaching additional disks to an instance, this module can instead create partitions on the boot disk for Rook and node-local storage.

### No Kubernetes API

The current solution doesn't use the Exoscale Kubernetes cloud controller. This means we need to set up an HTTP(S) load balancer in front of all workers and run the ingress controller as a DaemonSet.
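
In kubespray this typically means enabling the bundled ingress-nginx addon in the cluster group vars; a sketch (variable names as in kubespray's sample `addons.yml` — verify them against your kubespray version):

```yaml
# inventory/$CLUSTER/group_vars/k8s_cluster/addons.yml (excerpt)
ingress_nginx_enabled: true
# Bind the ingress controller to the host network on every worker,
# so the HTTP(S) load balancer can target the nodes directly.
ingress_nginx_host_network: true
```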