# Kubernetes on Hetzner with Terraform

Provision a Kubernetes cluster on Hetzner using Terraform and Kubespray.
## Overview
The setup looks like the following:
```
   Kubernetes cluster
+--------------------------+
|   +--------------+       |
|   | +--------------+     |
|-->| |              |     |
|   | | Master/etcd  |     |
|   | | node(s)      |     |
|   +-+              |     |
|     +--------------+     |
|            ^             |
|            |             |
|            v             |
|   +--------------+       |
|   | +--------------+     |
|-->| |              |     |
|   | |    Worker    |     |
|   | |   node(s)    |     |
|   +-+              |     |
|     +--------------+     |
+--------------------------+
```
The nodes use a private network for node-to-node communication and a public interface for all external communication.
## Requirements
- Terraform 0.14.0 or newer
## Quickstart
NOTE: Assumes you are at the root of the kubespray repo.
Export your Hetzner Cloud API token as an environment variable so Terraform can authenticate:

```bash
export HCLOUD_TOKEN=api-token
```
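Before provisioning anything, you can sanity-check the token by querying the Hetzner Cloud API directly; the `locations` endpoint is a cheap read-only call:

```bash
# Returns a JSON list of Hetzner locations if the token is valid;
# an invalid token yields an "unauthorized" error instead.
curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
  https://api.hetzner.cloud/v1/locations
```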
Copy the cluster configuration files:

```bash
CLUSTER=my-hetzner-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/hetzner/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```
Edit `default.tfvars` to match your requirements.
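As an illustration, a minimal `default.tfvars` might look like the following. The variable names come from the Variables section below; all values are placeholders to adapt (the size, image, and zone names must be valid Hetzner Cloud offerings):

```hcl
prefix = "my-hetzner-cluster"

zone         = "hel1"
network_zone = "eu-central"

ssh_public_keys = [
  "ssh-ed25519 AAAA... admin@example.com",
]

machines = {
  "master-0" = {
    node_type = "master"
    size      = "cx21"
    image     = "ubuntu-20.04"
  }
  "worker-0" = {
    node_type = "worker"
    size      = "cx21"
    image     = "ubuntu-20.04"
  }
}

# 0.0.0.0/0 allows access from anywhere; restrict these in production
ssh_whitelist        = ["0.0.0.0/0"]
api_server_whitelist = ["0.0.0.0/0"]
nodeport_whitelist   = ["0.0.0.0/0"]
ingress_whitelist    = ["0.0.0.0/0"]
```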
Run Terraform to create the infrastructure:

```bash
terraform -chdir=../../contrib/terraform/hetzner init
terraform -chdir=../../contrib/terraform/hetzner apply --var-file=$PWD/default.tfvars
```
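If you want to review the changes before they are made, `terraform plan` accepts the same arguments as `apply`:

```bash
# Dry run: shows which Hetzner resources would be created, without creating them
terraform -chdir=../../contrib/terraform/hetzner plan --var-file=$PWD/default.tfvars
```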
You should now have an inventory file named `inventory.ini` that you can use with Kubespray.
It is a good idea to check that you have basic SSH connectivity to the nodes before proceeding:

```bash
ansible -i inventory.ini -m ping all
```
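If the ping fails, try connecting to a node manually to rule out key or firewall problems. The default user on Hetzner's stock images is typically `root`, and the IP below is a placeholder; use an address from `inventory.ini`:

```bash
# Exits silently on success; a failure points at SSH keys or the ssh_whitelist
ssh root@192.0.2.10 true
```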
You can then set up Kubernetes with Kubespray using the generated inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```
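Once the playbook finishes, one way to verify the cluster from your workstation is to copy the admin kubeconfig from a master node. Kubespray, via kubeadm, places it at `/etc/kubernetes/admin.conf`; the IP below is again a placeholder:

```bash
# Fetch the admin kubeconfig from the first master. You may need to edit the
# server address in the copied file to point at the master's public IP.
ssh root@192.0.2.10 'sudo cat /etc/kubernetes/admin.conf' > admin.conf
kubectl --kubeconfig admin.conf get nodes
```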
## Cloud controller

For better integration with Hetzner Cloud you can install the [hcloud cloud controller manager](https://github.com/hetznercloud/hcloud-cloud-controller-manager) and the [CSI driver](https://github.com/hetznercloud/csi-driver). Please read the instructions in both repositories on how to install them.
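As a rough sketch of what that installation looks like (based on the upstream README at the time of writing; defer to the repositories for the current steps), the cloud controller manager needs the API token in a secret plus a single manifest:

```bash
# Make the Hetzner API token available to the cloud controller manager
kubectl -n kube-system create secret generic hcloud \
  --from-literal=token=$HCLOUD_TOKEN

# Deploy the latest released manifest of the cloud controller manager
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```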
## Teardown

You can tear down your infrastructure using the following Terraform command:

```bash
terraform -chdir=../../contrib/terraform/hetzner destroy --var-file=$PWD/default.tfvars
```
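If you also want to discard the generated cluster configuration afterwards (assuming the `inventory/$CLUSTER` layout from the quickstart), remove the inventory directory:

```bash
# Run from the repo root; deletes the copied inventory and generated files
rm -rf inventory/$CLUSTER
```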
## Variables

* `prefix`: Prefix to add to all resources; if set to `""`, no prefix is added
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `network_zone`: The network zone where the cluster is running
* `machines`: Machines to provision. The key of this object will be used as the name of the machine
  * `node_type`: The role of this node (`master|worker`)
  * `size`: Size of the VM
  * `image`: The image to use for the VM
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the Kubernetes nodes on ports 30000-32767 (Kubernetes NodePorts)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the Kubernetes workers on ports 80 and 443
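For example, a small highly available cluster could define several entries in `machines` (an illustrative sketch; the names, sizes, and images are placeholders):

```hcl
machines = {
  "master-0" = { node_type = "master", size = "cx31", image = "ubuntu-20.04" }
  "master-1" = { node_type = "master", size = "cx31", image = "ubuntu-20.04" }
  "master-2" = { node_type = "master", size = "cx31", image = "ubuntu-20.04" }
  "worker-0" = { node_type = "worker", size = "cx41", image = "ubuntu-20.04" }
  "worker-1" = { node_type = "worker", size = "cx41", image = "ubuntu-20.04" }
}
```

An odd number of master/etcd nodes is preferable here, since etcd needs a majority quorum to tolerate node failures.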