* `machines`: Machines to provision. Key of this object will be used as the name of the machine
  * `node_type`: The role of this node *(master|worker)*
  * `size`: The size to use
  * `boot_disk`: The boot disk to use
    * `image_name`: Name of the image
    * `root_partition_size`: Size *(in GB)* for the root partition
    * `ceph_partition_size`: Size *(in GB)* for the partition for Rook to use as Ceph storage *(set to 0 to disable)*
    * `node_local_partition_size`: Size *(in GB)* for the partition for node-local storage *(set to 0 to disable)*
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the Kubernetes nodes on ports 30000-32767 (Kubernetes NodePorts)
### Optional
* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(defaults to `default`)*
An example variables file can be found in `default.tfvars`.
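For illustration, a minimal `default.tfvars`-style sketch is shown below. The machine names, sizes, image name, and CIDR ranges are placeholder assumptions, not values from this repository; check `default.tfvars` for working values.

```hcl
prefix = "demo"

machines = {
  "master-0" = {
    node_type = "master"
    size      = "Medium" # assumed Exoscale instance size name
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit" # assumed image
      root_partition_size       = 50
      ceph_partition_size       = 0 # 0 disables the Ceph partition
      node_local_partition_size = 0 # 0 disables the node-local-storage partition
    }
  }
  "worker-0" = {
    node_type = "worker"
    size      = "Large"
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit"
      root_partition_size       = 50
      ceph_partition_size       = 100
      node_local_partition_size = 0
    }
  }
}

ssh_whitelist        = ["1.2.3.4/32"]
api_server_whitelist = ["1.2.3.4/32"]
nodeport_whitelist   = ["1.2.3.4/32"]
```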
## Known limitations
### Only single disk
Since Exoscale doesn't support mounting additional disks onto an instance, this script can instead create dedicated partitions on the boot disk for [Rook](https://rook.io/) and [node-local-storage](https://kubernetes.io/docs/concepts/storage/volumes/#local).
### No Kubernetes API
The current solution doesn't use the [Exoscale Kubernetes cloud controller](https://github.com/exoscale/exoscale-cloud-controller-manager).
This means that we need to set up an HTTP(S) load balancer in front of all workers and run the Ingress controller in DaemonSet mode.
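As a sketch of that workaround, assuming the community `ingress-nginx` Helm chart is used (the chart and its value names are an assumption, not part of this repository; verify them against your chart version), the controller can be switched to DaemonSet mode so every worker serves traffic for the external load balancer:

```yaml
# values.yaml for the assumed ingress-nginx Helm chart
controller:
  kind: DaemonSet    # run one ingress pod per node instead of a Deployment
  hostNetwork: true  # bind on the node itself so the external HTTP(S) LB can target the workers
  service:
    enabled: false   # no cloud LoadBalancer Service; the external LB fronts the workers directly
```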