* contrib/terraform/exoscale: Rework SSH public keys
Exoscale has a few limitations with `exoscale_ssh_keypair` resources.
Creating several clusters with these scripts may lead to an error like:
```
Error: API error ParamError 431 (InvalidParameterValueException 4350): The key pair "lj-sc-ssh-key" already has this fingerprint
```
This patch reworks the handling of SSH public keys: instead of
`exoscale_ssh_keypair` resources, we rely on the more cloud-agnostic way of
configuring SSH public keys via `cloud-init`.
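A minimal sketch of the cloud-init approach, assuming a `ssh_public_keys` list variable (introduced by a commit further down); the instance arguments are illustrative, not the verbatim patch:
```
resource "exoscale_compute" "node" {
  display_name = "node-0"
  zone         = "ch-gva-2"
  template     = "Linux Ubuntu 20.04 LTS 64-bit"
  size         = "Medium"
  disk_size    = 50

  # Instead of an exoscale_ssh_keypair resource, the keys are delivered
  # via cloud-init, so several clusters can reuse the same key material
  # without tripping over duplicate fingerprints.
  user_data = <<-EOF
    #cloud-config
    ssh_authorized_keys:
    ${join("\n", [for key in var.ssh_public_keys : "  - ${key}"])}
  EOF
}
```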
* contrib/terraform/exoscale: terraform fmt
* contrib/terraform/exoscale: Add terraform validate
* contrib/terraform/exoscale: Inline public SSH keys
The Terraform scripts need to install some SSH key, so that Kubespray
(i.e., the "Ansible part") can take over. Initially, we pointed the
Terraform scripts to `~/.ssh/id_rsa.pub`. This proved to be suboptimal:
operators sharing responsibility for a cluster typically have different
default keys, so running Terraform as a different operator risks
unnecessarily replacing resources.
Therefore, it's best to inline the public SSH keys. The chosen variable,
`ssh_public_keys`, provides some uniformity with `contrib/azurerm`.
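A sketch of the resulting shape; the variable name matches the commit, the key values are placeholders:
```
variable "ssh_public_keys" {
  description = "List of public SSH keys to install on all machines"
  type        = list(string)
}
```
Each operator's key is then listed explicitly in the tfvars file, e.g.:
```
ssh_public_keys = [
  "ssh-ed25519 AAAAC3Nz... operator-one",
  "ssh-rsa AAAAB3Nz... operator-two",
]
```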
* Fix Terraform Exoscale test
* Fix Terraform 0.14 test
* [terraform/aws] Fix Terraform >=0.13 warnings
Terraform >=0.13 gives the following warning:
```
Warning: Interpolation-only expressions are deprecated
```
The fix was tested as follows:
```
rm -rf .terraform && terraform0.12.26 init && terraform0.12.26 validate
rm -rf .terraform && terraform0.13.5 init && terraform0.13.5 validate
rm -rf .terraform && terraform0.14.3 init && terraform0.14.3 validate
```
which produced neither errors nor warnings.
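The warning concerns expressions that consist solely of an interpolation; the fix is dropping the quotes-and-`${}` wrapper. An illustrative before/after (resource and attribute names are hypothetical):
```
resource "aws_instance" "node" {
  ami           = var.ami_id
  instance_type = "t3.medium"

  # Terraform <=0.11 style, deprecated since 0.12:
  #   subnet_id = "${aws_subnet.nodes.id}"
  # First-class expression accepted by 0.12/0.13/0.14:
  subnet_id = aws_subnet.nodes.id
}
```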
* [terraform/openstack] Fixes for Terraform >=0.13
Terraform >=0.13 gives the following error:
```
Error: Failed to install providers
Could not find required providers, but found possible alternatives:
hashicorp/openstack -> terraform-provider-openstack/openstack
```
This patch fixes the error (see the sketch below).
This fix was tested as follows:
```
rm -rf .terraform && terraform0.12.26 init && terraform0.12.26 validate
rm -rf .terraform && terraform0.13.5 init && terraform0.13.5 validate
rm -rf .terraform && terraform0.14.3 init && terraform0.14.3 validate
```
which produced neither errors nor warnings for Terraform 0.13.5 and
0.14.3. Terraform 0.12.x still emits a harmless warning, but with 0.14.3
out the door, we need to move on.
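The usual fix is pinning the provider source explicitly, along these lines:
```
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```
The `source` attribute is only fully understood from Terraform 0.13 on, which presumably accounts for the harmless 0.12.x warning mentioned above.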
* [terraform/packet] Fixes for Terraform >=0.13
This fix was tested as follows:
```
export PACKET_AUTH_TOKEN=blah-blah
rm -rf .terraform && terraform0.12.26 init && terraform0.12.26 validate
rm -rf .terraform && terraform0.13.5 init && terraform0.13.5 validate
rm -rf .terraform && terraform0.14.3 init && terraform0.14.3 validate
```
Errors are gone, but warnings still remain. It is impossible to please
all three versions of Terraform.
* Add tests for Terraform >=0.13
* Add note about changing the private IP in admin.conf.
When running Kubespray, a load balancer is created, and its address should be used instead of the controller node's IP.
* Procedure to find load balancer and update admin.conf
When running Kubespray, a load balancer is used instead of the controller's private IP.
I kept seeing `TLS handshake error from 10.250.250.158:63770: EOF` from two IP addresses that correlate to my ELB. Changing the health check from TCP to HTTPS stopped these errors.
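Concretely (hostnames are illustrative), the `server:` line in `admin.conf` changes from the controller's private IP to the load balancer's address:
```
# before: controller's private IP
server: https://10.250.40.2:6443
# after: load balancer endpoint
server: https://kube-apiserver-elb.example.com:6443
```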
* It was documented as if it were an Ansible variable, but it is a Terraform variable.
This also means the colon syntax was incorrect. TF variables are assigned with an equals sign.
Co-authored-by: rptaylor <rptaylor@uvic.ca>
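An illustrative contrast (the variable name is hypothetical):
```
# Ansible (YAML) syntax uses a colon:
#   example_setting: "value"
# Terraform assignment, e.g. in cluster.tfvars, uses an equals sign:
example_setting = "value"
```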
* add support for nova servergroups
* Add documentation for openstack nova servergroups
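A rough sketch of how a server group is declared and attached (names and flavors are illustrative):
```
resource "openstack_compute_servergroup_v2" "k8s_node" {
  name     = "k8s-node-srvgrp"
  policies = ["anti-affinity"]
}

resource "openstack_compute_instance_v2" "node" {
  name        = "node-0"
  flavor_name = "m1.medium"
  image_name  = "ubuntu-20.04"

  scheduler_hints {
    group = openstack_compute_servergroup_v2.k8s_node.id
  }
}
```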
* update to TF 0.12.12 format and fix etcd
* revert for_each change
* fix variables and formatting in main.tf
* try to avoid errors
* update variable
* Update main.tf
* Update main.tf
* update all other instance resources
* Update parsing of terraform state file for 0.12.12
* A resource does not seem to have a `module` element, but instead has a
`provider` element
* Return the boolean as-is if it is already a bool, since a bool does not
have a `lower` method
* Remove the setting of ansible_ssh_user to root for all Packet
Not all servers in Packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra var or in an `ansible.cfg` file.
* Default to root user for packet devices except on CoreOS
* Update TF_VERSION for packet in tf-validate-packet
Update TF_VERSION to 0.12.12 for the gitlab-ci tf-validate-packet tests
* convert packet terraform files to the tfstate v4 format
* initialize terraform before copying the variable file to the top level dir
* update openstack to terraform 0.12(.5)
* replace cluster.tf with cluster.tfvars
* update README.md to terraform 0.12
* update Openstack CI tests to use terraform 0.12
* specify terraform version in openstack README
* GitLab CI: copy cluster.tfvars when the OpenStack provider is used
* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by Terraform 0.12) and convert it internally
to v3 (as generated by Terraform 0.11.x).
Additionally, the script has been updated to Python 3.
* Add k8s_allowed_remote_ips variable
Useful for defining the CIDRs allowed to initiate an SSH connection when
you don't want to use a bastion.
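A sketch of the variable and how it could feed SSH ingress rules (the security group reference is illustrative):
```
variable "k8s_allowed_remote_ips" {
  description = "CIDRs allowed to initiate an SSH connection"
  type        = list(string)
  default     = []
}

# One SSH ingress rule per allowed CIDR.
resource "openstack_networking_secgroup_rule_v2" "ssh_allowed" {
  count             = length(var.k8s_allowed_remote_ips)
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = var.k8s_allowed_remote_ips[count.index]
  security_group_id = openstack_networking_secgroup_v2.k8s.id
}
```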
* Add TF_VAR_k8s_allowed_remote_ips variable to tf-apply-ovh
* Run terraform fmt
* Add terraform fmt to .terraform-validate CI step
* Add tf-validate-aws CI step
* Revert "Add tf-validate-aws CI step"
This reverts commit e007225fac.
* Add support for Packet with Terraform
Co-authored-by: johnstudarus <john@jhlconsulting.com>
* removed advanced features to streamline
* clarifying usage
* Update README.md
provide a better test to validate things are working OK
* Update README.md
clarifying what to set
* minor wordsmithing
* Fix admin cert path
* clarifying how to configure keys
* enabling kubeconfig_localhost
pull the configuration file over via playbooks rather than copying the key files individually
* Create output.tf
* Add support for node specific plans
* [contrib/terraform/openstack] Add worker_allowed_ports
Allow the user to define, in the Terraform template, which ports and remote
IPs are allowed to access worker nodes. This is useful when you don't want
to open up the whole NodePort range to the outside world, or need to open
ports outside the NodePort range.
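In current Terraform syntax, the variable would be a list of rule objects, roughly like this (the default shown, the whole NodePort range, is illustrative):
```
variable "worker_allowed_ports" {
  type = list(object({
    protocol         = string
    port_range_min   = number
    port_range_max   = number
    remote_ip_prefix = string
  }))
  default = [
    {
      protocol         = "tcp"
      port_range_min   = 30000
      port_range_max   = 32767
      remote_ip_prefix = "0.0.0.0/0"
    },
  ]
}
```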
* Add an 'access_ip' for openstack resources to the terraform inventory builder script
* Update Openstack README
* Only use IPv4
* If there's a floating IP assigned to an OpenStack instance, use it as the access_ip
* Replace `openstack_compute_secgroup_v2` with `openstack_networking_secgroup_v2`
The `openstack_networking_secgroup_v2` resource allows specification of
both ingress and egress rules; Nova security groups define ingress rules
only. This change also allows security rules to be specified in a more
user-friendly way, as the two security group resources have different HCL
syntax.
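A sketch of the replacement pattern (group names and ports are illustrative):
```
# Neutron security groups split the group from its rules and support an
# explicit direction, unlike Nova security groups.
resource "openstack_networking_secgroup_v2" "worker" {
  name        = "k8s-worker"
  description = "Security group for Kubernetes worker nodes"
}

resource "openstack_networking_secgroup_rule_v2" "worker_http" {
  direction         = "ingress"   # "egress" is equally expressible here
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.worker.id
}
```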
* Attempting to clarify the language surrounding the etcd node deployment script failure mechanism. I hit this error when deploying a new cluster last night and, though it should have been, the cause wasn't immediately apparent to me (since my default master node hostnames do not specify whether they are also acting as etcd replicas).
* Add supplementary node groups
Adds additional Ansible groups to the k8s nodes, such as
`kube-ingress` for running ingress controller pods. Empty by default.
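A sketch, assuming the groups are passed to the dynamic inventory through instance metadata (the metadata key and wiring are assumptions, shown in current Terraform syntax):
```
variable "supplementary_node_groups" {
  description = "Extra Ansible groups for k8s nodes, e.g. \"kube-ingress\""
  type        = string
  default     = ""
}

resource "openstack_compute_instance_v2" "k8s_node" {
  name        = "node-0"
  flavor_name = "m1.medium"
  image_name  = "ubuntu-20.04"

  metadata = {
    # Read by the dynamic inventory to place the host in extra groups.
    kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
  }
}
```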