* Update parsing of terraform state file for 0.12.12
* Resource does not seem to have a module element but instead has a provider element
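A rough sketch of what reading the 0.12-era state (format version 4) can look like; the field names follow the published state layout, but the actual handling in Kubespray's terraform.py may differ:
```python
import json

def iter_resources(state_path):
    """Yield resources from a Terraform 0.12 (state version 4) file.

    Version 4 keeps a flat "resources" list; each entry carries a
    "provider" string (only module-scoped resources get a "module" key),
    so grouping has to key off "provider" instead of "module".
    """
    with open(state_path) as f:
        state = json.load(f)

    for resource in state.get("resources", []):
        provider = resource.get("provider", "")
        for instance in resource.get("instances", []):
            yield provider, resource["type"], resource["name"], instance["attributes"]

# Example usage: list every packet_device found in the state.
for provider, rtype, name, attrs in iter_resources("terraform.tfstate"):
    if rtype == "packet_device":
        print(provider, name, attrs.get("hostname"))
```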
* Return the boolean right away if it is already a bool, since a bool does
not have a lower method
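For example, the guard could be as simple as the sketch below (the helper name `parse_bool` is illustrative, not necessarily what the inventory script uses):
```python
def parse_bool(value):
    """Return booleans unchanged; only strings have .lower().

    Terraform 0.12 state files store real JSON booleans, so calling
    value.lower() on them would raise AttributeError.
    """
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("true", "1", "yes")
```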
* Remove the setting of ansible_ssh_user to root for all Packet devices
Not all servers in Packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra_var or in an ansible.cfg file.
* Default to root user for packet devices except on CoreOS
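A sketch of how that per-device default might be derived; the `operating_system` attribute name and the substring match are assumptions rather than the exact logic in the inventory script:
```python
def default_ssh_user(device_attrs):
    """Pick the remote user for a Packet device.

    CoreOS images only ship the `core` user, so fall back to it there;
    every other operating system keeps the root default.
    """
    operating_system = device_attrs.get("operating_system", "")
    if "coreos" in operating_system.lower():
        return "core"
    return "root"
```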
* Update TF_VERSION for packet in tf-validate-packet
Update TF_VERSION to 0.12.12 for gitlab-ci tf-validate-packet tests
* convert packet terraform files to TV_VERSION 4
* initialize terraform before copying the variable file to the top level dir
Kubespray Pull Request #5084 (https://github.com/kubernetes-sigs/kubespray/pull/5084) caused more problems than it solved due to limitations with the synchronize module. See comments on Kubespray Issues #5059 (https://github.com/kubernetes-sigs/kubespray/issues/5059) and #5116 (https://github.com/kubernetes-sigs/kubespray/issues/5116). Details from Ansible documentation: "Currently, synchronize is limited to elevating permissions via passwordless sudo. This is because rsync itself is connecting to the remote machine and rsync doesn’t give us a way to pass sudo credentials in. ... Currently there are only a few connection types which support synchronize (ssh, paramiko, local, and docker) because a sync strategy has been determined for those connection types. Note that the connection for these must not need a password as rsync itself is making the connection and rsync does not provide us a way to pass a password to the connection. ..." Thus, reverting Pull Request #5084.
* Add support for Kubernetes 1.16.1
* Defaults to 1.16.1
* add 1.16.2 checksums and set new version as default
* correct 1.16.2 checksums and add 1.15.5 checksums
Since skipping upgrades is unsupported, I've detailed the steps for upgrading one step at a time and removed some language that indicated skipping should work
When using cluster.yml or scale.yml to add/scale nodes in the existing
k8s cluster, the `kubeadm init` wouldn't run. As a result, kube-proxy
wouldn't be created, and therefore the kube-proxy deletion task would
fail, e.g. in the case where kube-router is used and "kube_proxy_remove"
is set to true. As a workaround, add ignore_errors to the kube-proxy
deletion task.
It's unnecessary and breaks when running from within a docker container:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/sbin/udevadm info --query property --name /dev/mapper/vg00-root", "msg": "Timer expired after 10 seconds", "rc": 257}
```