Because the Azure README does not mention installing the requirements, the
following error can occur:
"The ipaddr filter requires python's netaddr be installed on the
ansible controller"
It is helpful to add the installation step to the README for Azure users.
apply-rg.sh targeted Azure CLI version 1 (the "azure" command), which is
obsolete; version 2 (the "az" command) is the officially supported CLI today.
apply-rg_2.sh targeted version 2. In addition, the README[1] says we need to
run apply-rg.sh to apply the templates.
This renames apply-rg_2.sh to apply-rg.sh so the common entry point uses
CLI version 2.
[1]: https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/azurerm#generating-and-applying
ansible-playbook needs to log in to the Azure virtual machines over SSH with
a key pair, so users must set ssh_public_keys to their own SSH public key;
changing ssh_public_keys is mandatory.
This updates contrib/azurerm/README.md to explain that.
In addition, the path of all.yml was wrong; that is also corrected here.
apply-rg_2.sh uses the 'az group deployment' command, but that command is
deprecated, as the following warning shows:
"This command is implicitly deprecated because command group
'group deployment' is deprecated and will be removed in a future release.
Use 'deployment group' instead."
This updates the deprecated commands.
FYI: the command has been deprecated on the azure-cli side since [1].
[1]: 991cb7cc7c (diff-2057bbb8441166e4910b34b09d22b58cR222)
Before this commit, the bastion entry in the inventory was not honored,
so machines behind firewalls or with unrouted addresses were not
reachable by Ansible.
* add support for nova servergroups
* Add documentation for openstack nova servergroups
* update to TF 0.12.12 format and fix etcd
* revert for_each change
* fix variables and formatting in main.tf
* try to avoid errors
* update variable
* Update main.tf
* Update main.tf
* update all other instance resources
* Update parsing of terraform state file for 0.12.12
* A resource does not seem to have a module element but instead has a
provider element
* Return the boolean the right way if it is already a bool, since a bool
does not have a lower method (see the sketch after this list)
* Remove the setting of ansible_ssh_user to root for all Packet devices
Not all servers in Packet are accessed as root by default. CoreOS
systems use the `core` user. Removing this allows the user to specify
the remote user with an extra var or in an ansible.cfg file.
* Default to root user for packet devices except on CoreOS
* Update TF_VERSION for packet in tf-validate-packet
Update TF_VERSION to 0.12.12 for the gitlab-ci tf-validate-packet tests
* convert packet terraform files to the Terraform 0.12 format (tfstate v4)
* initialize terraform before copying the variable file to the top level dir
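
To illustrate the boolean handling mentioned above (a value parsed from the
state file may already be a bool in the 0.12 format, while older state files
carry "true"/"false" strings), here is a minimal Python sketch; the helper
name `parse_bool` is illustrative and not the exact code in the inventory
script.

```python
def parse_bool(value):
    """Normalize a value that may already be a bool (newer state files)
    or a "true"/"false" string (older state files)."""
    if isinstance(value, bool):
        # A bool has no .lower() method, so return it as-is.
        return value
    return str(value).lower() in ("true", "1", "yes")

assert parse_bool(True) is True
assert parse_bool("False") is False
```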
Cleaned up deprecated APIs:
apps/v1beta1
apps/v1beta2
extensions/v1beta1 for DaemonSets, Deployments and ReplicaSets
Add a workaround for deploying helm using the incompatible
deployment manifest.
Change-Id: I78b36741348f47a999df3841ee63cf4e6f377830
* update openstack to terraform 0.12(.5)
* replace cluster.tf with cluster.tfvars
* update README.md to terraform 0.12
* update Openstack CI tests to use terraform 0.12
* specify terraform version in openstack README
* gitlab CI to copy cluster.tfvars in case of openstack provider
* The terraform/openstack dynamic inventory can read
tfstate v4 (generated by terraform 0.12) and convert it internally
to v3 (as generated by terraform 0.11.x); a simplified sketch follows below.
Additionally, the script has been updated to Python 3.
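
As a rough illustration of that conversion, the sketch below flattens the
tfstate v4 `resources` list into the v3 `modules`/`resources` layout. It is
heavily simplified (nested modules, counts and attribute post-processing are
glossed over) and is not the exact code in the dynamic inventory script.

```python
import json

def tfstate_v4_to_v3(state):
    """Simplified sketch: convert a terraform 0.12 (v4) state dict into
    the v3 shape (terraform 0.11.x) the rest of the script expects."""
    if state.get("version") != 4:
        return state  # already v3, nothing to do

    resources = {}
    for res in state.get("resources", []):
        for idx, inst in enumerate(res.get("instances", [])):
            key = "{}.{}.{}".format(res["type"], res["name"], idx)
            resources[key] = {
                "type": res["type"],
                "provider": res.get("provider", ""),
                "primary": {"attributes": inst.get("attributes", {})},
            }
    return {"version": 3,
            "modules": [{"path": ["root"], "resources": resources}]}

# Usage (the path is illustrative):
with open("terraform.tfstate") as fh:
    tfstate = tfstate_v4_to_v3(json.load(fh))
```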
* LVM package removal during teardown is now skipped by default
* LVM utilities execution PATH fixed for CentOS/RHEL
* Heketi updated to the latest version 9
Signed-off-by: Vitaliy Dmitriev <vi7alya@gmail.com>
* Add k8s_allowed_remote_ips variable
Useful for defining the CIDRs allowed to initiate an SSH connection when
you don't want to use a bastion.
* Add TF_VAR_k8s_allowed_remote_ips variable to tf-apply-ovh
* Run terraform fmt
* Add terraform fmt to .terraform-validate CI step
* Add tf-validate-aws CI step
* Revert "Add tf-validate-aws CI step"
This reverts commit e007225fac.
* Lint everything in the repository with yamllint
* yamllint fixes: syntax fixes only
* yamllint fixes: move comments to play names
* yamllint fixes: indent comments in .gitlab-ci.yml file
* add the ability to specify IP ranges to the inventory.py script (see the
range2ip sketch after this list)
* add a test for the range2ip function of the inventory.py script, plus
some fixes
* add negative test for range2ip function for inventory.py script
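
A minimal sketch of the range expansion idea, assuming ranges are written as
`start-end`; the actual range2ip implementation in inventory.py may handle
more cases (hostnames, error reporting) differently.

```python
from ipaddress import ip_address

def range2ip(hosts):
    """Sketch: expand "a.b.c.d-a.b.c.e" entries into individual
    addresses and pass everything else (plain IPs, hostnames) through."""
    expanded = []
    for host in hosts:
        if "-" in host:
            try:
                start, end = (ip_address(h) for h in host.split("-"))
            except ValueError:
                expanded.append(host)  # not an IP range, e.g. "node-1"
                continue
            current = start
            while current <= end:
                expanded.append(str(current))
                current += 1
        else:
            expanded.append(host)
    return expanded

print(range2ip(["10.90.0.2-10.90.0.4", "10.90.0.10", "node-1"]))
# ['10.90.0.2', '10.90.0.3', '10.90.0.4', '10.90.0.10', 'node-1']
```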
* Add support for Packet with Terraform
Co-Author: johnstudarus <john@jhlconsulting.com>
* removed advanced features to streamline
* clarifying usage
* Update README.md
provide a better test to validate things are working OK
* Update README.md
clarifying what to set
* minor wordsmithing
* Fix admin cert path
* clarifying how to configure keys
* enabling kubeconfig_localhost
pull the configuration file over via playbooks rather than the key files individually
* Create output.tf
* Add support for node specific plans
* Remove non-kubeadm deployment
* More cleanup
* More cleanup
* More cleanup
* More cleanup
* Fix gitlab
* Try stopping gce instances before setting them absent to make the delete process work
* More cleanup
* Fix bug with checking if kubeadm has already run
* Fix bug with checking if kubeadm has already run
* More fixes
* Fix test
* fix
* Fix gitlab checkout until kubespray 2.8 is on quay
* Fixed
* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path
* Re-add secret checking
* Run the gitlab checks for the v2.7.0 to 2.8.0 test upgrade path
* fix typo
* Fix CI jobs to kubeadm again. Fix broken hyperkube path
* Fix gitlab
* Fix rotate tokens
* More fixes
* More fixes
* Fix tokens
The `kubespray-aws-inventory.py` script always sets a node_labels key
on each ansible_host. When the AWS instance has no labels set, the value
ends up as an empty string, and the task `Write kubelet config file
(kubeadm or non-kubeadm)` fails with:
`AnsibleUndefinedVariable: 'unicode object' has no attribute 'items'`.
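
The failure boils down to the kubelet template calling `.items()` on
node_labels when the inventory handed it an empty string instead of a dict.
A hypothetical sketch of the guard follows; the tag name and helper are
illustrative, not the script's exact code.

```python
def build_host_vars(instance_tags):
    """Hypothetical helper: only set node_labels when the tag actually
    carries labels, so templates can safely call node_labels.items()."""
    host_vars = {}
    labels = instance_tags.get("kubespray-node-labels", "")  # tag name is illustrative
    if labels:
        host_vars["node_labels"] = dict(
            kv.split("=", 1) for kv in labels.split(",")
        )
    return host_vars

print(build_host_vars({"kubespray-node-labels": "role=worker,zone=eu-west-1a"}))
# {'node_labels': {'role': 'worker', 'zone': 'eu-west-1a'}}
print(build_host_vars({}))
# {} -> no node_labels key at all, instead of an empty string
```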
* [contrib/terraform/openstack] Add worker_allowed_ports
Allow the user to define, in the Terraform template, which ports and remote
IPs are allowed to access the worker nodes. This is useful when you
don't want to open up the whole NodePort range to the outside world, or
need to expose ports outside the NodePort range.
* Add an 'access_ip' for openstack resources to the terraform inventory builder script
* Update Openstack README
* Only use ipv4
* If there's a floating IP assigned to an openstack instance, use that for access_ip
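
A small sketch of the selection order described above; the attribute names
follow the openstack compute instance resource, but the helper itself is
illustrative rather than the inventory builder's exact code.

```python
def pick_access_ip(attrs):
    """Prefer an attached floating IP; otherwise fall back to the
    instance's fixed IPv4 address (only IPv4 is considered)."""
    if attrs.get("floating_ip"):
        return attrs["floating_ip"]
    return attrs.get("access_ip_v4", "")

print(pick_access_ip({"access_ip_v4": "10.0.0.5",
                      "floating_ip": "203.0.113.7"}))   # 203.0.113.7
print(pick_access_ip({"access_ip_v4": "10.0.0.5"}))     # 10.0.0.5
```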
* failed
* version_compare
* succeeded
* skipped
* success
* version_compare becomes version since ansible 2.5
* ansible minimal version updated in doc and spec
* last version_compare
* [jjo] add DIND support to contrib/
- add contrib/dind with an ansible playbook to
create "node" containers and set them up to mimic
host nodes as closely as possible (using Ubuntu images),
see contrib/dind/README.md
- editing the nodes' /etc/hosts via `blockinfile` and
`lineinfile` needs `unsafe_writes: yes` because /etc/hosts
is bind-mounted by docker, and thus can't be handled atomically
(modify a copy + rename)
* dind-host role: set node container hostname on creation
* add "Resulting deployment" section with some CLI outputs
* typo
* selectable node_distro: debian, ubuntu
* some fixes for node_distro: ubuntu
* cpu optimization: add early `pkill -STOP agetty`
* typo
* add centos dind support ;)
* add kubespray-dind.yaml, support fedora
- add kubespray-dind.yaml (former custom.yaml at README.md)
- rework README.md as per above
- use some YAML power to share distros' commonality
- add fedora support
* create unique /etc/machine-id and other updates
- create unique /etc/machine-id in each docker node,
used as seed for e.g. weave mac addresses
- with the above, netchecker now passes 100% WoHooOO!
🎉🎉🎉
- updated README.md output (from 1.12.1, netcheck verified)
* minor typos
* fix centos node creation: it needs earlier udevadm removal to avoid flaky facts; also verified netcheck OK \o/
* add Q&D test-distros.sh, back to manual /etc/machine-id hack
* run-test-distros.sh cosmetics and minor fixes
* run-test-distros.sh: $rc fix and minor formatting changes
* run-test-distros.sh output cosmetics
* Replace `openstack_compute_secgroup_v2` with `openstack_networking_secgroup_v2`
The `openstack_networking_secgroup_v2` resource allows specifying
both ingress and egress rules, whereas Nova security groups define ingress
rules only. This change also allows security rules to be specified in a
more user-friendly way, as the two security group resources have different
HCL syntax.