* OCI subnet AD 2 is not required for CCM >= 0.7.0
Reorganize the OCI provider to generate its configuration, rather than pulling it
Add pull secret option to OCI cloud provider
* Updated the OCI example to document the new parameters
The inventory/mycluster directory gets created when someone follows
the instructions in README.md, but it should never be committed to
the kubespray repo. Ignore it.
* Add the ability to specify IP ranges to the inventory.py script (see the sketch after this list)
* Add a test for the range2ip function in the inventory.py script
* Miscellaneous fixes
* Add a negative test for the range2ip function in the inventory.py script
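For illustration, a range such as 10.10.1.3-10.10.1.5 passed to inventory.py is treated like listing each address individually. The sketch below shows roughly the kind of inventory that produces; the node names and exact layout are assumed, not the script's verbatim output.

```yaml
# Illustrative only: approximately what
#   python3 contrib/inventory_builder/inventory.py 10.10.1.3-10.10.1.5
# would build; names and structure are assumed, not verbatim output.
all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
```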
Run only commands that apply to the currently deployed cluster (for example,
only get calico info and skip weave/flannel when deploying calico).
Add helm release info if helm is deployed.
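As a rough sketch of the idea (not the exact tasks in the role), the info-gathering commands can be gated on what is actually deployed; kube_network_plugin and helm_enabled are existing kubespray variables:

```yaml
# Hedged sketch: gate each info command on the deployed components.
- name: Collect calico status only when calico is the network plugin
  command: calicoctl node status
  when: kube_network_plugin == 'calico'
  changed_when: false
  failed_when: false

- name: Collect helm release info only when helm is deployed
  command: helm list
  when: helm_enabled | default(false)
  changed_when: false
  failed_when: false
```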
This PR ensures that the e2fsprogs and xfsprogs packages are
installed on all Kubernetes nodes and kept at their latest
versions. It also ensures that the nodes can create XFS
filesystems when necessary, since not all distros install
xfsprogs by default.
e2fsprogs - ext2/ext3/ext4 file system utilities
xfsprogs - Utilities for managing the XFS filesystem
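A minimal sketch of such a task, assuming the generic Ansible package module works on all supported distros (the actual role layout and task names may differ):

```yaml
- name: Ensure filesystem utilities are installed and up to date
  package:
    name:
      - e2fsprogs
      - xfsprogs
    state: latest
```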
This fixes an issue where, if a hosts.ini file was present in the
inventory directory, Vagrant would set an incorrect path as
ansible.inventory_path.
* Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet)
* Documentation for calico_pool_cidr (and for calico_advertise_cluster_ips, which had been left undocumented until now)
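A hedged example of setting the new variable in cluster group_vars; the file path is illustrative, and when calico_pool_cidr is left unset the pool still defaults to kube_pods_subnet:

```yaml
# group_vars/k8s-cluster/k8s-net-calico.yml (path shown for illustration)
calico_pool_cidr: 172.20.0.0/16
calico_advertise_cluster_ips: true
```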
* Add support for Packet with Terraform
Co-Author: johnstudarus <john@jhlconsulting.com>
* Removed advanced features to keep things streamlined
* clarifying usage
* Update README.md
provide a better test to validate that things are working
* Update README.md
clarifying what to set
* minor wordsmithing
* Fix admin cert path
* clarifying how to configure keys
* enabling kubeconfig_localhost
pull the kubeconfig file over via the playbooks rather than copying the key files individually
* Create output.tf
* Add support for node specific plans
* Set cluster DNS correctly in case of nodelocal dns cache
* Pass in cluster_ip based on the DNS mode (see the sketch after this list)
* Disable nodelocaldns by default
* Fix syntax error
* Fix syntax issue
* Add nodelocaldns IP to the vars of the node installation
* Change location of nodelocaldns_ip
* Try to remove newlines from jinja template
* Add debug for config file
* Move parameter logic outside of template
* Adapt templates after feedback
* Remove debugging
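The gist of the change above, as a hedged sketch rather than the final template: the DNS address handed to kubelet is selected from the DNS mode, so nodes point at the nodelocaldns address only when the cache is enabled. enable_nodelocaldns, nodelocaldns_ip and skydns_server are existing kubespray variables; the variable receiving the result is named here only for illustration.

```yaml
# Illustrative variable name; the real name in the role may differ.
kubelet_cluster_dns: "{{ nodelocaldns_ip if enable_nodelocaldns else skydns_server }}"
```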
Addressing the discussion started in #4064, this PR moves the kubeadm and
hyperkube binaries to /usr/local/bin before running them on the master
nodes.
This addresses the case where local_release_dir points to /tmp
(the kubespray default) and /tmp is mounted with the noexec option,
preventing any binaries from being run in that partition.
In the "node" role, we still move kubeadm to bin_dir only on the worker
nodes.
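A minimal sketch of the move, assuming the standard bin_dir and local_release_dir variables and that kubeadm sits directly in local_release_dir; the real role also handles hyperkube and restricts this to the master nodes:

```yaml
- name: Copy kubeadm out of the (possibly noexec) download dir before invoking it
  copy:
    src: "{{ local_release_dir }}/kubeadm"
    dest: "{{ bin_dir }}/kubeadm"
    mode: "0755"
    remote_src: true
```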
I know this is a bit of a hack.
If you use a cloud LB, you can use kubeadm's controlPlaneEndpoint to configure kube-proxy's server field.
But nginx-proxy is not yet started at the time kubeadm init runs.
* Fix random failure in debug: var=result.content|from_json
* netchecker agents are deployed on all k8s-cluster group members
* reducing limits/requests is not enough, switching to n1-standard-2
* gce_centos7 needs more CPU
Looks like `epel_enabled` was not taken into account for the EPEL install in `bootstrap-centos.yml`. Also, there were no conditionals that would trigger the bootstrap for RHEL.
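A hedged sketch of the intended guard in bootstrap-centos.yml; the task name is illustrative, and the RHEL part of the fix lives in the conditionals that decide when this file is included at all:

```yaml
- name: Install EPEL release package
  package:
    name: epel-release
    state: present
  when: epel_enabled
```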