Add support for the following properties in azure-credential-check.yml:
- azure_loadbalancer_sku: Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
- azure_exclude_master_from_standard_lb: excludes master nodes from standard load balancer.
- azure_disable_outbound_snat: disables the outbound SNAT for public load balancer rules
- useInstanceMetadata: Use instance metadata service where possible
- azure_primary_availability_set: (Optional) The name of the availability set that should be used as the load balancer backend
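A minimal group_vars sketch of these properties (values are illustrative, not defaults):

```yaml
# illustrative values only — adjust to your Azure environment
azure_loadbalancer_sku: standard            # "basic" or "standard"
azure_exclude_master_from_standard_lb: true # only meaningful with the standard SKU
azure_disable_outbound_snat: false          # only affects public LB rules
useInstanceMetadata: true                   # use the instance metadata service where possible
azure_primary_availability_set: ""          # optional: availability set used as LB backend
```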
We don't need to support upgrades from 2-year-old installs,
just from the last major version.
Also changed most retried tasks to 1s delay instead of longer.
* Add README to bootstrap-os role
* Rework bootstrap-os once more
* Document workarounds for bugs/deficiencies in Ansible modules
* Unify and document role variables
* Remove installation of additional packages and repositories
* Merge Ubuntu and Debian tasks
* Remove pipelining setting from default playbooks
* Fix OpenSUSE not running its required tasks
The docker service provided by the containers-basic bundle is masked
in the ClearLinux distribution, which causes errors in subsequent
steps. This commit ensures that the unit is not masked.
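A sketch of the unmasking task (the task name and the `when` condition are illustrative):

```yaml
- name: Ensure docker is not masked (ClearLinux ships it masked in containers-basic)
  systemd:
    name: docker
    masked: no
  when: ansible_os_family == "ClearLinux"
```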
* Use K8s 1.14 and add kubeadm experimental control plane mode
This reverts commit d39c273d96.
* Cleanup kubeadm setup run on first master
* pin kubeadm_certificate_key in test
* Remove kubelet autolabel of kube-node, add symlink for pki dir
Change-Id: Id5e74dd667c60675dbfe4193b0bc9fb44380e1ca
The BINDIR variable defined in runc's Makefile[1] sets the
installation path to $(PREFIX)/sbin, which is used by most
Linux distributions. This change fixes the absolute path used for
non-ClearLinux distributions (CentOS, Ubuntu).
[1] https://github.com/opencontainers/runc/blob/master/Makefile#L10
The Stateless ClearLinux feature[1] requires creating folders
under /etc. This change ensures the existence of the
/etc/bash_completion.d/ folder on the ClearLinux distribution.
[1] https://clearlinux.org/features/stateless
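A sketch of the corresponding task (the `when` condition is illustrative):

```yaml
- name: Ensure /etc/bash_completion.d exists (stateless /etc starts empty)
  file:
    path: /etc/bash_completion.d
    state: directory
    mode: "0755"
  when: ansible_os_family == "ClearLinux"
```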
* Enable nodelocaldns by default
* nodelocaldns is now default
* Disable enable_nodelocaldns for the addons CI jobs
Disable enable_nodelocaldns for the addons CI jobs to make sure things still work without nodelocaldns
* Add ansible-lint as gitlab-ci step
* Fix jinja2 syntax in include_tasks that breaks ansible-lint
* Use a block scalar to get around gitlab quoting/escaping rules
* Run ansible-lint in verbose mode in CI
This will fix the error: `error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context`
Signed-off-by: Abdulaziz AlMalki <almalki.a@gmail.com>
* Vagrantfile: Bump openSUSE to Leap 15.0
* roles: container-engine: Add 'containerd' package for openSUSE
The 'containerd' package contains the docker-containerd and
docker-containerd-shim binaries. We also need to ensure that the latest
version is installed, since an older version may already be present
(e.g. in GCE images).
* Remove docker log-opts for opensuse
* roles: bootstrap-os: Use lowercase 'o' for openSUSE
OpenSUSE is not a valid family name; the correct one is openSUSE.
* roles: bootstrap-os: Update zypper cache before first installation
The zypper cache may be outdated so ensure that it's fully updated
before we try and install the bootstrap packages.
Both the `yum` and `apt` modules support a list as input, which lets us avoid the slower `with_items` approach that can take a long time with a large number of cluster nodes.
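For illustration, both forms (the `required_pkgs` list is hypothetical):

```yaml
# slower: one yum transaction per item
- name: Install packages one by one
  yum:
    name: "{{ item }}"
    state: present
  with_items: "{{ required_pkgs }}"

# faster: a single transaction with the whole list
- name: Install packages in one transaction
  yum:
    name: "{{ required_pkgs }}"
    state: present
```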
Both kubedns and dnsmasq modes have been unmaintained for a long time.
We should run dns_late steps at the end because sshd performs DNS
lookups during the Ansible run, with a 2s timeout for each failed
lookup while trying to reach coredns before it is ready.
- values from the inventory live in roles/kubespray-defaults/defaults/main.yml
- hardcoded values live in roles/container-engine/defaults/main.yml
- dns_servers is set empty in roles/container-engine/defaults/main.yml and skydns_server is not set in the docker_dns_servers variables
- also sets a default value for manual_dns_server
- other variables in roles/container-engine/defaults no longer need to be set
* feat(external-provisioner/local-path-provisioner): adds support for local path provisioner
Helpful for local development, but also for production workloads (once
the permission model is worked out) where redundancy is built into the
software that uses the PVCs (e.g. a database cluster with synchronous
replication)
* feat(local-path-provisioner): adds debug flag, image tag group var
* fix(local-path-provisioner): moves image repo/tag to download role
* test(gce_centos7-flannel): enables local-path-provisioner in test case
* fix(addons): add image repo/tag to commented default values
* fix(local-path-provisioner): typo in jinja template for local path provisioner
* style(local-path-provisioner): debug flag condition re-formatted
* fix(local-path-provisioner): adds missing default value for debug flag
* fix(local-path-provisioner): syntax fix for debug if condition end
* fix(local-path-provisioner): jinja template syntax: if condition white space
This was already approved in #4106 but there are CI issues
with that PR due to references to kubernetes incubator.
After upgrading to Kubespray 2.8.1 with Kubeadm enabled, Rook
Ceph volume provisioning failed because the flexvolume plugin dir was
not correct. Adding the var fixed the issue.
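A hedged sketch of the override in group_vars (the path is an example; point it at the dir your kubelet actually passes via --volume-plugin-dir):

```yaml
# example path only — must match the kubelet's flexvolume plugin dir
kubelet_flexvolumes_plugins_dir: /var/lib/kubelet/volume-plugins
```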
* Adding ability to maintain existing Encryption Secrets at Rest.
If secrets_encryption.yaml is present it will not be overwritten with a new kube_encrypt_token.
This should allow it to be set ahead of a playbook run, or maintained if cluster.yml is run on the same cluster and the ansible host does not have access to the secrets.
* Setting the existing kube_encrypt_token across all master nodes in case it was missing on one or more nodes.
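For context, a sketch of what secrets_encryption.yaml typically contains (the exact template may differ):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key
              secret: "{{ kube_encrypt_token | b64encode }}"
      - identity: {}
```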
This fixes an issue where the `nodename` in calico's CNI config JSON can fall out of sync with the k8s node name used by the calico pod when `kube_override_hostname` is set.
Currently, the task `container_download | download images for kubeadm config images` fetches the etcd image even though it's not required (etcd is bootstrapped by kubespray, not kubeadm).
`kubeadm-images.yaml` is only a subset of `kubeadm-config.yaml`, therefore `kubeadm config images pull` will try to pull everything in that list (including etcd):
```
# kubeadm config images list --config /etc/kubernetes/kubeadm-images.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
```
When using `kubeadm-config.yaml`, though, the etcd image is not listed:
```
# kubeadm config images list --config /etc/kubernetes/kubeadm-config.yaml
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/coredns:1.2.6
```
This change just adds the etcd endpoints to `kubeadm-images.yaml` to hint to kubeadm that it doesn't need the etcd image for its bootstrapping, as etcd is "external".
I confess it is an ugly hack; a better way would be to use a single `kubeadm-config.yaml` for both tasks, but they are triggered by different roles (`kubeadm-images.yaml` is used by download, `kubeadm-config.yaml` by kubernetes/master) at different steps, and I didn't want to refactor too many things and risk breakage.
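A hedged excerpt of what the added stanza looks like (the endpoint value is illustrative; it only serves to mark etcd as external):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.2
etcd:
  external:
    endpoints:
      - https://127.0.0.1:2379   # placeholder — presence of 'external' is what matters
```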
This is especially useful for offline installation, where a whitelist of container images is mirrored on a local private container registry. `k8s.gcr.io/etcd` and `quay.io/coreos/etcd` are two different repositories hosting the same images but using *different tags*!
* coreos/etcd:v3.2.24
* k8s.gcr.io/etcd:3.2.24 (note the missing 'v' in the tag name)
Fix issue where `kubeadm join` could wait forever for joining.
Fix issue where `kubeadm join` output was not reaching the user,
making it impossible to find the cause of the failure.
The new behaviour is to first attempt to join without bypassing the
verification checks and to display their output if needed.
If this fails, it still attempts to join while ignoring the checks, in
order to preserve the previous behavior.
A timeout of 60 seconds is allocated for the join.
Related-bug: #3973
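A minimal sketch of the resulting two-step flow (the config path and flags are illustrative):

```yaml
- name: First try to join with preflight checks enforced
  command: timeout -k 60s 60s kubeadm join --config /etc/kubernetes/kubeadm-client.conf
  register: kubeadm_join_strict
  ignore_errors: true

- name: Surface the failure output so the cause reaches the user
  debug:
    msg: "{{ kubeadm_join_strict.stderr_lines }}"
  when: kubeadm_join_strict is failed

- name: Retry while ignoring preflight errors (previous behaviour)
  command: >-
    timeout -k 60s 60s kubeadm join
    --config /etc/kubernetes/kubeadm-client.conf
    --ignore-preflight-errors=all
  when: kubeadm_join_strict is failed
```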
* bootstrap: rework role
* support being called from a non-root user
* run some commands in check mode
* unify spelling/task names
* bootstrap: fix wording of comments for check_mode: false
* bootstrap: remove setup-pipelining task
* OCI subnet AD 2 is not required for CCM >= 0.7.0
Reorganize OCI provider to generate configuration rather than pull it
Add pull secret option to OCI cloud provider
* Updated oci example to document new parameters
This PR ensures that the e2fsprogs and xfsprogs packages are
installed on all Kubernetes nodes and that the packages are
the latest versions. It also ensures that the nodes can
create XFS filesystems when necessary, since not all distros
install xfsprogs by default.
e2fsprogs - ext2/ext3/ext4 file system utilities
xfsprogs - Utilities for managing the XFS filesystem
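A sketch of the corresponding task (module usage only):

```yaml
- name: Ensure filesystem utilities are installed and current
  package:
    name:
      - e2fsprogs
      - xfsprogs
    state: latest
```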
* Calico: Ability to define the default IPPool CIDR (instead of kube_pods_subnet)
* Documentation for calico_pool_cidr (and calico_advertise_cluster_ips, which had been forgotten...)
* Set cluster DNS correctly in case of nodelocal dns cache
* Pass in cluster_ip based on dns mode
* Disable nodelocaldns by default
* Fix syntax error
* Fix syntax issue
* Add nodelocaldns ip to vars of node installation
* Change location of nodelocaldns_ip
* Try to remove newlines from jinja template
* Add debug for config file
* Move parameter logic outside of template
* Adapt templates after feedback
* Remove debugging
Addressing the discussion started in #4064, this PR moves kubeadm and
hyperkube binaries to /usr/local/bin before running them on the master
nodes.
It addresses the case where local_release_dir points to /tmp
(kubespray default) and /tmp is mounted with the noexec option,
preventing any binaries from being run in that partition.
In role "node", we still move kubeadm to bin_dir only on the worker
nodes.
I know this is a bit of a hack.
If you use a cloud LB, you can use kubeadm's controlPlaneEndpoint to configure kube-proxy's server field.
But with nginx-proxy, it isn't started when `kubeadm init` runs.
Looks like `epel_enabled` was not configured for the epel install in `bootstrap-centos.yml`. Also, there were no conditionals that would trigger bootstrap for RHEL.
* Use external LB IP for external api endpoint
Use loadbalancer_apiserver.address instead of apiserver_loadbalancer_domain_name for the kubeadm init --apiserver-advertise-address argument
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options states apiserver-advertise-address needs to be an IPv4 or IPv6 address
* only use loadbalancer IP if it is defined
I found a potential use case where `writable` could be null and therefore
not treated like a boolean, so this adds an extra default statement to
avoid negating a non-boolean as a boolean, which would lead to undefined. refs #4020
Looks like the template is removing the trailing space between storage
class entries, and since CI only has one storage class we never hit this
issue. This change will prevent the yaml from printing on a single line
when multiple storage classes are defined.
In v1beta1 of `ClusterConfiguration` the extraVolumes `writable` field was changed to `readOnly` and its boolean value must be negated.
Also, the json field for `useHyperKubeImage` was incorrectly capitalized.
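For illustration, the template change amounts to something like this (the `volume` loop variable is hypothetical):

```yaml
extraVolumes:
  - name: "{{ volume.name }}"
    hostPath: "{{ volume.hostPath }}"
    mountPath: "{{ volume.mountPath }}"
    # v1beta1 renamed 'writable' to 'readOnly', so the value must be negated
    readOnly: "{{ not (volume.writable | default(false)) }}"
```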
Right now we're consistently getting warnings about kubelet not found in
path during `kubeadm init`. We fixed this for `kubeadm join` in #3342, and this brings the change to init
as well.
- Fixed an issue where storage class host directories were looped
over more target hosts than intended
- Fixes examples in the LVP `README.md` to use nested dicts instead of a
list of dicts
* Makes local volume provisioner more dynamic
* Correct variable name in local storage provisioner defaults
* Updates external-provisioner readme
* Updates variable naming to be more clear, more documentation, fixes sample inventory
* Variable refactor, untangled some jinja2 loops
* Corrects variable name
* No variable substitution in dict keys, replaced with anchor
* Fixes default storage_classes dict, inline docs
* Fixes spelling in inline docs
* Addresses comments in review
* Updates all the defaults
* Fix failing CI task
* Fixes external provisioner daemonset
* Allows overriding the bind addresses for controller-manager and scheduler (see the sketch after this list)
Useful for Prometheus metrics monitoring
* Add bind addr override support in kubeadm/v1beta1
Adds support for override of bind addresses for controller-manager
and scheduler in kubeadm/v1beta1
* Move location of bind address vars
* Remove double declaration of schedulerExtraArgs
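A sketch of the resulting kubeadm v1beta1 excerpt (variable names illustrative; flag names vary by component version):

```yaml
controllerManager:
  extraArgs:
    bind-address: "{{ kube_controller_manager_bind_address }}"
scheduler:
  extraArgs:
    address: "{{ kube_scheduler_bind_address }}"
```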
The change implemented in #3908 removed line breaks for supplementary
addresses in kubeadm SANs, causing errors in the config file and
failure to bring the cluster up. This commit reimplements line breaks
between supplementary addresses.
- Creates and defaults an ansible variable for every configuration option in the `kubeproxy.config.k8s.io/v1alpha1` type spec
- Fixes vars that were orphaned by the removal of non-kubeadm mode
- Fixes previously hardcoded kubeadm values
- Introduces a `main` directory for role default files per component (requires ansible 2.6.0+)
- Split out just `kube-proxy.yml` in this first effort
- Removes the kube-proxy server field patch task
We should continue to pull other components out of `main.yml` into their own defaults files, as I did here for `defaults/main/kube-proxy.yml`. I hope for, and will need, others to join me in this refactoring across the project until each component config template has a matching role defaults file, with shared defaults in `kubespray-defaults` or `downloads`.
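As a sketch, a per-component defaults file might look like this (variable names illustrative):

```yaml
# e.g. roles/.../defaults/main/kube-proxy.yml — one ansible var per field of
# the kubeproxy.config.k8s.io/v1alpha1 spec, consumed by the config template
kube_proxy_mode: iptables
kube_proxy_metrics_bind_address: 127.0.0.1:10249
kube_proxy_min_sync_period: 0s
```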