* Remove variables defined in download role. Fixes #3799
* Cleanup some more variables
* Fix bad templating
* Minor fix
* Add dashboard to download role. Fixes #3736
* Set configure-cloud-routes=false as default if no network plugin is used
The default value of configure-cloud-routes is `true`, so it needs to be set to `false` when not required, to avoid error messages like:
"Couldn't reconcile node routes: error listing routes: unable to find route table for AWS cluster"
on, for example, AWS installations that don't use cloud native routing.
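As a rough sketch, the resulting flag in `kube-controller-manager.manifest.j2` could be wired like this (the exact conditional used by the template is an assumption):

```yaml
# kube-controller-manager.manifest.j2 (excerpt, sketch only)
{% if kube_network_plugin is defined and kube_network_plugin == 'cloud' %}
    - --configure-cloud-routes=true
{% else %}
    - --configure-cloud-routes=false
{% endif %}
```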
* Update kube-controller-manager.manifest.j2
remove extra spaces
Introduced the variable `node_taints`, which can be set in the inventory for
specific hosts or in group_vars; it generates the `--register-with-taints`
command-line argument for kubelet.
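A minimal sketch of how this could be set per host or in group_vars (the taint itself is only an example):

```yaml
# inventory host_vars or group_vars, e.g. for dedicated ingress nodes
node_taints:
  - "node-role.kubernetes.io/ingress=:NoSchedule"
```

kubelet is then started with `--register-with-taints=node-role.kubernetes.io/ingress=:NoSchedule`.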
Introduced the variable `ingress_nginx_tolerations` to set custom
tolerations for the ingress-nginx daemonset, so that ingress-nginx can be
scheduled on dedicated nodes with taints.
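For example, a toleration matching the taint above could look like this (a sketch using the standard Kubernetes toleration fields):

```yaml
ingress_nginx_tolerations:
  - key: "node-role.kubernetes.io/ingress"
    operator: "Exists"
    effect: "NoSchedule"
```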
* Update defaults to match k8s 1.12 suggestions
* Test if Netchecker works with node ip instead of localhost
* Update defaults to ipvs and coredns
* Update defaults for kube_apiserver_insecure_port
* Update main.yaml
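In group_vars terms, the updated defaults amount to roughly the following (a sketch; the exact shipped values may differ):

```yaml
kube_proxy_mode: ipvs
dns_mode: coredns
# disable the insecure apiserver port, as recommended for k8s 1.12
kube_apiserver_insecure_port: 0
```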
When `ansible_user` is not root and the `-b` option is used, with
`download_run_once` and `download_localhost` set to `true`,
Ansible executes the `container_download | upload container images to nodes` task.
It uses rsync to upload images to `/tmp/release/container/`, but the
`container` directory is owned by `root`.
Now the `kubespray-aws-inventory.py` script always sets a node_labels key
for ansible_host.
Previously, when an AWS instance had no labels property set, node_labels
would be an empty string, and the task
`Write kubelet config file (kubeadm or non-kubeadm)` would fail with:
`AnsibleUndefinedVariable: 'unicode object' has no attribute 'items'`.
* Support Metrics Server as addon (#3560).
* Update metrics server v0.3.1.
* Add metrics server test.
* Replace metrics server manifests with kubernetes/cluster/addons's.
* Modify metrics server manifests for kubespray.
* Follow PR #3558 node label node-role.kubernetes.io/master change
* Fix metrics server parameters base_metrics_server_... to metrics_server_...
* Fix hard-coded metrics_server_memory_per_node
* Add configurable insecure tls for metrics-apiservice
* Make addon-resizer downloadable and extract parameters as variables
* Remove metrics server version from deployment name
* Make Metrics Server work when all masters have the node role
* Download metrics-server and addon-resizer containers only on masters
* Separate ServiceAccount and ConfigMap and fix the application name
* Remove old metrics server clusterrole template
* Fix addon-resizer image specification
* Make InternalIP default for metrics_server_kubelet_preferred_address_types
Make InternalIP the default because multiple preferred address types do not work.
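Putting the metrics-server settings above together, a group_vars sketch could look like this (the variable names other than `metrics_server_kubelet_preferred_address_types` are assumptions based on the bullets above):

```yaml
metrics_server_enabled: true
metrics_server_kubelet_insecure_tls: true
metrics_server_kubelet_preferred_address_types: "InternalIP"
```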
A comparison that happens during `TASK [kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template]`, where the `dns-autoscaler` template is deployed, causes CoreDNS deployment to fail. The error is caused by the variable `dns_prevent_single_point_failure`, where an integer is compared with a string. The resulting error:
```bash
'>' not supported between instances of 'int' and 'str'
```
prevents successful deployment of CoreDNS.
The change makes the comparison happen between integers and allows CoreDNS to succeed.
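A minimal sketch of the kind of cast involved (the surrounding expression and the `dns_min_replicas` name are assumptions):

```yaml
# Cast to int before comparing, so Jinja2 on Python 3 does not raise
# "'>' not supported between instances of 'int' and 'str'"
dns_prevent_single_point_failure: "{{ 'true' if dns_min_replicas | int > 1 else 'false' }}"
```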
* Enable AutoScaler for CoreDNS
* Only use one template for dns autoscaler
* Rename a few variables for replicas and minimum pods
* Rename a few variables for replicas and minimum pods
* Remove replicas to make autoscale work
* Cleanup kubedns-autoscaler as it has been renamed
Add prometheus annotations to calico-node if
calico_felix_prometheusmetricsenabled is enabled.
This allows a kubernetes_sd config to automatically find the pods and start
scraping.
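A sketch of the annotations added to the calico-node pod template (the port variable name is an assumption):

```yaml
{% if calico_felix_prometheusmetricsenabled %}
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ calico_felix_prometheusmetricsport }}"
{% endif %}
```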
* Fix `Failure talking to yum: Cannot find a valid baseurl for repo: base/7/x86_64` when installing packages on CentOS through a proxy
* Add proxy to /etc/yum.conf if http_proxy is defined
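A minimal Ansible sketch of that behaviour, assuming the usual `http_proxy` variable:

```yaml
- name: Add proxy to /etc/yum.conf if http_proxy is defined
  ini_file:
    path: /etc/yum.conf
    section: main
    option: proxy
    value: "{{ http_proxy }}"
  become: true
  when:
    - http_proxy is defined
    - ansible_os_family == "RedHat"
```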
* Added changes to clean up orphan containers and reload docker & kubelet directories.
* Added new files for cleaning up orphans and docker & kubelet directories
* Added new lines at the end of these files
* removed the trailing whitespaces from main.yml and clean-up.yml
* Updated as per the review comments
* Updated as per the review comments
* Removed service_facts and package_facts because they are not supported in ansible 2.4.0
* Corrected yaml syntax errors
* Removed the use of json_query filter and utilized selectattr
* Removed trailing spaces
* Changed the default value of docker_clean_up to false
* Added Changes to only include cleanup-docker-orphans.sh
* Reverted back changes done inside handler.
* Removed trailing spaces and changed the default value of docker_orphan_clean_up to true
* Reverted the default value of docker_orphan_clean_up to false
* Made the docker clean up as drop in
* Made the docker clean up as drop in
* Reverted the value of boolean docker_orphan_clean_up to false
* Converted ExecStop to ExecStartPost. Removed the live-restore check from the orphan script
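A rough sketch of the resulting drop-in approach (file names, the script path, and the handler are assumptions; the cleanup only applies when `docker_orphan_clean_up` is enabled):

```yaml
- name: Docker | Create systemd drop-in directory
  file:
    path: /etc/systemd/system/docker.service.d
    state: directory
    mode: "0755"

- name: Docker | Install orphan-cleanup drop-in (runs after docker starts)
  copy:
    dest: /etc/systemd/system/docker.service.d/docker-orphan-cleanup.conf
    content: |
      [Service]
      ExecStartPost=/bin/sh /etc/docker/cleanup-docker-orphans.sh
  when: docker_orphan_clean_up | bool
  notify: restart docker
```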
* Adds support for Multus (multiple interfaces) CNI plugin
Multus is the Latin word for "multi". As the name suggests, it acts as a
multi-plugin in Kubernetes and provides multiple network interface
support in a pod. Multus uses the concept of invoking delegates: multiple
plugins are grouped into delegates and invoked in the sequential order of
the CNI configuration file, which is provided in JSON format.
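Purely to illustrate the delegate idea described above, a sketch of the delegate structure (shown as YAML for readability; the file Multus actually reads is the JSON equivalent, field names vary between Multus versions, and the delegate entries are placeholders):

```yaml
name: multus-cni-network
type: multus
kubeconfig: /etc/cni/net.d/multus.d/multus.kubeconfig
delegates:
  - type: calico      # primary cluster network, invoked first
  - type: macvlan     # additional interface, invoked next
    master: eth1
```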
* Change CNI version (0.1.0->0.3.1) of Contiv to be compatible with Multus
When using resolvconf_mode host_resolvconf, there is an early DNS
config stage where Kubernetes cluster DNS is not injected for host
DNS initially. Later, the cluster DNS is enabled, but we do not
need to run every task from the kubernetes/preinstall role.
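A sketch of how the later, DNS-only invocation of the role could look in the playbook (the `dns_late` flag name is an assumption):

```yaml
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall,
        when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'",
        tags: resolvconf,
        dns_late: true }
```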