c12s-kubespray/roles/kubernetes/node/templates/kubelet.kubeadm.env.j2

### Upstream source https://github.com/kubernetes/release/blob/master/debian/xenial/kubeadm/channel/stable/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
### All upstream values should be present in this file
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v={{ kube_log_level }}"
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address={{ kubelet_bind_address }} --node-ip={{ kubelet_address }}"
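{# Example (assumed values, not set by this template): with
   kubelet_bind_address=0.0.0.0 and kubelet_address=10.0.0.10, the rendered
   line would be:
   KUBELET_ADDRESS="--address=0.0.0.0 --node-ip=10.0.0.10" #}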
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
{% if kube_override_hostname %}
KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
{% endif %}
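{# Example (hypothetical inventory value): with kube_override_hostname=node1,
   the rendered line would be:
   KUBELET_HOSTNAME="--hostname-override=node1" #}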
{# Base kubelet args #}
{% set kubelet_args_base -%}
{# start kubeadm specific settings #}
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig={{ kube_config_dir }}/kubelet.conf \
--require-kubeconfig \
--authorization-mode=Webhook \
--client-ca-file={{ kube_cert_dir }}/ca.crt \
--pod-manifest-path={{ kube_manifest_dir }} \
--cadvisor-port={{ kube_cadvisor_port }} \
{# end kubeadm specific settings #}
--pod-infra-container-image={{ pod_infra_image_repo }}:{{ pod_infra_image_tag }} \
--node-status-update-frequency={{ kubelet_status_update_frequency }} \
--cgroup-driver={{ kubelet_cgroup_driver|default(kubelet_cgroup_driver_detected) }} \
--docker-disable-shared-pid={{ kubelet_disable_shared_pid }} \
--anonymous-auth=false \
{% if kube_version | version_compare('v1.8', '<') %}
--experimental-fail-swap-on={{ kubelet_fail_swap_on|default(true) }} \
{% else %}
--fail-swap-on={{ kubelet_fail_swap_on|default(true) }} \
{% endif %}
{% endset %}
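{# Illustrative rendering of the kubeadm-specific portion above, assuming
   typical values (kube_config_dir=/etc/kubernetes,
   kube_cert_dir=/etc/kubernetes/ssl, kube_manifest_dir=/etc/kubernetes/manifests
   -- all assumptions, not pinned by this template):
   --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
   --kubeconfig=/etc/kubernetes/kubelet.conf \
   --client-ca-file=/etc/kubernetes/ssl/ca.crt \
   --pod-manifest-path=/etc/kubernetes/manifests \
   ... #}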
{# Node reserved CPU/memory #}
{% if is_kube_master|bool %}
{% set kubelet_reserve %}--kube-reserved cpu={{ kubelet_master_cpu_limit }},memory={{ kubelet_master_memory_limit|regex_replace('Mi', 'M') }}{% endset %}
{% else %}
{% set kubelet_reserve %}--kube-reserved cpu={{ kubelet_cpu_limit }},memory={{ kubelet_memory_limit|regex_replace('Mi', 'M') }}{% endset %}
{% endif %}
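{# Example (assumed values): on a master with kubelet_master_cpu_limit=200m and
   kubelet_master_memory_limit=512Mi, regex_replace rewrites the Mi suffix to M
   and kubelet_reserve renders as:
   --kube-reserved cpu=200m,memory=512M #}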
{# DNS settings for kubelet #}
{% if dns_mode == 'kubedns' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ skydns_server }}{% endset %}
{% elif dns_mode == 'dnsmasq_kubedns' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ dnsmasq_dns_server }}{% endset %}
{% else %}
{% set kubelet_args_cluster_dns %}{% endset %}
{% endif %}
{% set kubelet_args_dns %}{{ kubelet_args_cluster_dns }} --cluster-domain={{ dns_domain }} --resolv-conf={{ kube_resolv_conf }}{% endset %}
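{# Example (assumed values): with dns_mode=kubedns, skydns_server=10.233.0.3,
   dns_domain=cluster.local and kube_resolv_conf=/etc/resolv.conf,
   kubelet_args_dns renders as:
   --cluster-dns=10.233.0.3 --cluster-domain=cluster.local --resolv-conf=/etc/resolv.conf #}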
KUBELET_ARGS="{{ kubelet_args_base }} {{ kubelet_args_dns }} {{ kubelet_reserve }}"
{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave", "contiv"] %}
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
{% elif kube_network_plugin is defined and kube_network_plugin == "cloud" %}
KUBELET_NETWORK_PLUGIN="--hairpin-mode=promiscuous-bridge --network-plugin=kubenet"
{% endif %}
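{# Example: every CNI-based plugin (e.g. kube_network_plugin=calico) takes the
   first branch above, so the rendered line is the same for all of them:
   KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" #}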
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
{% if cloud_provider is defined and cloud_provider in ["openstack", "azure", "vsphere"] %}
KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }} --cloud-config={{ kube_config_dir }}/cloud_config"
{% elif cloud_provider is defined and cloud_provider == "aws" %}
KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }}"
{% else %}
KUBELET_CLOUDPROVIDER=""
{% endif %}
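{# Example (assumed values): with cloud_provider=openstack and
   kube_config_dir=/etc/kubernetes, the rendered line would be:
   KUBELET_CLOUDPROVIDER="--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config" #}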
PATH={{ bin_dir }}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin