c12s-kubespray/roles/kubernetes/master/templates/manifests/kube-controller-manager.manifest.j2
Bogdan Dobrelya 48e77cd8bb Drop linux capabilities and rework users/groups
* Drop linux capabilities for unprivileged containerized
  workloads Kargo configures for deployments (the group vars
  involved are sketched below).
* Configure the required securityContext/user/group/groups for kube
  components' static manifests, etcd, calico-rr and k8s apps,
  like the dnsmasq daemonset.
* Rework cloud-init (etcd) users creation for CoreOS.
* Fix nologin paths, adjust defaults for the addusers role and ensure
  supplementary group membership is added for users.
* Add a netplug user for network plugins (not yet used by privileged
  networking containers).
* Grant the kube and netplug users read access for etcd certs via
  the etcd certs group.
* Grant group read access to kube certs via the kube cert group.
* Remove privileged mode for calico-rr and run it under its own uid/gid
  and the supplementary etcd_cert group.
* Adjust docs.
* Align cpu/memory limits and dropped caps with the added rkt support
  for the control plane.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-20 08:50:42 +01:00
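
A minimal sketch of the group vars this template consumes follows. The variable
names are taken from the manifest below; the values are purely illustrative
placeholders, not the defaults shipped by the role.

# Illustrative values only -- the real defaults live in the roles' defaults,
# not here; adjust to your environment.
kubelet_user_id: 997                # uid the controller-manager runs as
kubelet_group_id: 994               # fsGroup applied to the container
kube_cert_group_id: 998             # group with read access to kube certs
etcd_cert_group_id: 999             # group with read access to etcd certs
apps_drop_cap:                      # linux capabilities dropped for unprivileged workloads
  - all
kube_controller_cpu_limit: 250m
kube_controller_memory_limit: 512M
kube_controller_cpu_requests: 100m
kube_controller_memory_requests: 100M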


apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: {{system_namespace}}
  labels:
    k8s-app: kube-controller
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: {{ hyperkube_image_repo }}:{{ hyperkube_image_tag }}
    imagePullPolicy: {{ k8s_image_pull_policy }}
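    # Run unprivileged: a fixed uid/gid plus the kube and etcd cert groups as
    # supplemental groups give read access to the certificates, and every linux
    # capability listed in apps_drop_cap is dropped.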
    securityContext:
      runAsUser: {{ kubelet_user_id }}
      fsGroup: {{ kubelet_group_id }}
      supplementalGroups:
      - {{ kube_cert_group_id }}
      - {{ etcd_cert_group_id }}
      capabilities:
        drop:
{% for c in apps_drop_cap %}
        - {{ c.upper() }}
{% endfor %}
    resources:
      limits:
        cpu: {{ kube_controller_cpu_limit }}
        memory: {{ kube_controller_memory_limit }}
      requests:
        cpu: {{ kube_controller_cpu_requests }}
        memory: {{ kube_controller_memory_requests }}
    command:
    - /hyperkube
    - controller-manager
    - --master={{ kube_apiserver_endpoint }}
    - --leader-elect=true
    - --service-account-private-key-file={{ kube_cert_dir }}/apiserver-key.pem
    - --root-ca-file={{ kube_cert_dir }}/ca.pem
    - --cluster-signing-cert-file={{ kube_cert_dir }}/ca.pem
    - --cluster-signing-key-file={{ kube_cert_dir }}/ca-key.pem
    - --enable-hostpath-provisioner={{ kube_hostpath_dynamic_provisioner }}
    - --v={{ kube_log_level }}
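    # openstack/azure additionally need the cloud_config file mounted below;
    # aws only needs the provider flag. With the 'cloud' network plugin the
    # cloud provider allocates node CIDRs and programs routes.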
{% if cloud_provider is defined and cloud_provider in ["openstack", "azure"] %}
    - --cloud-provider={{cloud_provider}}
    - --cloud-config={{ kube_config_dir }}/cloud_config
{% elif cloud_provider is defined and cloud_provider == "aws" %}
    - --cloud-provider={{cloud_provider}}
{% endif %}
{% if kube_network_plugin is defined and kube_network_plugin == 'cloud' %}
    - --allocate-node-cidrs=true
    - --configure-cloud-routes=true
    - --cluster-cidr={{ kube_pods_subnet }}
{% endif %}
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 30
      timeoutSeconds: 10
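    # Host certificates are mounted read-only; the cloud_config mount exists
    # only when an openstack/azure cloud provider is configured, matching the
    # command-line flags above.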
    volumeMounts:
    - mountPath: {{ kube_cert_dir }}
      name: ssl-certs-kubernetes
      readOnly: true
{% if cloud_provider is defined and cloud_provider in ["openstack", "azure"] %}
    - mountPath: {{ kube_config_dir }}/cloud_config
      name: cloudconfig
      readOnly: true
{% endif %}
  volumes:
  - hostPath:
      path: {{ kube_cert_dir }}
    name: ssl-certs-kubernetes
{% if cloud_provider is defined and cloud_provider in ["openstack", "azure"] %}
  - hostPath:
      path: {{ kube_config_dir }}/cloud_config
    name: cloudconfig
{% endif %}
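
For reference, rendering the securityContext block with the illustrative values
sketched above would give roughly the following (a hand-rendered sketch, not
output captured from a cluster):

    securityContext:
      runAsUser: 997
      fsGroup: 994
      supplementalGroups:
      - 998
      - 999
      capabilities:
        drop:
        - ALL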