c12s-kubespray/roles/kubernetes/master/defaults/main.yml
Bogdan Dobrelya 48e77cd8bb Drop linux capabilities and rework users/groups
* Drop Linux capabilities for the unprivileged containerized
  workloads Kargo configures for deployments.
* Configure the required securityContext/user/group/groups for kube
  components' static manifests, etcd, calico-rr and k8s apps,
  such as the dnsmasq daemonset.
* Rework cloud-init (etcd) user creation for CoreOS.
* Fix nologin paths, adjust defaults for the addusers role and ensure
  supplementary group membership is added for users.
* Add a netplug user for network plugins (not yet used by the
  privileged networking containers, though).
* Grant the kube and netplug users read access for etcd certs via
  the etcd certs group.
* Grant group read access to kube certs via the kube cert group.
* Remove privileged mode for calico-rr and run it under its uid/gid
  and the supplementary etcd_cert group.
* Adjust docs.
* Align cpu/memory limits and dropped caps with added rkt support
  for control plane.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-20 08:50:42 +01:00
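
As a hedged illustration of the securityContext changes described above (the
uid/gid values here are hypothetical, not taken from the actual templates), a
component such as calico-rr would gain something like:

    securityContext:
      runAsUser: 995              # hypothetical calico-rr uid
      supplementalGroups: [994]   # hypothetical etcd_cert gid, for cert read access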

# An experimental, dev/test-only dynamic volume provisioner for PetSets.
# Works for kube>=v1.3 only.
kube_hostpath_dynamic_provisioner: "false"
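# Illustrative only: when enabled, this plausibly maps to the
# controller-manager's --enable-hostpath-provisioner flag in its templated
# manifest (a sketch; the role's actual wiring may differ):
#   {% if kube_hostpath_dynamic_provisioner == "true" %}
#   - --enable-hostpath-provisioner
#   {% endif %}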
# Change to 0.0.0.0 to enable insecure access from anywhere (not recommended)
kube_apiserver_insecure_bind_address: 127.0.0.1
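# Illustrative sketch: this address is typically rendered into the apiserver
# static manifest as a command-line flag (the exact template may differ):
#   - --insecure-bind-address={{ kube_apiserver_insecure_bind_address }}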
# A port range to reserve for services with NodePort visibility.
# Inclusive at both ends of the range.
kube_apiserver_node_port_range: "30000-32767"
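# Sketch only: this range plausibly feeds the apiserver's
# --service-node-port-range flag (the exact template may differ):
#   - --service-node-port-range={{ kube_apiserver_node_port_range }}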
# ETCD cert dir for connecting apiserver to etcd
etcd_config_dir: /etc/ssl/etcd
etcd_cert_dir: "{{ etcd_config_dir }}/ssl"
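# Illustrative sketch: the apiserver typically consumes these paths through
# its etcd TLS flags; the file names below are assumptions, not the role's
# actual template values:
#   - --etcd-cafile={{ etcd_cert_dir }}/ca.pem
#   - --etcd-certfile={{ etcd_cert_dir }}/node.pem
#   - --etcd-keyfile={{ etcd_cert_dir }}/node-key.pem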
# Linux capabilities to be dropped for k8s apps run by container engines
# (see the sketch after this list)
apps_drop_cap:
- chown
- dac_override
- fowner
- fsetid
- kill
- setgid
- setuid
- setpcap
- sys_chroot
- mknod
- audit_write
- setfcap
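# A hedged sketch of how a manifest template could consume the list above
# (the upper-casing and securityContext placement are assumptions; the real
# templates may differ):
#   securityContext:
#     capabilities:
#       drop:
#       {% for cap in apps_drop_cap %}
#       - {{ cap | upper }}
#       {% endfor %}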
# Resource limits and requests for kube components
kube_controller_memory_limit: 512M
kube_controller_cpu_limit: 250m
kube_controller_memory_requests: 170M
kube_controller_cpu_requests: 100m
kube_scheduler_memory_limit: 512M
kube_scheduler_cpu_limit: 250m
kube_scheduler_memory_requests: 170M
kube_scheduler_cpu_requests: 100m
kube_apiserver_memory_limit: 2000M
kube_apiserver_cpu_limit: 800m
kube_apiserver_memory_requests: 256M
kube_apiserver_cpu_requests: 300m
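# Sketch only: each component's static manifest would typically render a
# resources block from the values above, e.g. for the scheduler (the field
# layout is an assumption, not the role's actual template):
#   resources:
#     limits:
#       cpu: {{ kube_scheduler_cpu_limit }}
#       memory: {{ kube_scheduler_memory_limit }}
#     requests:
#       cpu: {{ kube_scheduler_cpu_requests }}
#       memory: {{ kube_scheduler_memory_requests }}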