This follows pull request #1677, adding cgroup-driver
autodetection for the kubeadm way of deploying as well.
Info about this, and the possibility to override it, has been added to the docs.
Red Hat family platforms run the docker daemon with `--exec-opt
native.cgroupdriver=systemd`. When kubespray tried to start the kubelet
service, it failed with:
Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Setting kubelet's cgroup driver to the correct value for the platform
fixes this issue. The code utilizes autodetection of docker's cgroup
driver, as different RPMs for the same distro may vary in that regard.
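A minimal sketch of how that autodetection can be wired up as an Ansible task, assuming the docker CLI is already installed on the node; the task and variable names here are illustrative, not kubespray's exact ones:

```yaml
# Ask the running docker daemon for its cgroup driver and reuse the answer
# for the kubelet's --cgroup-driver flag.
- name: Check which cgroup driver the docker daemon uses
  shell: >-
    docker info 2>/dev/null | grep -i 'cgroup driver' | awk -F':' '{print $2}' | tr -d ' '
  register: docker_cgroup_driver_result
  changed_when: false

- name: Align the kubelet cgroup driver with docker
  set_fact:
    kubelet_cgroup_driver: "{{ docker_cgroup_driver_result.stdout }}"
# The kubelet is then started with --cgroup-driver={{ kubelet_cgroup_driver }}.
```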
New files:
  /etc/kubernetes/admin.conf
  /root/.kube/config
  $GITDIR/artifacts/{kubectl,admin.conf}
An optional method to download kubectl and admin.conf is added if
kubeconfig_localhost is set to true (default false).
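A minimal sketch of the optional download, assuming the task runs against the first master; the destination path mirrors the listing above, and kubectl can be fetched back the same way:

```yaml
# Pull admin.conf back to the machine running Ansible only when requested.
- name: Fetch admin.conf to the Ansible host
  fetch:
    src: /etc/kubernetes/admin.conf
    dest: "{{ playbook_dir }}/artifacts/admin.conf"
    flat: yes
  when: kubeconfig_localhost | default(false)
```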
* kubeadm support
* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
* change ci boolean vars to json format
* fixup
* Update create-gce.yml
* Update create-gce.yml
* Update create-gce.yml
* Fix netchecker update side effect
kubectl apply should only be used on resources created
with kubectl apply. To work around this, we apply
the old manifest before upgrading it.
* Update 030_check-network.yml
This sets the br_netfilter module and the net.bridge.bridge-nf-call-iptables sysctl from a single play before kube-proxy is first run, instead of from the flannel and weave network_plugin roles after kube-proxy has started.
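A rough sketch of that single play, using the stock Ansible modules; the handling of hosts where br_netfilter is built into the kernel rather than available as a module is kept deliberately simple here:

```yaml
# Load br_netfilter (ignore hosts where it is compiled in) and flip the
# bridge sysctl before kube-proxy starts for the first time.
- name: Load br_netfilter module
  modprobe:
    name: br_netfilter
    state: present
  ignore_errors: true

- name: Enable net.bridge.bridge-nf-call-iptables
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
    reload: yes
```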
The uploads.yml playbook was broken with checksum mismatch errors in
various kubespray commits, for example 3bfad5ca73,
which updated the version from 3.0.6 to 3.0.17 without updating the
corresponding checksums.
* use separate vault roles for generating certs with different `O` (Organization) subject fields;
* configure vault roles for issuing certificates with different `CN` (Common Name) subject fields;
* set the `CN` and `O` fields on the `kubernetes` and `etcd` certificates;
* simplify the vault/defaults vars definition;
* define vault dirs variables in the kubernetes-defaults role for using
shared tasks in the etcd and kubernetes/secrets roles;
* upgrade vault to 0.8.1;
* generate random vault user password for each role by default;
* fix `serial` file name for vault certs;
* move vault auth request to issue_cert tasks (see the sketch after this list);
* enable `RBAC` in vault CI;
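A rough sketch of what such an issue_cert request against Vault's PKI API looks like from Ansible; the mount path, role name, group name and CN below are illustrative, and the `O` field is assumed to be fixed on the PKI role rather than sent with the request:

```yaml
# Issue a certificate from a Vault PKI role; the role carries the O
# (Organization) subject field, while the CN comes from the request itself.
- name: Issue an etcd certificate from vault
  uri:
    url: "https://{{ groups['vault'] | first }}:8200/v1/etcd/issue/member"
    method: POST
    headers:
      X-Vault-Token: "{{ vault_token }}"
    body_format: json
    body:
      common_name: "etcd:{{ inventory_hostname }}"
      ip_sans: "{{ ansible_default_ipv4.address }}"
    validate_certs: false
  register: etcd_cert_issue_result
```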
* Use kubectl apply instead of create/replace
Disable checks for existing resources to speed up execution.
* Fix non-rbac deployment of resources as a list
* Fix autoscaler tolerations field
* set all kube resources to state=latest
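A short sketch of what state=latest looks like with kubespray's kube module wrapper, assuming that module is on the library path; the resource, namespace and file names are illustrative:

```yaml
# state=latest applies the manifest with "kubectl apply" instead of
# create/replace, so an existing resource is updated in place.
- name: Apply netchecker-server deployment
  kube:
    name: netchecker-server
    kind: deployment
    namespace: kube-system
    filename: /etc/kubernetes/netchecker-server-deployment.yml
    state: latest
  run_once: true
```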
* Update netchecker and weave
* Added update CA trust step for etcd and kube/secrets roles
* Added load_balancer_domain_name to certificate alt names if defined. Reset CAs on RedHat OS.
* Rename kube-cluster-ca.crt to vault-ca.crt; we need separate CAs for vault, etcd and kube.
* Vault role refactoring: remove optional cert vault auth because it was not used and did not work. Create separate CAs for vault and etcd.
* Fixed different certificate sets for vault cert_management
* Update doc/vault.md
* Fixed condition for creating the vault CA (wrong group)
* Fixed missing etcd_cert_path mount for the rkt deployment type. Distribute vault roles to all vault hosts
* Removed wrong when condition in create etcd role vault tasks.
* Updates Controller Manager/Kubelet with Flannel's required configuration for CNI (see the flag sketch after this list)
* Removes old Flannel installation
* Install CNI enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with portmap plugin) on host
* Uses RBAC if enabled
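A hedged sketch of the flags this implies for the two components; the variable names are illustrative, while kube_pods_subnet is kubespray's existing pod CIDR variable:

```yaml
# The kubelet is pointed at the CNI config/bin dirs; the controller manager
# allocates per-node pod CIDRs that flannel's CNI plugin then consumes.
kubelet_cni_flags: >-
  --network-plugin=cni
  --cni-conf-dir=/etc/cni/net.d
  --cni-bin-dir=/opt/cni/bin
controller_manager_cidr_flags: >-
  --allocate-node-cidrs=true
  --cluster-cidr={{ kube_pods_subnet }}
```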
* Fixed an issue that could occur if br_netfilter is not a module and net.bridge.bridge-nf-call-iptables sysctl was not set
* Adding yaml linter to ci check
* Minor linting fixes from yamllint
* Changing CI to install python pkgs from requirements.txt
- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
If Kubernetes > 1.6, register standalone master nodes with a
node-role.kubernetes.io/master=:NoSchedule taint to allow
for more flexible scheduling, rather than just marking them unschedulable.
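A minimal sketch of setting that taint at registration time through the kubelet, assuming Kubernetes 1.6 or newer; the variable name is illustrative:

```yaml
# Standalone masters register themselves tainted instead of unschedulable;
# system pods that still need to run there carry a matching toleration.
kubelet_node_taints:
  - "node-role.kubernetes.io/master=:NoSchedule"
# passed to the kubelet as:
#   --register-with-taints={{ kubelet_node_taints | join(',') }}
```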
Change kubelet deploy mode to host
Enable CRI and QoS per cgroup for the kubelet (see the flag sketch below)
Update CoreOS images
Add an upgrade hook for switching kubelet deployment from docker to host.
Bump machine type for ubuntu-rkt-sep
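The kubelet flags behind the CRI and QoS-per-cgroup change, for the Kubernetes versions of that era, would look roughly like this; the variable name is illustrative:

```yaml
# Enable the CRI code path and per-QoS-class cgroups on the kubelet.
kubelet_cri_qos_flags: >-
  --enable-cri=true
  --cgroups-per-qos=true
```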
* Added custom ips to etcd vault distributed certificates
* Added custom ips to kube-master vault distributed certificates
* Added comment about issue_cert_copy_ca var in vault/issue_cert role file
* Generate kube-proxy, controller-manager and scheduler certificates by vault
* Revert "Disable vault from CI (#1546)"
This reverts commit 781f31d2b8.
* Fixed upgrade cluster with vault cert manager
* Remove vault dir in reset playbook
* Bump tag for upgrade CI, fix netchecker upgrade
netchecker-server was changed from a pod to a deployment, so
we need an upgrade hook for it.
CI now uses v2.1.1 as the basis for upgrade.
* Fix upgrades for certs from non-rbac to rbac
This does not address per-node certs and scheduler/proxy/controller-manager
component certs, which are now required. This should be handled in a
follow-up patch.
Make fluentd.conf a ConfigMap so the configuration can be changed.
Change the elasticsearch rc to a deployment.
If the previous elasticsearch was installed as an rc, it must be deleted first.
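A hedged sketch of that pre-upgrade cleanup, assuming kubectl lives under bin_dir as elsewhere in kubespray; the rc name and namespace are illustrative:

```yaml
# A Deployment cannot adopt resources created as a ReplicationController,
# so the old rc is removed before the new manifests are applied.
- name: Delete the legacy elasticsearch ReplicationController
  command: >-
    {{ bin_dir }}/kubectl delete rc elasticsearch-logging
    --namespace=kube-system --ignore-not-found=true
  run_once: true
```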
* Make yum repos used for installing docker rpms configurable
* TasksMax is only supported in systemd version >= 226
* A change to the systemd unit file should restart docker
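A rough sketch of how those three items fit together in Ansible; the repo URL variables, template and handler names are illustrative:

```yaml
# A configurable yum repo for the docker rpms, plus a systemd unit rendered
# from a template; any change to the unit file triggers a docker restart.
- name: Configure docker yum repo
  yum_repository:
    name: docker
    description: Docker repo
    baseurl: "{{ docker_rh_repo_base_url }}"
    gpgkey: "{{ docker_rh_repo_gpgkey }}"
  when: ansible_os_family == "RedHat"

- name: Write docker systemd unit
  template:
    src: docker.service.j2
    dest: /etc/systemd/system/docker.service
  notify: restart docker
# In the template, TasksMax is guarded by the systemd version check:
#   {% if systemd_version | int >= 226 %}TasksMax=infinity{% endif %}
```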