I thought this wasn't required at first, but I forgot there's no automatic handler flush at the end of these tasks, since the `kubernetes/master` role is not the end of the play (see the sketch below).
* Fixes an issue where apiserver and friends (controller manager, scheduler) were prevented from restarting after manifests/secrets are changed. This occurred when a replaced kubelet didn't reconcile the new master manifests, which caused old master component versions to linger during the deployment. In my case this was causing upgrades from k8s 1.6/1.7 -> k8s 1.8 to fail
* Improves transitions from kubelet container to host kubelet by preventing issues where the kubelet container reappeared during the deployment
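A minimal sketch of the kind of task that forces the queued handlers to run mid-play (the task name is illustrative):

```yaml
# Run any queued handlers (e.g. master component / kubelet restarts) now,
# instead of waiting for the end of the play that includes kubernetes/master.
- name: Master | flush handlers so changed manifests/secrets take effect
  meta: flush_handlers
```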
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:
1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks (sketched after this list)
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.
Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.
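For reference, a hedged sketch of what the updated check ends up looking like; the cert file names and the `kube_cert_dir`/`kube_apiserver_port`/`ip` variables are assumptions along kubespray's conventions:

```yaml
- name: Wait for apiserver up
  uri:
    url: "https://{{ ip | default(ansible_default_ipv4.address) }}:{{ kube_apiserver_port }}/healthz"
    validate_certs: no
    client_cert: "{{ kube_cert_dir }}/node-{{ inventory_hostname }}.pem"       # assumed cert name
    client_key: "{{ kube_cert_dir }}/node-{{ inventory_hostname }}-key.pem"    # assumed key name
  register: result
  until: result.status == 200
  retries: 20
  delay: 6
```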
Options:
1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this; it adds a new dependency on basic auth, which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)
Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
* Change deprecated vagrant ansible flag 'sudo' to 'become'
* Workaround ansible bug where accessing a var via the vars dict doesn't return the real value
When accessing a variable via its name "{{ foo }}", its value is
retrieved. But when the value is retrieved via the vars dict
"{{ vars['foo'] }}", the variable's expression is no longer resolved
due to a bug. So e.g. an expression foo="{{ 1 == 1 }}" is no longer
resolved but just returned as the string "1 == 1".
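A minimal illustration of the behaviour (the variable name `foo` is just an example):

```yaml
# On affected Ansible versions, the second debug prints the raw template
# string instead of the evaluated expression.
- hosts: localhost
  gather_facts: false
  vars:
    foo: "{{ 1 == 1 }}"
  tasks:
    - debug:
        msg: "{{ foo }}"           # -> True (expression is resolved)
    - debug:
        msg: "{{ vars['foo'] }}"   # -> "1 == 1" (not resolved, due to the bug)
```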
* Make file yamllint compliant
Some time ago I think the hardcoded `/var/lib/docker` was required, but kubelet running in a container has been aware of the Docker path since at least as far back as k8s 1.6.
Without this change, you see a large number of errors in the kubelet logs if you installed with a non-default `docker_daemon_graph`
* Refactor downloads to use download role directly
Also disable fact delegation so download delegate works across OSes.
* clean up bools and ansible_os_family conditionals
* Update main.yml
resolv.conf needs to be set up before the Yum cache is updated, otherwise no name resolution is available (resolv.conf is empty).
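A rough sketch of the ordering this implies (task names and the resolv.conf template path are illustrative):

```yaml
- name: Configure resolv.conf so the package mirrors can be resolved
  template:
    src: resolv.conf.j2
    dest: /etc/resolv.conf

- name: Update Yum cache once name resolution works
  command: yum makecache
```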
* Update main.yml
Removing trailing spaces
* Add possibility to insert more IP addresses in certificates
* Add newline at end of files
* Move supplementary IP parameters to the k8s-cluster group file (see the sketch below)
* Add supplementary addresses in kubeadm master role
* Improve openssl indexes
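A sketch of the group file entry, assuming the variable is named `supplementary_addresses_in_ssl_keys`; the listed addresses are placeholders:

```yaml
# inventory group_vars/k8s-cluster.yml (illustrative values)
# Extra IPs/hostnames to include in the apiserver certificate SANs,
# e.g. an external load balancer address.
supplementary_addresses_in_ssl_keys:
  - 10.0.0.5
  - 192.168.1.100
  - kubernetes.example.com
```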
* don't try to install this rpm on fedora atomic
* add docker 1.13.1 for fedora
* built-in docker unit file is sufficient, as tested on both fedora and centos atomic
* Change file used to check kubeadm upgrade method
Test for ca.crt instead of admin.conf because admin.conf
is created during normal deployment.
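A minimal sketch of the check (the exact path and fact name are assumptions):

```yaml
- name: Check if kubeadm has already created the cluster CA
  stat:
    path: "{{ kube_cert_dir | default('/etc/kubernetes/ssl') }}/ca.crt"
  register: kubeadm_ca

# Choose between the initial `kubeadm init` and the upgrade path based on the result.
- name: Record whether kubeadm has already run
  set_fact:
    kubeadm_already_run: "{{ kubeadm_ca.stat.exists }}"
```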
* more fixes for upgrade
* Rename dns_server to dnsmasq_dns_server so that it includes the role prefix,
as the generic var name conflicts when integrating with existing ansible automation.
* Enable selinux state to be configurable with new var preinstall_selinux_state
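A sketch of how the new var can be consumed; the default value shown is an assumption:

```yaml
# Defaults to 'permissive' unless overridden via preinstall_selinux_state.
- name: Set SELinux state
  selinux:
    policy: targeted
    state: "{{ preinstall_selinux_state | default('permissive') }}"
  when: ansible_os_family == "RedHat"
```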
PID namespace sharing is disabled only in Kubernetes 1.7.
Explicitly enabling it by default could help reduce unexpected
results when upgrading to or downgrading from 1.7.
This follows pull request #1677, adding the cgroup-driver
autodetection also for kubeadm way of deploying.
Info about this and the possibility to override is added to the docs.
Red Hat family platforms run docker daemon with `--exec-opt
native.cgroupdriver=systemd`. When kubespray tried to start kubelet
service, it failed with:
Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Setting kubelet's cgroup driver to the correct value for the platform
fixes this issue. The code utilizes autodetection of docker's cgroup
driver, as different RPMs for the same distro may vary in that regard.
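A sketch of the autodetection; the real tasks and templates may differ:

```yaml
- name: Autodetect the cgroup driver docker is using
  shell: docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/{print $2}'
  register: docker_cgroup_driver_result
  changed_when: false

# The result ('systemd' or 'cgroupfs') is then passed to kubelet via
# --cgroup-driver=... in its flags/env template.
- name: Set kubelet cgroup driver to match docker
  set_fact:
    kubelet_cgroup_driver: "{{ docker_cgroup_driver_result.stdout }}"
```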
New files: /etc/kubernetes/admin.conf
/root/.kube/config
$GITDIR/artifacts/{kubectl,admin.conf}
Optional method to download kubectl and admin.conf if
kubeconfig_localhost is set to true (default false)
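A minimal sketch of the optional download, assuming an `artifacts_dir`-style variable for the local target path (the real task list also copies kubectl):

```yaml
- name: Copy admin.conf to the Ansible host
  fetch:
    src: /etc/kubernetes/admin.conf
    dest: "{{ artifacts_dir | default(inventory_dir + '/artifacts') }}/admin.conf"
    flat: yes
  run_once: true
  when: kubeconfig_localhost | default(false)
```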
* kubeadm support
* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
* change ci boolean vars to json format
* fixup
* Update create-gce.yml
* Update create-gce.yml
* Update create-gce.yml
This sets br_netfilter and the net.bridge.bridge-nf-call-iptables sysctl from a single play before kube-proxy is first run, instead of from the flannel and weave network_plugin roles after kube-proxy is started
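A sketch of the pre-kube-proxy tasks (names illustrative); the module load is made tolerant of kernels where br_netfilter is built in or not split out of the bridge module:

```yaml
- name: Check if br_netfilter exists as a loadable module
  command: modinfo br_netfilter
  register: br_netfilter_modinfo
  failed_when: false
  changed_when: false

- name: Load br_netfilter when it is a module
  modprobe:
    name: br_netfilter
    state: present
  when: br_netfilter_modinfo.rc == 0

- name: Let bridged traffic traverse iptables before kube-proxy starts
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
    reload: yes
```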
* use separate vault roles to generate certs with a different `O` (Organization) subject field;
* configure vault roles for issuing certificates with a different `CN` (Common name) subject field;
* set `CN` and `O` on `kubernetes` and `etcd` certificates;
* vault/defaults vars definition was simplified;
* vault dirs variables defined in the kubernetes-defaults role for using
shared tasks in etcd and kubernetes/secrets roles;
* upgrade vault to 0.8.1;
* generate random vault user password for each role by default;
* fix `serial` file name for vault certs;
* move vault auth request to issue_cert tasks;
* enable `RBAC` in vault CI;
* Added update CA trust step for etcd and kube/secrets roles
* Added load_balancer_domain_name to certificate alt names if defined. Reset CAs on RedHat OS.
* Rename kube-cluster-ca.crt to vault-ca.crt; we need separate CAs for vault, etcd and kube.
* Vault role refactoring: removed optional cert vault auth because it was not used and did not work. Create separate CAs for vault and etcd.
* Fixed different certificate sets for vault cert_management
* Update doc/vault.md
* Fixed condition for creating the vault CA (wrong group)
* Fixed missing etcd_cert_path mount for rkt deployment type. Distribute vault roles for all vault hosts
* Removed wrong when condition in create etcd role vault tasks.
* Updates Controller Manager/Kubelet with Flannel's required configuration for CNI (see the controller-manager sketch after this list)
* Removes old Flannel installation
* Install CNI enabled Flannel DaemonSet/ConfigMap/CNI bins and config (with portmap plugin) on host
* Uses RBAC if enabled
* Fixed an issue that could occur if br_netfilter is not a module and net.bridge.bridge-nf-call-iptables sysctl was not set
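For the controller-manager side, a fragment of the static pod spec with the flags flannel's CNI setup relies on; the image tag and CIDR values are illustrative, not the exact kubespray defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-controller-manager
      image: gcr.io/google_containers/hyperkube:v1.8.0   # illustrative tag
      command:
        - /hyperkube
        - controller-manager
        - --allocate-node-cidrs=true      # hand out per-node pod CIDRs for the CNI plugin
        - --cluster-cidr=10.233.64.0/18   # illustrative; kubespray's kube_pods_subnet
        - --node-cidr-mask-size=24
```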
* Adding yaml linter to ci check
* Minor linting fixes from yamllint
* Changing CI to install python pkgs from requirements.txt
- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
If Kubernetes > 1.6 register standalone master nodes w/ a
node-role.kubernetes.io/master=:NoSchedule taint to allow
for more flexible scheduling rather than just marking unschedulable.
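A sketch of the gating logic (fact name illustrative); the value ends up in kubelet's --register-with-taints flag:

```yaml
- name: Register standalone masters with a NoSchedule taint on Kubernetes 1.6+
  set_fact:
    kubelet_node_taints: "node-role.kubernetes.io/master=:NoSchedule"
  when:
    - inventory_hostname in groups['kube-master']
    - inventory_hostname not in groups['kube-node']
    - kube_version | version_compare('v1.6', '>=')
```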
Change kubelet deploy mode to host
Enable cri and qos per cgroup for kubelet
Update CoreOS images
Add upgrade hook for switching kubelet deployment from docker to host.
Bump machine type for ubuntu-rkt-sep
* Added custom ips to etcd vault distributed certificates
* Added custom ips to kube-master vault distributed certificates
* Added comment about issue_cert_copy_ca var in vault/issue_cert role file
* Generate kube-proxy, controller-manager and scheduler certificates by vault
* Revert "Disable vault from CI (#1546)"
This reverts commit 781f31d2b8.
* Fixed upgrade cluster with vault cert manager
* Remove vault dir in reset playbook
* Bump tag for upgrade CI, fix netchecker upgrade
netchecker-server was changed from pod to deployment, so
we need an upgrade hook for it.
CI now uses v2.1.1 as a basis for upgrade.
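A sketch of the kind of pre-upgrade hook this needs, assuming kubectl lives in `{{ bin_dir }}` and the old pod kept the `netchecker-server` name:

```yaml
- name: Delete the legacy netchecker-server pod so the Deployment can replace it
  command: >-
    {{ bin_dir }}/kubectl --namespace {{ netcheck_namespace | default('default') }}
    delete pod netchecker-server --ignore-not-found
  run_once: true
  delegate_to: "{{ groups['kube-master'][0] }}"
```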
* Fix upgrades for certs from non-rbac to rbac