# c12s-kubespray/roles/kubernetes-apps/ansible/tasks/main.yml

---
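# Block until kube-apiserver answers /healthz on its secure endpoint. The
# apiserver client cert/key is passed so the check also works when the
# insecure port is disabled (kube_apiserver_insecure_port: 0).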
- name: Kubernetes Apps | Wait for kube-apiserver
  uri:
    url: "{{ kube_apiserver_endpoint }}/healthz"
    validate_certs: no
    client_cert: "{{ kube_cert_dir }}/apiserver.pem"
    client_key: "{{ kube_cert_dir }}/apiserver-key.pem"
  register: result
  until: result.status == 200
  retries: 10
  delay: 2
  when: inventory_hostname == groups['kube-master'][0]

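# Delete the legacy "kubedns" Deployment and Service during upgrades, before
# the current kube-dns manifests are applied.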
- name: Kubernetes Apps | Delete old kubedns resources
  kube:
    name: "kubedns"
    namespace: "{{ system_namespace }}"
    kubectl: "{{bin_dir}}/kubectl"
    resource: "{{ item }}"
    state: absent
  with_items: ['deploy', 'svc']
  tags:
    - upgrade

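# When kubeadm was used (and kubeadm init reported changes), remove the DNS
# Deployment it created so the manifests templated below replace it.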
- name: Kubernetes Apps | Delete kubeadm kubedns
  kube:
    name: "kubedns"
    namespace: "{{ system_namespace }}"
    kubectl: "{{bin_dir}}/kubectl"
    resource: "deploy"
    state: absent
  when:
    - kubeadm_enabled|default(false)
    - kubeadm_init.changed|default(false)
    - inventory_hostname == groups['kube-master'][0]

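# Render the kube-dns and kubedns-autoscaler manifests on the first master.
# RBAC-related items are only templated when RBAC is enabled.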
- name: Kubernetes Apps | Lay Down KubeDNS Template
  template:
    src: "{{item.file}}"
    dest: "{{kube_config_dir}}/{{item.file}}"
  with_items:
    - {name: kube-dns, file: kubedns-sa.yml, type: sa}
    - {name: kube-dns, file: kubedns-deploy.yml.j2, type: deployment}
    - {name: kube-dns, file: kubedns-svc.yml, type: svc}
    - {name: kubedns-autoscaler, file: kubedns-autoscaler-sa.yml, type: sa}
    - {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrole.yml, type: clusterrole}
    - {name: kubedns-autoscaler, file: kubedns-autoscaler-clusterrolebinding.yml, type: clusterrolebinding}
    - {name: kubedns-autoscaler, file: kubedns-autoscaler.yml.j2, type: deployment}
  register: manifests
  when:
    - dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
    - rbac_enabled or item.type not in rbac_resources
  tags:
    - dnsmasq

# see https://github.com/kubernetes/kubernetes/issues/45084, only needed for "old" kube-dns
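# The patch adds list/watch/get on endpoints and services for kube-dns
# versions older than 1.11.0.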
- name: Kubernetes Apps | Patch system:kube-dns ClusterRole
  command: >
    {{bin_dir}}/kubectl patch clusterrole system:kube-dns
    --patch='{
      "rules": [
        {
          "apiGroups": [""],
          "resources": ["endpoints", "services"],
          "verbs": ["list", "watch", "get"]
        }
      ]
    }'
  when:
    - dns_mode != 'none' and inventory_hostname == groups['kube-master'][0]
    - rbac_enabled and kubedns_version|version_compare("1.11.0", "<", strict=True)
  tags:
    - dnsmasq

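# Apply every manifest rendered above; items skipped by the template task are
# filtered out.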
- name: Kubernetes Apps | Start Resources
  kube:
    name: "{{item.item.name}}"
    namespace: "{{ system_namespace }}"
    kubectl: "{{bin_dir}}/kubectl"
    resource: "{{item.item.type}}"
    filename: "{{kube_config_dir}}/{{item.item.file}}"
    state: "latest"
  with_items: "{{ manifests.results }}"
  when:
    - dns_mode != 'none'
    - inventory_hostname == groups['kube-master'][0]
    - not item|skipped
  tags:
    - dnsmasq

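# Optional add-on: deploy netchecker only when requested.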
- name: Kubernetes Apps | Netchecker
  include: tasks/netchecker.yml
  when: deploy_netchecker
  tags:
    - netchecker

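# Optional add-on: deploy the Kubernetes Dashboard only when enabled.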
- name: Kubernetes Apps | Dashboard
  include: tasks/dashboard.yml
  when: dashboard_enabled
  tags:
    - dashboard