Remove kubedns and dnsmasq. Move dns_late phase after apps (#4406)

Both the kubedns and dnsmasq modes have long been unmaintained.
The dns_late steps should run at the end because sshd performs
DNS lookups during the Ansible run and incurs a 2s timeout for
each failed lookup while trying to reach CoreDNS before it is ready.
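
In effect, the tail of cluster.yml changes as sketched below (a condensed view of the hunks in this commit, not the full playbook): the dedicated k8s-cluster play that ran dnsmasq and the dns_late resolvconf step is dropped, and the dns_late preinstall run is appended to the kube-master apps play.

```
# Condensed sketch of the resulting play order (see the cluster.yml hunk below).
# dns_late now runs only after kubernetes-apps has deployed CoreDNS.
- hosts: kube-master
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes-apps, tags: apps }
    - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
  environment: "{{ proxy_env }}"
```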
Matthew Mosesohn 2019-04-01 22:32:34 +03:00 committed by Kubernetes Prow Robot
parent d71590bbd0
commit 5f12b7aedf
33 changed files with 37 additions and 837 deletions

@@ -109,16 +109,10 @@
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
environment: "{{proxy_env}}"

@@ -110,7 +110,6 @@ The following tags are defined in playbooks:
| calico | Network plugin Calico
| canal | Network plugin Canal
| cloud-provider | Cloud-provider related tasks
| dnsmasq | Configuring DNS stack for hosts and K8s apps
| docker | Configuring docker for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
@@ -152,11 +151,11 @@ Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:
```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload container to K8s cluster nodes:

@@ -20,10 +20,6 @@ ndots value to be used in ``/etc/resolv.conf``
It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of DNS stack, so please choose it wisely.
The dnsmasq DaemonSet can accept lower ``ndots`` values and return NXDOMAIN
replies for [bogus internal FQDNS](https://github.com/kubernetes/kubernetes/issues/19634#issuecomment-253948954)
before it even hits the kubedns app. This enables dnsmasq to serve as a
protective, but still recursive resolver in front of kubedns.
#### searchdomains
Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
@@ -41,8 +37,7 @@ is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8)
#### upstream_dns_servers
DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet. These are also added as upstream
DNS servers used by ``dnsmasq`` (when deployed with ``dns_mode: dnsmasq_kubedns``).
DNS servers in early cluster deployment when no cluster DNS is available yet.
DNS modes supported by Kubespray
============================
@@ -52,32 +47,20 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
## dns_mode
``dns_mode`` configures how Kubespray will setup cluster DNS. There are four modes available:
#### dnsmasq_kubedns
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
other queries are forwardet to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``
#### kubedns
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
#### coredns (default)
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries.
This installs CoreDNS as the default cluster DNS for all queries.
#### coredns_dual
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use CoreDNS for
all queries. It will also deploy a secondary CoreDNS stack
This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.
#### manual
This does not install dnsmasq or kubedns, but allows you to specify
This does not install coredns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.
#### none
This does not install any of dnsmasq and kubedns/skydns. This basically disables cluster DNS completely and
This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.
## resolvconf_mode
@@ -103,7 +86,7 @@ The following dns options are added to the docker daemon
* attempts:2
For normal PODs, k8s will ignore these options and setup its own DNS settings for the PODs, taking
the --cluster_dns (either dnsmasq or kubedns, depending on dns_mode) kubelet option into account.
the --cluster_dns (either coredns or coredns_dual, depending on dns_mode) kubelet option into account.
For ``hostNetwork: true`` PODs however, k8s will let docker setup DNS settings. Docker containers which
are not started/managed by k8s will also use these docker options.
@@ -115,7 +98,7 @@ servers, which in turn will forward queries to the system nameserver if required
#### host_resolvconf
This activates the classic Kubespray behaviour that modifies the hosts ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either dnsmasq or kubedns, depending on dns_mode).
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).
As cluster DNS is not available on early deployment stage, this mode is split into 2 stages. In the first
stage (``dns_early: true``), ``/etc/resolv.conf`` is configured to use the DNS servers found in ``upstream_dns_servers``

@@ -15,8 +15,8 @@ For a large scaled deployments, consider the following configuration changes:
load on a delegate (the first K8s master node) then retrying failed
push or download operations.
* Tune parameters for DNS related applications (dnsmasq daemon set, kubedns
replication controller). Those are ``dns_replicas``, ``dns_cpu_limit``,
* Tune parameters for DNS-related applications.
Those are ``dns_replicas``, ``dns_cpu_limit``,
``dns_cpu_requests``, ``dns_memory_limit``, ``dns_memory_requests``.
Please note that limits must always be greater than or equal to requests.

@@ -59,8 +59,6 @@ following default cluster parameters:
overlap with kube_service_addresses.
* *kube_network_node_prefix* - Subnet allocated per-node for pod IPs. Remaining
bits in kube_pods_subnet dictates how many kube-nodes can be in cluster.
* *dns_setup* - Enables dnsmasq
* *dnsmasq_dns_server* - Cluster IP for dnsmasq (default is 10.233.0.2)
* *skydns_server* - Cluster IP for DNS (default is 10.233.0.3)
* *skydns_server_secondary* - Secondary Cluster IP for CoreDNS used with coredns_dual deployment (default is 10.233.0.4)
* *cloud_provider* - Enable extra Kubelet option if operating inside GCE or
@@ -84,15 +82,14 @@ and ``kube_pods_subnet``, for example from the ``172.18.0.0/16``.
#### DNS variables
By default, dnsmasq gets set up with 8.8.8.8 as an upstream DNS server and all
By default, hosts are set up with 8.8.8.8 as an upstream DNS server and all
other settings from your existing /etc/resolv.conf are lost. Set the following
variables to match your requirements.
* *upstream_dns_servers* - Array of upstream DNS servers configured on host in
addition to Kubespray deployed DNS
* *nameservers* - Array of DNS servers configured for use in dnsmasq
* *nameservers* - Array of DNS servers configured for use by hosts
* *searchdomains* - Array of up to 4 search domains
* *skip_dnsmasq* - Don't set up dnsmasq (use only KubeDNS)
For more information, see [DNS
Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md).
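
For illustration, a hypothetical group_vars excerpt setting these variables (the addresses and domains below are placeholders, not Kubespray defaults):

```
# Example inventory overrides for the DNS variables documented above.
upstream_dns_servers:     # configured on hosts in addition to the Kubespray-deployed DNS
  - 8.8.8.8
  - 8.8.4.4
nameservers:              # DNS servers configured for use by the hosts
  - 1.1.1.1
searchdomains:            # added on top of the cluster search domains (max 4)
  - corp.example.com
```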

@@ -35,7 +35,7 @@ nginx_kube_apiserver_healthcheck_port: 8081
## modules.
# kubelet_load_modules: false
## Upstream dns servers used by dnsmasq
## Upstream dns servers
# upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4

@@ -127,7 +127,7 @@ kube_encrypt_secret_data: false
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
@@ -142,7 +142,6 @@ deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
## Container runtime
@@ -176,10 +175,6 @@ podsecuritypolicy_enabled: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
# dnsmasq
# dnsmasq_upstream_dns_servers:
# - /resolvethiszone.with/10.0.4.250
# - 8.8.8.8
# Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (default true)
# kubelet_cgroups_per_qos: true

@@ -19,10 +19,10 @@
- attempts:2
- name: add upstream dns servers (only when dnsmasq is not used)
- name: add upstream dns servers
set_fact:
docker_dns_servers: "{{ docker_dns_servers + upstream_dns_servers|default([]) }}"
when: dns_mode in ['kubedns', 'coredns', 'coredns_dual']
when: dns_mode in ['coredns', 'coredns_dual']
- name: add global searchdomains
set_fact:

@@ -1,75 +0,0 @@
---
# Existing search/nameserver resolvconf entries will be purged and
# ensured by this additional data:
# Max of 4 names is allowed and no more than 256 - 17 chars total
# (a 2 is reserved for the 'default.svc.' and'svc.')
# searchdomains:
# - foo.bar.lc
# Max of 2 is allowed here (a 1 is reserved for the dns_server)
# nameservers:
# - 127.0.0.1
# Versions
dnsmasq_version: 2.72
# Images
dnsmasq_image_repo: "andyshinn/dnsmasq"
dnsmasq_image_tag: "{{ dnsmasq_version }}"
# Limits for dnsmasq/kubedns apps
dns_cpu_limit: 100m
dns_memory_limit: 170Mi
dns_cpu_requests: 40m
dns_memory_requests: 50Mi
# Autoscaler parameters
dnsmasq_nodes_per_replica: 10
dnsmasq_min_replicas: 1
# Custom name servers
dnsmasq_upstream_dns_servers: []
# Try each query with each server strictly in the order
dnsmasq_enable_strict_order: true
# Send queries to all servers
dnsmasq_enable_all_servers: false
# Maximum number of concurrent DNS queries.
dns_forward_max: 150
# Caching params
cache_size: 1000
dnsmasq_max_cache_ttl: 10
dnsmasq_enable_no_negcache: true
# Maximum TTL value that will be handed out to clients.
# The specified maximum TTL will be given to clients
# instead of the true TTL value if it is lower.
dnsmasq_max_ttl: 20
# If enabled - don't read /etc/resolv.conf.
dnsmasq_enable_no_resolv: true
# Bogus private reverse lookups.
# All reverse lookups for private IP ranges (ie 192.168.x.x, etc)
# which are not found in /etc/hosts or the DHCP leases file are
# answered with "no such domain" rather than being forwarded upstream.
# The set of prefixes affected is the list given in RFC6303, for IPv4 and IPv6.
dnsmasq_enable_bogus_priv: true
# This option forces dnsmasq to really bind only the interfaces it is listening on
dnsmasq_enable_bind_interfaces: true
dnsmasq_listen_address: "0.0.0.0"
# Additional hosts file or directory
dnsmasq_addn_hosts: /etc/hosts
# Facility to which dnsmasq will send syslog entries.
# If the facility is '-' then dnsmasq logs to stderr.
dnsmasq_log_facility: "-"
# Additional startup parameters
dnsmasq_additional_startup_parameters: []

@@ -1,102 +0,0 @@
---
- name: ensure dnsmasq.d directory exists
file:
path: /etc/dnsmasq.d
state: directory
- name: ensure dnsmasq.d-available directory exists
file:
path: /etc/dnsmasq.d-available
state: directory
- name: check system nameservers
shell: awk '/^nameserver/ {print $NF}' /etc/resolv.conf
changed_when: False
register: system_nameservers
- name: init system_and_upstream_dns_servers
set_fact:
system_and_upstream_dns_servers: "{{ upstream_dns_servers|default([]) }}"
- name: combine upstream_dns_servers and system nameservers (only for docker_dns)
set_fact:
system_and_upstream_dns_servers: "{{ system_and_upstream_dns_servers | union(system_nameservers.stdout_lines) | unique }}"
when: system_nameservers.stdout != "" and resolvconf_mode != 'host_resolvconf'
- name: Write dnsmasq configuration
template:
src: 01-kube-dns.conf.j2
dest: /etc/dnsmasq.d-available/01-kube-dns.conf
mode: 0755
backup: yes
register: dnsmasq_config
- name: Stat dnsmasq link
stat:
path: /etc/dnsmasq.d-available/01-kube-dns.conf
register: dnsmasq_stat
- name: Stat dnsmasq link
stat:
path: /etc/dnsmasq.d/01-kube-dns.conf
register: sym
- name: Move previous configuration
command: mv /etc/dnsmasq.d/01-kube-dns.conf /etc/dnsmasq.d-available/01-kube-dns.conf.bak
changed_when: False
when: sym.stat.islnk is defined and sym.stat.islnk == False
- name: Enable dnsmasq configuration
file:
src: /etc/dnsmasq.d-available/01-kube-dns.conf
dest: /etc/dnsmasq.d/01-kube-dns.conf
state: link
- name: Create dnsmasq RBAC manifests
template:
src: "{{ item }}.j2"
dest: "{{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Apply dnsmasq RBAC manifests
command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/{{ item }}"
with_items:
- "dnsmasq-clusterrolebinding.yml"
- "dnsmasq-serviceaccount.yml"
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Create dnsmasq manifests
template:
src: "{{item.file}}.j2"
dest: "{{kube_config_dir}}/{{item.file}}"
with_items:
- {name: dnsmasq, file: dnsmasq-deploy.yml, type: deployment}
- {name: dnsmasq, file: dnsmasq-svc.yml, type: svc}
- {name: dnsmasq-autoscaler, file: dnsmasq-autoscaler.yml, type: deployment}
register: manifests
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Start Resources
kube:
name: "{{item.item.name}}"
namespace: "kube-system"
kubectl: "{{bin_dir}}/kubectl"
resource: "{{item.item.type}}"
filename: "{{kube_config_dir}}/{{item.item.file}}"
state: "latest"
with_items: "{{ manifests.results }}"
delegate_to: "{{ groups['kube-master'][0] }}"
run_once: true
- name: Check for dnsmasq port (pulling image and running container)
wait_for:
host: "{{dnsmasq_dns_server}}"
port: 53
timeout: 180
when: inventory_hostname == groups['kube-node'][0] and groups['kube-node'][0] in ansible_play_hosts

@@ -1,66 +0,0 @@
#Listen on localhost
{% if dnsmasq_enable_bind_interfaces %}
bind-interfaces
{% endif %}
{% if dnsmasq_listen_address|length > 0 %}
listen-address={{ dnsmasq_listen_address }}
{% endif %}
{% if dnsmasq_addn_hosts|length > 0 %}
addn-hosts={{ dnsmasq_addn_hosts }}
{% endif %}
{% if dnsmasq_enable_strict_order %}
strict-order
{% endif %}
{% if dnsmasq_enable_all_servers %}
all-servers
{% endif %}
# Forward k8s domain to kube-dns
server=/{{ dns_domain }}/{{ skydns_server }}
# Reply NXDOMAIN to bogus domains requests like com.cluster.local.cluster.local
local=/{{ bogus_domains }}
#Set upstream dns servers
{% if dnsmasq_upstream_dns_servers|length > 0 %}
{% for srv in dnsmasq_upstream_dns_servers %}
server={{ srv }}
{% endfor %}
{% endif %}
{% if system_and_upstream_dns_servers|length > 0 %}
{% for srv in system_and_upstream_dns_servers %}
server={{ srv }}
{% endfor %}
{% elif resolvconf_mode == 'host_resolvconf' %}
{# The default resolver is only needed when the hosts resolv.conf was modified by us. If it was not modified, we can rely on dnsmasq to reuse the systems resolv.conf #}
server={{ cloud_resolver }}
{% endif %}
{% if kube_log_level == '4' %}
log-queries
{% endif %}
{% if dnsmasq_enable_no_resolv %}
no-resolv
{% endif %}
{% if dnsmasq_enable_bogus_priv %}
bogus-priv
{% endif %}
{% if dnsmasq_enable_no_negcache %}
no-negcache
{% endif %}
cache-size={{ cache_size }}
dns-forward-max={{ dns_forward_max }}
max-cache-ttl={{ dnsmasq_max_cache_ttl }}
max-ttl={{ dnsmasq_max_ttl }}
log-facility={{ dnsmasq_log_facility }}
{% for dnsmasq_additional_startup_parameter in dnsmasq_additional_startup_parameters %}
{{ dnsmasq_additional_startup_parameter }}
{% endfor %}

@@ -1,58 +0,0 @@
---
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dnsmasq-autoscaler
namespace: kube-system
labels:
k8s-app: dnsmasq-autoscaler
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
template:
metadata:
labels:
k8s-app: dnsmasq-autoscaler
annotations:
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
{% if kube_version is version('v1.11.1', '>=') %}
priorityClassName: system-cluster-critical
{% endif %}
serviceAccountName: dnsmasq
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: autoscaler
image: "{{ dnsmasqautoscaler_image_repo }}:{{ dnsmasqautoscaler_image_tag }}"
resources:
requests:
cpu: "20m"
memory: "10Mi"
command:
- /cluster-proportional-autoscaler
- --namespace=kube-system
- --configmap=dnsmasq-autoscaler
- --target=Deployment/dnsmasq
# When cluster is using large nodes(with more cores), "coresPerReplica" should dominate.
# If using small nodes, "nodesPerReplica" should dominate.
- --default-params={"linear":{"nodesPerReplica":{{ dnsmasq_nodes_per_replica }},"preventSinglePointFailure":true}}
- --logtostderr=true
- --v={{ kube_log_level }}
nodeSelector:
beta.kubernetes.io/os: linux

@@ -1,14 +0,0 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: dnsmasq
namespace: "kube-system"
subjects:
- kind: ServiceAccount
name: dnsmasq
namespace: "kube-system"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io

@@ -1,72 +0,0 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dnsmasq
namespace: "kube-system"
labels:
k8s-app: dnsmasq
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ dnsmasq_min_replicas }}
selector:
matchLabels:
k8s-app: dnsmasq
strategy:
type: "Recreate"
template:
metadata:
labels:
k8s-app: dnsmasq
kubernetes.io/cluster-service: "true"
kubespray/dnsmasq-checksum: "{{ dnsmasq_stat.stat.checksum }}"
spec:
{% if kube_version is version('v1.11.1', '>=') %}
priorityClassName: system-cluster-critical
{% endif %}
tolerations:
- effect: NoSchedule
operator: Exists
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: dnsmasq
image: "{{ dnsmasq_image_repo }}:{{ dnsmasq_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
command:
- dnsmasq
args:
- -k
- -C
- /etc/dnsmasq.d/01-kube-dns.conf
securityContext:
capabilities:
add:
- NET_ADMIN
resources:
limits:
cpu: {{ dns_cpu_limit }}
memory: {{ dns_memory_limit }}
requests:
cpu: {{ dns_cpu_requests }}
memory: {{ dns_memory_requests }}
ports:
- name: dns
containerPort: 53
protocol: UDP
- name: dns-tcp
containerPort: 53
protocol: TCP
volumeMounts:
- name: etcdnsmasqd
mountPath: /etc/dnsmasq.d
- name: etcdnsmasqdavailable
mountPath: /etc/dnsmasq.d-available
volumes:
- name: etcdnsmasqd
hostPath:
path: /etc/dnsmasq.d
- name: etcdnsmasqdavailable
hostPath:
path: /etc/dnsmasq.d-available
dnsPolicy: Default # Don't use cluster DNS.

@@ -1,8 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: dnsmasq
namespace: "kube-system"
labels:
kubernetes.io/cluster-service: "true"

@@ -1,23 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/cluster-service: 'true'
k8s-app: dnsmasq
name: dnsmasq
namespace: kube-system
spec:
ports:
- port: 53
name: dns-tcp
targetPort: 53
protocol: TCP
- port: 53
name: dns
targetPort: 53
protocol: UDP
type: ClusterIP
clusterIP: {{dnsmasq_dns_server}}
selector:
k8s-app: dnsmasq

@@ -201,12 +201,6 @@ multus_image_repo: "docker.io/nfvpe/multus"
multus_image_tag: "{{ multus_version }}"
nginx_image_repo: nginx
nginx_image_tag: 1.13
dnsmasq_version: 2.78
dnsmasq_image_repo: "andyshinn/dnsmasq"
dnsmasq_image_tag: "{{ dnsmasq_version }}"
kubedns_version: 1.14.13
kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-{{ image_arch }}"
kubedns_image_tag: "{{ kubedns_version }}"
coredns_version: "1.2.6"
coredns_image_repo: "coredns/coredns"
@@ -216,13 +210,6 @@ nodelocaldns_version: "1.15.1"
nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_tag: "{{ nodelocaldns_version }}"
dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-{{ image_arch }}"
dnsmasq_nanny_image_tag: "{{ kubedns_version }}"
dnsmasq_sidecar_image_repo: "gcr.io/google_containers/k8s-dns-sidecar-{{ image_arch }}"
dnsmasq_sidecar_image_tag: "{{ kubedns_version }}"
dnsmasqautoscaler_version: 1.1.2
dnsmasqautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-{{ image_arch }}"
dnsmasqautoscaler_image_tag: "{{ dnsmasqautoscaler_version }}"
dnsautoscaler_version: 1.3.0
dnsautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-autoscaler-{{ image_arch }}"
dnsautoscaler_image_tag: "{{ dnsautoscaler_version }}"
@@ -506,24 +493,6 @@ downloads:
groups:
- kube-node
dnsmasq:
enabled: "{{ dns_mode == 'dnsmasq_kubedns' }}"
container: true
repo: "{{ dnsmasq_image_repo }}"
tag: "{{ dnsmasq_image_tag }}"
sha256: "{{ dnsmasq_digest_checksum|default(None) }}"
groups:
- kube-node
kubedns:
enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
container: true
repo: "{{ kubedns_image_repo }}"
tag: "{{ kubedns_image_tag }}"
sha256: "{{ kubedns_digest_checksum|default(None) }}"
groups:
- kube-node
coredns:
enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] }}"
container: true
@@ -542,26 +511,8 @@ downloads:
groups:
- kube-node
dnsmasq_nanny:
enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
container: true
repo: "{{ dnsmasq_nanny_image_repo }}"
tag: "{{ dnsmasq_nanny_image_tag }}"
sha256: "{{ dnsmasq_nanny_digest_checksum|default(None) }}"
groups:
- kube-node
dnsmasq_sidecar:
enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns'] }}"
container: true
repo: "{{ dnsmasq_sidecar_image_repo }}"
tag: "{{ dnsmasq_sidecar_image_tag }}"
sha256: "{{ dnsmasq_sidecar_digest_checksum|default(None) }}"
groups:
- kube-node
dnsautoscaler:
enabled: "{{ dns_mode in ['kubedns', 'dnsmasq_kubedns','coredns', 'coredns_dual'] }}"
enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] }}"
container: true
repo: "{{ dnsautoscaler_image_repo }}"
tag: "{{ dnsautoscaler_image_tag }}"

@@ -1,5 +1,5 @@
---
# Limits for dnsmasq/kubedns apps
# Limits for coredns
dns_memory_limit: 170Mi
dns_cpu_requests: 100m
dns_memory_requests: 70Mi

@@ -1,44 +0,0 @@
---
- name: Kubernetes Apps | Lay Down KubeDNS Template
action: "{{ item.module }}"
args:
src: "{{ item.file }}{% if item.module == 'template' %}.j2{% endif %}"
dest: "{{ kube_config_dir }}/{{ item.file }}"
with_items:
- { name: kube-dns, module: template, file: kubedns-sa.yml, type: sa }
- { name: kube-dns, module: template, file: kubedns-config.yml, type: configmap }
- { name: kube-dns, module: template, file: kubedns-deploy.yml, type: deployment }
- { name: kube-dns, module: template, file: kubedns-svc.yml, type: svc }
- { name: dns-autoscaler, module: copy, file: dns-autoscaler-sa.yml, type: sa }
- { name: dns-autoscaler, module: copy, file: dns-autoscaler-clusterrole.yml, type: clusterrole }
- { name: dns-autoscaler, module: copy, file: dns-autoscaler-clusterrolebinding.yml, type: clusterrolebinding }
- { name: dns-autoscaler, module: template, file: dns-autoscaler.yml, type: deployment }
register: kubedns_manifests
when:
- dns_mode in ['kubedns','dnsmasq_kubedns']
- inventory_hostname == groups['kube-master'][0]
tags:
- dnsmasq
- kubedns
# see https://github.com/kubernetes/kubernetes/issues/45084, only needed for "old" kube-dns
- name: Kubernetes Apps | Patch system:kube-dns ClusterRole
command: >
{{ bin_dir }}/kubectl patch clusterrole system:kube-dns
--patch='{
"rules": [
{
"apiGroups" : [""],
"resources" : ["endpoints", "services"],
"verbs": ["list", "watch", "get"]
}
]
}'
when:
- dns_mode in ['kubedns', 'dnsmasq_kubedns']
- inventory_hostname == groups['kube-master'][0]
- rbac_enabled and kubedns_version is version("1.11.0", "<", strict=True)
tags:
- dnsmasq
- kubedns

@@ -17,9 +17,7 @@
- inventory_hostname == groups['kube-master'][0]
tags:
- upgrade
- dnsmasq
- coredns
- kubedns
- nodelocaldns
- name: Kubernetes Apps | CoreDNS
@@ -38,14 +36,6 @@
tags:
- nodelocaldns
- name: Kubernetes Apps | KubeDNS
import_tasks: "tasks/kubedns.yml"
when:
- dns_mode in ['kubedns', 'dnsmasq_kubedns']
- inventory_hostname == groups['kube-master'][0]
tags:
- dnsmasq
- name: Kubernetes Apps | Start Resources
kube:
name: "{{ item.item.name }}"
@@ -55,7 +45,6 @@
filename: "{{ kube_config_dir }}/{{ item.item.file }}"
state: "latest"
with_items:
- "{{ kubedns_manifests.results | default({}) }}"
- "{{ coredns_manifests.results | default({}) }}"
- "{{ coredns_secondary_manifests.results | default({}) }}"
- "{{ nodelocaldns_manifests.results | default({}) }}"
@@ -68,9 +57,7 @@
retries: 4
delay: 5
tags:
- dnsmasq
- coredns
- kubedns
- nodelocaldns
loop_control:
label: "{{ item.item.file }}"

@@ -2,10 +2,8 @@
- name: Kubernetes Apps | set up necessary nodelocaldns parameters
set_fact:
clusterIP: >-
{%- if dns_mode in ['kubedns', 'coredns', 'coredns_dual'] -%}
{%- if dns_mode in ['coredns', 'coredns_dual'] -%}
{{ skydns_server }}
{%- elif dns_mode == 'dnsmasq_kubedns' -%}
{{ dnsmasq_dns_server }}
{%- elif dns_mode == 'manual' -%}
{{ manual_dns_server }}
{%- endif -%}

@@ -72,12 +72,7 @@ spec:
- --logtostderr=true
- --v=2
- --configmap=dns-autoscaler{{ coredns_ordinal_suffix }}
{% if dns_mode in ['coredns', 'coredns_dual'] %}
- --target=Deployment/coredns{{ coredns_ordinal_suffix }}
{% endif %}
{% if dns_mode in ['kubedns', 'dnsmasq_kubedns'] %}
- --target=Deployment/kube-dns
{% endif %}
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"

@@ -1,8 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists

@@ -1,184 +0,0 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
{% if kube_version is version('v1.11.1', '>=') %}
priorityClassName: system-cluster-critical
{% endif %}
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- effect: "NoSchedule"
operator: "Equal"
key: "node-role.kubernetes.io/master"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
k8s-app: kube-dns
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: In
values:
- ""
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: "{{ kubedns_image_repo }}:{{ kubedns_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: {{ dns_memory_limit }}
requests:
cpu: {{ dns_cpu_requests }}
memory: {{ dns_memory_requests }}
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain={{ dns_domain }}.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v={{ kube_log_level }}
{% if resolvconf_mode == 'host_resolvconf' and upstream_dns_servers is defined and upstream_dns_servers|length > 0 %}
- --nameservers={{ upstream_dns_servers|join(',') }}
{% endif %}
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: "{{ dnsmasq_nanny_image_repo }}:{{ dnsmasq_nanny_image_tag }}"
imagePullPolicy: {{ k8s_image_pull_policy }}
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v={{ kube_log_level }}
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --no-negcache
- --dns-loop-detect
- --log-facility=-
- --server=/{{ dns_domain }}/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: "{{ dnsmasq_sidecar_image_repo }}:{{ dnsmasq_sidecar_image_tag }}"
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v={{ kube_log_level }}
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.{{ dns_domain }},5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.{{ dns_domain }},5,SRV
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
serviceAccountName: kube-dns

@@ -1,9 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile

@@ -1,25 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ skydns_server }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 10055
protocol: TCP

@@ -34,7 +34,7 @@
{{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf get secrets --all-namespaces
-o 'jsonpath={range .items[*]}{"\n"}{.metadata.namespace}{" "}{.metadata.name}{" "}{.type}{end}'
| grep kubernetes.io/service-account-token
| egrep 'default-token|kube-proxy|kube-dns|dnsmasq|netchecker|weave|calico|canal|flannel|dashboard|cluster-proportional-autoscaler|tiller|local-volume-provisioner'
| egrep 'default-token|kube-proxy|kube-dns|netchecker|weave|calico|canal|flannel|dashboard|cluster-proportional-autoscaler|tiller|local-volume-provisioner'
register: tokens_to_delete
when: needs_rotation

@@ -65,12 +65,10 @@ KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
{% endif %}
{# DNS settings for kubelet #}
{% if dns_mode in ['kubedns', 'coredns'] %}
{% if dns_mode == 'coredns' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ skydns_server }}{% endset %}
{% elif dns_mode == 'coredns_dual' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ skydns_server }},{{ skydns_server_secondary }}{% endset %}
{% elif dns_mode == 'dnsmasq_kubedns' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ dnsmasq_dns_server }}{% endset %}
{% elif dns_mode == 'manual' %}
{% set kubelet_args_cluster_dns %}--cluster-dns={{ manual_dns_server }}{% endset %}
{% else %}

@@ -175,8 +175,8 @@
- name: Stop if unknown dns mode
assert:
that: dns_mode in ['dnsmasq_kubedns', 'kubedns', 'coredns', 'coredns_dual', 'manual', 'none']
msg: "dns_mode can only be 'dnsmasq_kubedns', 'kubedns', 'coredns', 'coredns_dual', 'manual' or 'none'"
that: dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
msg: "dns_mode can only be 'coredns', 'coredns_dual', 'manual' or 'none'"
when: dns_mode is defined
run_once: true

@@ -123,10 +123,10 @@
supersede_domain:
supersede domain-name "{{ dns_domain }}";
- name: pick dnsmasq cluster IP or default resolver
- name: pick coredns cluster IP or default resolver
set_fact:
dnsmasq_server: |-
{%- if dns_mode in ['kubedns', 'coredns'] and not dns_early|bool -%}
coredns_server: |-
{%- if dns_mode == 'coredns' and not dns_early|bool -%}
{{ [ skydns_server ] + upstream_dns_servers|default([]) }}
{%- elif dns_mode == 'coredns_dual' and not dns_early|bool -%}
{{ [ skydns_server ] + [ skydns_server_secondary ] + upstream_dns_servers|default([]) }}
@@ -134,16 +134,14 @@
{{ ( manual_dns_server.split(',') | list) + upstream_dns_servers|default([]) }}
{%- elif dns_early|bool -%}
{{ upstream_dns_servers|default([]) }}
{%- else -%}
{{ [ dnsmasq_dns_server ] }}
{%- endif -%}
- name: generate nameservers to resolvconf
set_fact:
nameserverentries:
nameserver {{( dnsmasq_server + nameservers|d([]) + cloud_resolver|d([])) | join(',nameserver ')}}
nameserver {{( coredns_server + nameservers|d([]) + cloud_resolver|d([])) | join(',nameserver ')}}
supersede_nameserver:
supersede domain-name-servers {{( dnsmasq_server + nameservers|d([]) + cloud_resolver|d([])) | join(', ') }};
supersede domain-name-servers {{( coredns_server + nameservers|d([]) + cloud_resolver|d([])) | join(', ') }};
- name: gather os specific variables
include_vars: "{{ item }}"

@@ -55,7 +55,7 @@ epel_enabled: false
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns, manual or none
# Can be coredns, coredns_dual, manual, or none
dns_mode: coredns
# Enable nodelocal dns cache
@@ -69,20 +69,19 @@ manual_dns_server: ""
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
# Ip address of the kubernetes DNS service (called skydns for historical reasons)
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dnsmasq_dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
kube_dns_servers:
kubedns: ["{{skydns_server}}"]
coredns: ["{{skydns_server}}"]
coredns_dual: "{{[skydns_server] + [ skydns_server_secondary ]}}"
manual: ["{{manual_dns_server}}"]
dnsmasq_kubedns: ["{{dnsmasq_dns_server}}"]
dns_servers: "{{kube_dns_servers[dns_mode]}}"
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernetes.

@@ -8,4 +8,4 @@
user: kube
password: "{{ lookup('password', credentials_dir + '/kube_user.creds length=15 chars=ascii_letters,digits') }}"
validate_certs: no
status_code: 200,401
status_code: 200,401,403

@@ -114,15 +114,9 @@
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
- hosts: kube-master
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }