Allow subdomains of dns_domain and fix kubelet restarts

* Add a var for ndots (default 5) and put it into hosts' /etc/resolv.conf
  (the resulting resolver config is sketched below).
* Bump the kubedns container image to v1.7.
* In order to apply changes to the kubelet, notify it to restart on changes
  made to /etc/resolv.conf. Ignore errors, as the kubelet may not yet be
  present at the moment the notification is processed.
* Remove the unnecessary kubelet restart from the master role, as the node
  role ensures it is up and running. Instead, notify the master static pod
  waiters for the apiserver, scheduler and controller-manager.
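
For context, here is a rough sketch of the resolver configuration this change
converges to on a node. Illustrative only, not part of the commit: the search
list and the nameserver IP are assumptions based on common defaults
(``cluster.local`` as the ``dns_domain``, the ``skydns_server`` IP for the
cluster resolver).

```
# Illustrative sketch only (not part of this commit). The search list and
# the 10.233.0.3 nameserver are assumed defaults, not taken from this change.
- name: Example | write a sample of the resulting resolver config
  copy:
    dest: /tmp/resolv.conf.example  # hypothetical path, for illustration
    content: |
      search default.svc.cluster.local svc.cluster.local cluster.local
      nameserver 10.233.0.3
      options ndots:5 timeout:2 attempts:2
```

With ``ndots:5``, a name with fewer than five dots, e.g. ``myservice.myns``,
is first expanded through the search list, so
``myservice.myns.svc.cluster.local`` is tried before any absolute lookup.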

Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Bogdan Dobrelya 2016-09-27 11:54:12 +02:00
parent 0f461282c8
commit 5fd43b7cf0
7 changed files with 63 additions and 36 deletions

View file

@@ -7,7 +7,7 @@ to serve as an authoritative DNS server for a given ``dns_domain`` and its
 ``svc, default.svc`` default subdomains (a total of ``ndots: 5`` max levels).
 Note, additional search (sub)domains may be defined in the ``searchdomains``
-var. And additional recursive DNS resolvers in the `` upstream_dns_servers``,
+and ``ndots`` vars. And additional recursive DNS resolvers in the `` upstream_dns_servers``,
 ``nameservers`` vars. Intranet DNS resolvers should be specified in the first
 place, followed by external resolvers, for example:
@@ -21,17 +21,10 @@
 skip_dnsmasq: false
 upstream_dns_servers: [172.18.32.6, 172.18.32.7, 8.8.8.8, 8.8.8.4]
 ```
-Remember the limitations (the vars are explained below):
-
-* the ``searchdomains`` have a limitation of a 6 names and 256 chars
-  length. Due to default ``svc, default.svc`` subdomains, the actual
-  limits are a 4 names and 239 chars respectively.
-* the ``nameservers`` have a limitation of a 3 servers, although there
-  is a way to mitigate that with the ``upstream_dns_servers``,
-  see below. Anyway, the ``nameservers`` can take no more than a two
-  custom DNS servers because of one slot is reserved for a Kubernetes
-  cluster needs.
+The vars are explained below as well.
+
+DNS configuration details
+-------------------------

 Here is an approximate picture of how DNS things working and
 being configured by Kargo ansible playbooks:
@@ -73,7 +66,27 @@ Those may be specified either in ``nameservers`` or ``upstream_dns_servers``
 and will be merged together with the ``skydns_server`` IP into the hots'
 ``/etc/resolv.conf``.

-Kargo has yet ways to configure Kubedns addon to forward requests SkyDns can
-not answer with authority to arbitrary recursive resolvers. This task is left
-for future. See [official SkyDns docs](https://github.com/skynetservices/skydns)
-for details.
+Limitations
+-----------
+
+* Kargo has yet ways to configure Kubedns addon to forward requests SkyDns can
+  not answer with authority to arbitrary recursive resolvers. This task is left
+  for future. See [official SkyDns docs](https://github.com/skynetservices/skydns)
+  for details.
+* There is
+  [no way to specify a custom value](https://github.com/kubernetes/kubernetes/issues/33554)
+  for the SkyDNS ``ndots`` param via an
+  [option for KubeDNS](https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-dns/app/options/options.go)
+  add-on, while SkyDNS supports it though. Thus, DNS SRV records may not work
+  as expected as they require the ``ndots:7``.
+* the ``searchdomains`` have a limitation of a 6 names and 256 chars
+  length. Due to default ``svc, default.svc`` subdomains, the actual
+  limits are a 4 names and 239 chars respectively.
+* the ``nameservers`` have a limitation of a 3 servers, although there
+  is a way to mitigate that with the ``upstream_dns_servers``,
+  see below. Anyway, the ``nameservers`` can take no more than a two
+  custom DNS servers because of one slot is reserved for a Kubernetes
+  cluster needs.
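
To make the ``searchdomains`` and ``nameservers`` limits above concrete, a
hypothetical inventory snippet that stays within them could look as follows
(the search domain names are illustrative, not from this commit):

```
# Hypothetical inventory vars, illustrative only. With the two default
# ``svc, default.svc`` subdomains reserved, at most 4 custom search domains
# fit; with one resolver slot reserved for the cluster DNS, at most 2
# custom nameservers fit.
searchdomains:
  - corp.example.com
  - infra.example.com
nameservers:
  - 172.18.32.6
  - 172.18.32.7
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.8.4
```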

View file

@@ -33,6 +33,8 @@ kube_users:
 # Kubernetes cluster name, also will be used as DNS domain
 cluster_name: cluster.local
+# Subdomains of DNS domain to be resolved via /etc/resolv.conf
+ndots: 5

 # For some environments, each node has a pubilcally accessible
 # address and an address it should bind services to. These are

View file

@@ -17,5 +17,18 @@
   when: ansible_os_family != "RedHat" and ansible_os_family != "CoreOS"

 - name: Dnsmasq | update resolvconf
+  command: /bin/true
+  notify:
+    - Dnsmasq | reload resolvconf
+    - Dnsmasq | reload kubelet
+
+- name: Dnsmasq | reload resolvconf
   command: /sbin/resolvconf -u
   ignore_errors: true
+
+- name: Dnsmasq | reload kubelet
+  service:
+    name: kubelet
+    state: restarted
+  when: "{{ inventory_hostname in groups['kube-master'] }}"
+  ignore_errors: true

View file

@@ -72,6 +72,7 @@
     backup: yes
     follow: yes
   with_items:
+    - ndots:{{ ndots }}
     - timeout:2
     - attempts:2
   notify: Dnsmasq | update resolvconf

View file

@@ -21,7 +21,7 @@ spec:
     spec:
       containers:
       - name: kubedns
-        image: gcr.io/google_containers/kubedns-amd64:1.6
+        image: gcr.io/google_containers/kubedns-amd64:1.7
         resources:
           # TODO: Set memory limits when we've profiled the container for large
           # clusters, then set request = limit to keep this container in

View file

@@ -4,12 +4,14 @@
   notify:
     - Master | reload systemd
     - Master | reload kubelet
+    - Master | wait for master static pods

-- name: wait for master static pods
+- name: Master | wait for master static pods
   command: /bin/true
   notify:
-    - wait for kube-scheduler
-    - wait for kube-controller-manager
+    - Master | wait for the apiserver to be running
+    - Master | wait for kube-scheduler
+    - Master | wait for kube-controller-manager

 - name: Master | reload systemd
   command: systemctl daemon-reload
@@ -20,16 +22,23 @@
     name: kubelet
     state: restarted

-- name: wait for kube-scheduler
+- name: Master | wait for kube-scheduler
   uri: url=http://localhost:10251/healthz
   register: scheduler_result
   until: scheduler_result.status == 200
   retries: 15
   delay: 5

-- name: wait for kube-controller-manager
+- name: Master | wait for kube-controller-manager
   uri: url=http://localhost:10252/healthz
   register: controller_manager_result
   until: controller_manager_result.status == 200
   retries: 15
   delay: 5
+
+- name: Master | wait for the apiserver to be running
+  uri: url=http://localhost:8080/healthz
+  register: result
+  until: result.status == 200
+  retries: 10
+  delay: 6

View file

@@ -19,17 +19,9 @@
   template:
     src: manifests/kube-apiserver.manifest.j2
     dest: "{{ kube_manifest_dir }}/kube-apiserver.manifest"
-  register: apiserver_manifest
-  notify: Master | restart kubelet
-
-- name: wait for the apiserver to be running
-  uri: url=http://localhost:8080/healthz
-  register: result
-  until: result.status == 200
-  retries: 10
-  delay: 6
-
-- meta: flush_handlers
+  notify: Master | wait for the apiserver to be running

 # Create kube-system namespace
 - name: copy 'kube-system' namespace manifest
   copy: src=namespace.yml dest=/etc/kubernetes/kube-system-ns.yml
@@ -43,7 +35,6 @@
   failed_when: False
   run_once: yes

-
 - name: Create 'kube-system' namespace
   command: "{{ bin_dir }}/kubectl create -f /etc/kubernetes/kube-system-ns.yml"
   changed_when: False
@@ -54,12 +45,10 @@
   template:
     src: manifests/kube-controller-manager.manifest.j2
     dest: "{{ kube_manifest_dir }}/kube-controller-manager.manifest"
-  notify: wait for kube-controller-manager
+  notify: Master | wait for kube-controller-manager

 - name: Write kube-scheduler manifest
   template:
     src: manifests/kube-scheduler.manifest.j2
     dest: "{{ kube_manifest_dir }}/kube-scheduler.manifest"
-  notify: wait for kube-scheduler
-
-- meta: flush_handlers
+  notify: Master | wait for kube-scheduler