Merge pull request #528 from kubespray/proxy-nginx

Use nginx proxy on non-master nodes to proxy apiserver traffic
Smaine Kahlouch, 2016-10-05 19:19:32 +02:00 (committed by GitHub)
commit 9df4502909
13 changed files with 131 additions and 45 deletions

New binary file (57 KiB image, not shown): the localhost loadbalancer diagram referenced by the docs change below.


@@ -33,15 +33,27 @@ Kube-apiserver
 --------------
 K8s components require a loadbalancer to access the apiservers via a reverse
-proxy. A kube-proxy does not support multiple apiservers for the time being so
+proxy. Kargo includes support for an nginx-based proxy that resides on each
+non-master Kubernetes node. This is referred to as localhost loadbalancing. It
+is less efficient than a dedicated load balancer because it creates extra
+health checks on the Kubernetes apiserver, but is more practical for scenarios
+where an external LB or virtual IP management is inconvenient.
+This option is configured by the variable `loadbalancer_apiserver_localhost`.
 you will need to configure your own loadbalancer to achieve HA. Note that
 deploying a loadbalancer is up to a user and is not covered by ansible roles
 in Kargo. By default, it only configures a non-HA endpoint, which points to
 the `access_ip` or IP address of the first server node in the `kube-master`
 group. It can also configure clients to use endpoints for a given loadbalancer
-type.
+type. The following diagram shows how traffic to the apiserver is directed.
-A loadbalancer (LB) may be an external or internal one. An external LB
+![Image](figures/loadbalancer_localhost.png?raw=true)
+..note:: Kubernetes master nodes still use insecure localhost access because
+  there are bugs in Kubernetes <1.5.0 in using TLS auth on master role
+  services.
+A user may opt to use an external loadbalancer (LB) instead. An external LB
 provides access for external clients, while the internal LB accepts client
 connections only to the localhost, similarly to the etcd-proxy HA endpoints.
 Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is

@@ -71,35 +83,11 @@ into the `/etc/hosts` file of all servers in the `k8s-cluster` group. Note that
 the HAProxy service should as well be HA and requires a VIP management, which
 is out of scope of this doc.
-The internal LB may be the case if you do not want to operate a VIP management
-HA stack and require no external and no secure access to the K8s API. The group
-var `loadbalancer_apiserver_localhost` (defaults to `false`) controls that
-deployment layout. When enabled, it is expected each node in the `k8s-cluster`
-group to run a loadbalancer that listens the localhost frontend and has all
-of the apiservers as backends. Here is an example configuration for a HAProxy
-service acting as an internal LB:
-```
-listen kubernetes-apiserver-http
-  bind localhost:8080
-  mode tcp
-  timeout client 3h
-  timeout server 3h
-  server master1 <IP1>:8080
-  server master2 <IP2>:8080
-  balance leastconn
-```
-And the corresponding example global vars config:
-```
-loadbalancer_apiserver_localhost: true
-```
-This var overrides an external LB configuration, if any. Note that for this
-example, the `kubernetes-apiserver-http` endpoint has backends receiving
-unencrypted traffic, which may be a security issue when interconnecting
-different nodes, or may be not, if those belong to the isolated management
-network without external access.
+Specifying an external LB overrides any internal localhost LB configuration.
+Note that for this example, the `kubernetes-apiserver-http` endpoint
+has backends receiving unencrypted traffic, which may be a security issue
+when interconnecting different nodes, or maybe not, if those belong to the
+isolated management network without external access.
 In order to achieve HA for HAProxy instances, those must be running on the
 each node in the `k8s-cluster` group as well, but require no VIP, thus

@@ -109,8 +97,8 @@ Access endpoints are evaluated automagically, as the following:
 | Endpoint type                | kube-master   | non-master          |
 |------------------------------|---------------|---------------------|
-| Local LB (overrides ext)     | http://lc:p   | http://lc:p         |
-| External LB, no internal     | https://lb:lp | https://lb:lp       |
+| Local LB                     | http://lc:p   | https://lc:sp       |
+| External LB, no internal     | http://lc:p   | https://lb:lp       |
 | No ext/int LB (default)      | http://lc:p   | https://m[0].aip:sp |
 Where:
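As a quick illustration of how these abbreviations resolve, assuming the default ports (insecure port 8080, `kube_apiserver_port` 443) and made-up addresses; the concrete URLs below are illustrative, not part of the change:

```
# Local LB (loadbalancer_apiserver_localhost: true, the new default)
#   kube-master node:  http://127.0.0.1:8080   (insecure localhost access)
#   non-master node:   https://localhost:443   (via the local nginx proxy)
# No external or internal LB
#   kube-master node:  http://127.0.0.1:8080
#   non-master node:   https://10.0.0.11:443   (first master's access_ip, hypothetical)
```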


@@ -64,8 +64,9 @@ ndots: 5
 # This may be the case if clients support and loadbalance multiple etcd servers natively.
 etcd_multiaccess: false
-# Assume there are no internal loadbalancers for apiservers exist
-loadbalancer_apiserver_localhost: false
+# Assume there are no internal loadbalancers for apiservers exist and listen on
+# kube_apiserver_port (default 443)
+loadbalancer_apiserver_localhost: true
 # Choose network plugin (calico, weave or flannel)
 kube_network_plugin: flannel
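With the default flipped to `true`, a deployment that does not want the per-node nginx proxy at all can switch it back off in its inventory group vars; a minimal sketch (defining an external `loadbalancer_apiserver` also disables it automatically, per the set_fact change further below):

```
# inventory group_vars override (sketch): disable the localhost nginx proxy
loadbalancer_apiserver_localhost: false
```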


@@ -14,3 +14,6 @@ kube_proxy_masquerade_all: true
 # kube_api_runtime_config:
 #   - extensions/v1beta1/daemonsets=true
 #   - extensions/v1beta1/deployments=true
+nginx_image_repo: nginx
+nginx_image_tag: 1.11.4-alpine


@@ -1,6 +1,9 @@
 ---
 - include: install.yml
+- include: nginx-proxy.yml
+  when: is_kube_master == false and loadbalancer_apiserver_localhost|default(false)
 - name: Write Calico cni config
   template:
     src: "cni-calico.conf.j2"


@@ -0,0 +1,9 @@
+---
+- name: nginx-proxy | Write static pod
+  template: src=manifests/nginx-proxy.manifest.j2 dest=/etc/kubernetes/manifests/nginx-proxy.yml
+
+- name: nginx-proxy | Make nginx directory
+  file: path=/etc/nginx state=directory mode=0700 owner=root
+
+- name: nginx-proxy | Write nginx-proxy configuration
+  template: src=nginx.conf.j2 dest="/etc/nginx/nginx.conf" owner=root mode=0755 backup=yes


@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-proxy
+  namespace: kube-system
+spec:
+  hostNetwork: true
+  containers:
+  - name: nginx-proxy
+    image: {{ nginx_image_repo }}:{{ nginx_image_tag }}
+    securityContext:
+      privileged: true
+    volumeMounts:
+    - mountPath: /etc/nginx
+      name: etc-nginx
+      readOnly: true
+  volumes:
+  - name: etc-nginx
+    hostPath:
+      path: /etc/nginx


@@ -0,0 +1,26 @@
+error_log stderr notice;
+
+worker_processes auto;
+events {
+  multi_accept on;
+  use epoll;
+  worker_connections 1024;
+}
+
+stream {
+  upstream kube_apiserver {
+    least_conn;
+    {% for host in groups['kube-master'] -%}
+    server {{ hostvars[host]['access_ip'] | default(hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address'])) }}:{{ kube_apiserver_port }};
+    {% endfor %}
+  }
+
+  server {
+    listen {{ kube_apiserver_port }};
+    proxy_pass kube_apiserver;
+    proxy_timeout 3s;
+    proxy_connect_timeout 1s;
+  }
+}
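For reference, on a non-master node in a cluster with two masters and `kube_apiserver_port` left at 443, the `stream` section of this template renders to roughly the following (the 10.0.0.x addresses are hypothetical):

```
stream {
  upstream kube_apiserver {
    least_conn;
    server 10.0.0.11:443;   # access_ip of master 1 (hypothetical)
    server 10.0.0.12:443;   # access_ip of master 2 (hypothetical)
  }
  server {
    listen 443;
    proxy_pass kube_apiserver;
    proxy_timeout 3s;
    proxy_connect_timeout 1s;
  }
}
```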


@@ -21,6 +21,8 @@ kube_log_dir: "/var/log/kubernetes"
 # pods on startup
 kube_manifest_dir: "{{ kube_config_dir }}/manifests"
+# change to 0.0.0.0 to enable insecure access from anywhere (not recommended)
+kube_apiserver_insecure_bind_address: 127.0.0.1
 common_required_pkgs:
   - python-httplib2


@@ -5,12 +5,12 @@
 - set_fact: is_kube_master="{{ inventory_hostname in groups['kube-master'] }}"
 - set_fact: first_kube_master="{{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}"
 - set_fact:
-    kube_apiserver_insecure_bind_address: |-
-      {% if loadbalancer_apiserver_localhost %}{{ kube_apiserver_address }}{% else %}127.0.0.1{% endif %}
+    loadbalancer_apiserver_localhost: false
+  when: loadbalancer_apiserver is defined
 - set_fact:
     kube_apiserver_endpoint: |-
-      {% if loadbalancer_apiserver_localhost -%}
-      http://127.0.0.1:{{ kube_apiserver_insecure_port }}
+      {% if not is_kube_master and loadbalancer_apiserver_localhost -%}
+      https://localhost:{{ kube_apiserver_port }}
       {%- elif is_kube_master and loadbalancer_apiserver is not defined -%}
       http://127.0.0.1:{{ kube_apiserver_insecure_port }}
       {%- else -%}


@@ -26,8 +26,8 @@ Usage : $(basename $0) -f <config> [-d <ssldir>]
(whitespace-only change)
 -h | --help    : Show this message
 -f | --config  : Openssl configuration file
 -d | --ssldir  : Directory where the certificates will be installed
 ex :
 $(basename $0) -f openssl.conf -d /srv/ssl
 EOF
 }

@@ -37,7 +37,7 @@ while (($#)); do
(whitespace-only change)
   case "$1" in
     -h | --help)   usage;   exit 0;;
     -f | --config) CONFIG=${2}; shift 2;;
     -d | --ssldir) SSLDIR="${2}"; shift 2;;
     *)
       usage
       echo "ERROR : Unknown option"

@@ -68,6 +68,7 @@ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN
 openssl genrsa -out apiserver-key.pem 2048 > /dev/null 2>&1
 openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config ${CONFIG} > /dev/null 2>&1
 openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile ${CONFIG} > /dev/null 2>&1
+cat ca.pem >> apiserver.pem
 # Nodes and Admin
 for i in node admin; do


@@ -65,3 +65,30 @@
   shell: chmod 0600 {{ kube_cert_dir}}/*key.pem
   when: inventory_hostname in groups['kube-master']
   changed_when: false
+
+- name: Gen_certs | target ca-certificates directory
+  set_fact:
+    ca_cert_dir: |-
+      {% if ansible_os_family == "Debian" -%}
+      /usr/local/share/ca-certificates
+      {%- elif ansible_os_family == "RedHat" -%}
+      /etc/pki/ca-trust/source/anchors
+      {%- elif ansible_os_family == "CoreOS" -%}
+      /etc/ssl/certs
+      {%- endif %}
+
+- name: Gen_certs | add CA to trusted CA dir
+  copy:
+    src: "{{ kube_cert_dir }}/ca.pem"
+    dest: "{{ ca_cert_dir }}/kube-ca.crt"
+    remote_src: true
+  register: kube_ca_cert
+
+- name: Gen_certs | update ca-certificates (Debian/Ubuntu/CoreOS)
+  command: update-ca-certificates
+  when: kube_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS"]
+
+- name: Gen_certs | update ca-certificates (RedHat)
+  command: update-ca-trust extract
+  when: kube_ca_cert.changed and ansible_os_family == "RedHat"


@@ -11,12 +11,18 @@ DNS.1 = kubernetes
 DNS.2 = kubernetes.default
 DNS.3 = kubernetes.default.svc
 DNS.4 = kubernetes.default.svc.{{ dns_domain }}
+DNS.5 = localhost
+{% for host in groups['kube-master'] %}
+DNS.{{ 5 + loop.index }} = {{ host }}
+{% endfor %}
 {% if loadbalancer_apiserver is defined and apiserver_loadbalancer_domain_name is defined %}
-DNS.5 = {{ apiserver_loadbalancer_domain_name }}
+{% set idx = groups['kube-master'] | length | int + 5 %}
+DNS.{{ idx | string }} = {{ apiserver_loadbalancer_domain_name }}
 {% endif %}
 {% for host in groups['kube-master'] %}
 IP.{{ 2 * loop.index - 1 }} = {{ hostvars[host]['access_ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}
 IP.{{ 2 * loop.index }} = {{ hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}
 {% endfor %}
 {% set idx = groups['kube-master'] | length | int * 2 + 1 %}
-IP.{{ idx | string }} = {{ kube_apiserver_ip }}
+IP.{{ idx }} = {{ kube_apiserver_ip }}
+IP.{{ idx + 1 }} = 127.0.0.1
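Rendered for a hypothetical two-master cluster (hosts `node1`/`node2`, made-up addresses, assumed default `dns_domain` and apiserver service IP, no external LB), the SAN list would come out roughly as:

```
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = localhost
DNS.6 = node1
DNS.7 = node2
IP.1 = 10.0.0.11
IP.2 = 10.0.0.11
IP.3 = 10.0.0.12
IP.4 = 10.0.0.12
IP.5 = 10.233.0.1
IP.6 = 127.0.0.1
```

The IP pairs repeat when a host's `access_ip` and `ip` resolve to the same address, and `10.233.0.1` stands in for `kube_apiserver_ip`; the `localhost` and `127.0.0.1` entries are what let non-master nodes validate the certificate when talking to the apiserver through the local nginx proxy.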