HA endpoints for K8s
====================
The following components require highly available endpoints:

* etcd cluster,
* kube-apiserver service instances.

The latter relies on third-party reverse proxies, such as Nginx or HAProxy, to
achieve the same goal.
Etcd
----
The `etcd_access_endpoint` fact provides an access pattern for clients, and
the `etcd_multiaccess` group var (defaults to `True`) controls that behavior.
It makes deployed components access the etcd cluster members
directly: `http://ip1:2379, http://ip2:2379,...`. This mode assumes the clients
do the load balancing and handle HA for connections. Note that a pod definition
of the flannel networking plugin always uses a single `--etcd-server` endpoint!
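For example, keeping the default multi-access behavior explicit in the group
vars might look like the following (a sketch; the variable name comes from the
text above, the file path is an assumption):

```
# group_vars/all.yml (path is an assumption)
# When True (the default), clients are configured with all etcd member
# URLs (http://ip1:2379,http://ip2:2379,...) and balance across them.
etcd_multiaccess: true
```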
Kube-apiserver
--------------
K8s components require a loadbalancer to access the apiservers via a reverse
proxy. Kargo includes support for an nginx-based proxy that resides on each
non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
where an external LB or virtual IP management is inconvenient. This option is
configured by the variable `loadbalancer_apiserver_localhost` (defaults to
`False`). You may also define the port the local internal loadbalancer uses by
changing `nginx_kube_apiserver_port`. This defaults to the value of
`kube_apiserver_port`. It is also important to note that Kargo will only
configure kubelet and kube-proxy on non-master nodes to use the local internal
loadbalancer.
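For instance, enabling the localhost loadbalancer on a non-default port could
look like this in the group vars (a sketch; the variable names are from the
text above, the port value is illustrative):

```
# Enable the nginx-based proxy on each non-master node.
loadbalancer_apiserver_localhost: true
# Port the local nginx proxy listens on (defaults to kube_apiserver_port).
nginx_kube_apiserver_port: 8443
```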
If you choose NOT to use the local internal loadbalancer, you will need to
configure your own loadbalancer to achieve HA. Note that deploying a
loadbalancer is up to the user and is not covered by the Ansible roles in
Kargo. By default, Kargo only configures a non-HA endpoint, which points to
the `access_ip` or IP address of the first server node in the `kube-master`
group. It can also configure clients to use endpoints for a given loadbalancer
type. The following diagram shows how traffic to the apiserver is directed.
![Image](figures/loadbalancer_localhost.png?raw=true)
Note: Kubernetes master nodes still use insecure localhost access because
there are bugs in Kubernetes <1.5.0 in using TLS auth on the master role
services. This means the backends receive unencrypted traffic, which may be a
security issue when interconnecting different nodes, or may not be, if those
nodes belong to an isolated management network without external access.
A user may opt to use an external loadbalancer (LB) instead. An external LB
provides access for external clients, while the internal LB accepts client
connections only on localhost.
Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is
an example configuration for a HAProxy service acting as an external LB:
```
listen kubernetes-apiserver-https
  bind <VIP>:8383
  option ssl-hello-chk
  mode tcp
  timeout client 3h
  timeout server 3h
  server master1 <IP1>:443
  server master2 <IP2>:443
  balance roundrobin
```
And the corresponding example global vars config:
```
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
loadbalancer_apiserver:
  address: <VIP>
  port: 8383
```
This domain name, or the default "lb-apiserver.kubernetes.local", will be
inserted into the `/etc/hosts` file of all servers in the `k8s-cluster` group.
Note that the HAProxy service should be HA as well and requires VIP
management, which is out of the scope of this doc. Specifying an external LB
overrides any internal localhost LB configuration.
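For illustration only, VIP management for such an external HAProxy pair is
often done with a tool like keepalived (this sketch is not part of Kargo; the
interface name and priorities are assumptions):

```
vrrp_instance haproxy_apiserver {
    state MASTER            # BACKUP on the second HAProxy node
    interface eth0          # assumption: adjust to the host's NIC
    virtual_router_id 51
    priority 101            # use a lower priority on the BACKUP node
    virtual_ipaddress {
        <VIP>
    }
}
```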
Note: In order to achieve HA for HAProxy instances, those must run on each
node in the `k8s-cluster` group as well, but they require no VIP, thus no VIP
management.
Access endpoints are evaluated automagically, as follows:

| Endpoint type            | kube-master   | non-master          |
|--------------------------|---------------|---------------------|
| Local LB                 | http://lc:p   | https://lc:nsp      |
| External LB, no internal | https://lb:lp | https://lb:lp       |
| No ext/int LB (default)  | http://lc:p   | https://m[0].aip:sp |
Where:
* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `lc` - localhost;
* `p` - insecure port, `kube_apiserver_insecure_port`;
* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defaults to the secure port;
* `ip` - the node IP, defaults to the ansible IP;
* `aip` - `access_ip`, defaults to the ip.
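
As a worked example of the default (no ext/int LB) row, assuming the stock
ports (`kube_apiserver_insecure_port: 8080` and `kube_apiserver_port: 443`,
both values are assumptions), the evaluated endpoints would be:

```
# on kube-master nodes (insecure localhost access):
http://localhost:8080
# on non-master nodes (secure access to the first master's access_ip):
https://<m[0] access_ip>:443
```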