Move the resolvconf update to the network restart handler to
ensure the changes are applied to /etc/resolv.conf.
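A minimal sketch of the handler chaining this describes; the handler
names and the Debian service name are assumptions:
```
# handlers sketch (names are illustrative)
- name: Preinstall | restart network
  service:
    name: networking
    state: restarted
  notify: Preinstall | update resolvconf

- name: Preinstall | update resolvconf
  command: /sbin/resolvconf -u
```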
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Shorten deployment time:
- Remove redundant roles if duplicated by a dependency, and vice versa.
- When a member of the k8s-cluster group, always install docker as a
dependency of the etcd role (see the sketch below) and drop the docker
role from cluster.yaml.
- Drop the etcd and node role dependencies from the master role, as they
are already covered by the node role in the k8s-cluster group. Copy
defaults for master from the node role.
- Decouple the master, node and secrets roles' handlers and vars so they
can be used without cross references.
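As an illustration of the docker-as-dependency change, the etcd role's
meta could look roughly like this (a sketch, not the exact diff):
```
# roles/etcd/meta/main.yml (sketch)
dependencies:
  - role: docker
    when: inventory_hostname in groups['k8s-cluster']
```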
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Ensure additional nameserver/search entries, if defined as vars.
* Don't back up changed dhclient hooks, as the backups would be
executed by dhclient as well, which is not what we want.
* For the Debian OS family only:
- Rename the nodnsupdate hook so that the resolvconf hook is always
sourced before it (see the sketch below).
- Ensure dhclient is restarted via a network restart to apply the
nodnsupdate hook.
* For the RedHat OS family the fix is TBD; it doesn't work the same way.
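For reference, the nodnsupdate hook just overrides dhclient's
make_resolv_conf function; a sketch of deploying it, with an assumed
file name chosen so the resolvconf hook sorts (and is sourced) first:
```
# sketch (the hook file name is an assumption)
- name: Install nodnsupdate dhclient hook
  copy:
    dest: /etc/dhcp/dhclient-enter-hooks.d/znodnsupdate
    mode: 0755
    content: |
      #!/bin/sh
      # Prevent dhclient from rewriting /etc/resolv.conf
      make_resolv_conf() { : ; }
```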
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
If resolvconf was installed and then removed, the file
/etc/resolvconf/resolv.conf.d/head remains on the filesystem,
so change discovery of the 'resolvconf' executable to check whether
it can be located with the 'which resolvconf' command.
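A sketch of the changed discovery, assuming the result is registered
and later tasks condition on its return code:
```
- name: Check whether the resolvconf executable is present
  shell: which resolvconf
  register: resolvconf_check
  failed_when: false
  changed_when: false

# subsequent resolvconf tasks would then use:
#   when: resolvconf_check.rc == 0
```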
Hyperkube from CoreOS now ships with all binaries required for
calico and flannel (but not weave). This simplifies deployment for
some network plugin scenarios, since CNI images no longer need to be
downloaded.
TODO: Optionally disable downloading calico to /opt/cni/bin
Creating the unit using default settings early on
and then changing it during the network_plugin section
leads to too many docker restarts and duplicated code.
Reversed the Wants= dependence on docker.service so that reloading
systemd does not restart docker.
Consolidated all docker restart handlers.
* Add to the docker systemd units:
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process.
* Add the missing DOCKER_OPTIONS for the calico/weave docker systemd units.
* Change Requires= to the less strict and less failure-prone Wants=, and
add missing Wants= entries for After=.
* Align Wants=/After= so that if Wants=foo, After= lists foo as well.
* Make units that want/order after docker.service ask for docker.socket
as well.
* Move "docker rm -f" commands from ExecStartPre= to ExecStopPost=
hooks to ensure start attempts issued by Wants= are non-destructive
(see the unit sketch below).
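Taken together, the unit changes amount to roughly the following
fragment, shown here as an Ansible copy task; the service name, image
and options variable are illustrative, not the exact template:
```
- name: Write calico-node unit (sketch)
  copy:
    dest: /etc/systemd/system/calico-node.service
    content: |
      [Unit]
      Wants=docker.socket docker.service
      After=docker.socket docker.service
      [Service]
      ExecStart=/usr/bin/docker run --net=host --privileged \
        --name=calico-node {{ docker_options | default('') }} calico/node:latest
      ExecReload=/bin/kill -s HUP $MAINPID
      ExecStopPost=/usr/bin/docker rm -f calico-node
      Delegate=yes
      KillMode=process
```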
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
etcd facts are generated in kubernetes/preinstall, so etcd nodes need
to be evaluated before the rest of the deployment.
Moved several directory facts from kubernetes/node to
kubernetes/preinstall because they are not backward dependent.
* Add HA docs for API server.
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and use cases.
* Use facts for kube_apiserver to avoid repeating code and to enable use
of LB endpoints.
* Use a /healthz check for the wait-for-apiserver task (see the sketch below).
* Use a single endpoint for kubelet instead of the list of apiservers.
* Specify kube_apiserver_count for the HA layout.
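A sketch of the /healthz wait, assuming a kube_apiserver_endpoint fact
as described above:
```
- name: Wait for the apiserver to answer on /healthz
  uri:
    url: "{{ kube_apiserver_endpoint }}/healthz"
    validate_certs: no
  register: apiserver_healthz
  until: apiserver_healthz.status == 200
  retries: 20
  delay: 5
```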
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Nearly the last stage of sourcing all components from containers.
Kubectl will be called from the hyperkube image.
Remaining tasks:
* Move kube_version variable to kubernetes/preinstall
* Drop placeholder download.nothing requirement
kubelet via docker
kube-apiserver as a static pod
Fixed etcd service start to be more tolerant of slow start.
Workaround to keep kube_version in the download role without
downloading any files, by creating a new "nothing" download entry.
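For context on running kube-apiserver as a static pod (mentioned
above), this is the general shape of a manifest dropped into the
kubelet manifest directory; the image, tag and flags here are
illustrative, not the exact template:
```
# /etc/kubernetes/manifests/kube-apiserver.manifest (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: quay.io/coreos/hyperkube:{{ kube_version }}
      command:
        - /hyperkube
        - apiserver
        - --etcd-servers={{ etcd_access_endpoint }}
        - --service-cluster-ip-range={{ kube_service_addresses }}
```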
Adds a new boolean configuration variable `ipip` for the calico network
plugin. When it is enabled, the calico pool is created with the '--ipip'
option (IP-over-IP encapsulation across hosts).
Also refactor pool creation tasks to simplify logic and make tasks
more readable.
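A sketch of the resulting pool task; the pool CIDR variable is
illustrative:
```
- name: Calico | Configure calico network pool
  command: >-
    calicoctl pool add {{ kube_pods_subnet }}
    {{ '--ipip' if ipip | default(false) else '' }}
  run_once: true
```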
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and use cases.
* Add loadbalancer_apiserver_localhost (default false). If enabled, override
the external LB and expect localhost:443/8080 to be the new internal-only
frontends.
* Add kube_apiserver_multiaccess to ignore load balancers and make clients
access the apiservers via a comma-separated list of access_ip/ip/ansible IPs
(the default mode). When disabled, clients use the given load balancers.
* Define the connection security mode for kube controllers, schedulers and
proxies. It is insecure by default, which is the current deployment choice.
* Rework the groups['kube-master'][0] hardcode that defines the apiserver
endpoints.
* Improve grouping of vars and add facts for kube_apiserver.
* Define kube_apiserver_insecure_bind_address as a fact, and add more
facts for ease of use (see the sketch below).
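For example, the endpoint facts can be assembled along these lines
(a sketch; variable names are partly assumed):
```
- set_fact:
    kube_apiserver_address: "{{ ip | default(ansible_default_ipv4['address']) }}"

- set_fact:
    kube_apiserver_endpoint: |-
      {% if loadbalancer_apiserver_localhost | default(false) -%}
      https://localhost:443
      {%- elif loadbalancer_apiserver is defined -%}
      https://{{ loadbalancer_apiserver.address }}:{{ loadbalancer_apiserver.port }}
      {%- else -%}
      https://{{ access_ip | default(kube_apiserver_address) }}:{{ kube_apiserver_port | default(443) }}
      {%- endif %}
```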
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Improved the docker reload command to wait for etcd to be
up before proceeding. Switched reload to run restart,
because a unit cannot be reloaded unless it is guaranteed to be
in the running state.
Move set_facts to the preinstall scope so every role
can see them, for example so network plugins can see the etcd_endpoint.
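A sketch of the reworked handler: restart docker, then poll etcd's
/health endpoint before letting the play proceed (the URL variable is
an assumption):
```
- name: Docker | restart docker
  service:
    name: docker
    state: restarted
  notify: Docker | wait for etcd

- name: Docker | wait for etcd
  uri:
    url: "{{ etcd_endpoint | default('http://127.0.0.1:2379') }}/health"
  register: etcd_health
  until: etcd_health.status == 200
  retries: 10
  delay: 5
```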
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Enforce the etcd-proxy role for k8s-cluster group members. This
provides an HA layout for all of the k8s cluster's internal clients.
* Proxies run on each node in the group as separate etcd
instances in readwrite proxy mode, listening on the given endpoint,
which is either access_ip:2379 or localhost:2379.
* The notion of 'kube_etcd_multiaccess' is: ignore endpoints and
load balancers and use the etcd members' IPs as a comma-separated
list. Otherwise, clients use the local endpoint provided by the
etcd-proxy instance on each node. Networking plugins always
use that access mode (see the sketch below).
* Fix the apiserver's etcd servers args to use the etcd_access_endpoint.
* Fix the networking plugins flannel/calico to use the etcd_endpoint.
* Fix the name env var to be set for non-masters as well.
* Fix etcd_client_url, which was not used anywhere, and deduplicate the
etcd_* facts evaluation done in a few places.
* Define proxy modes only in the env file, if not a master. Drop
the automatic proxy mode decisions for etcd nodes in init/unit scripts.
* Use Wants= instead of Requires= as "This is the recommended way to
hook start-up of one unit to the start-up of another unit"
* Make apiserver/calico Wants= etcd-proxy to keep it always up
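The access facts described above boil down to roughly this sketch
(exact expressions assumed):
```
- set_fact:
    etcd_access_addresses: |-
      {% for host in groups['etcd'] -%}
      http://{{ hostvars[host]['access_ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}:2379{{ ',' if not loop.last else '' }}
      {%- endfor %}

- set_fact:
    etcd_endpoint: "http://127.0.0.1:2379"

- set_fact:
    etcd_access_endpoint: "{{ etcd_access_addresses if kube_etcd_multiaccess | default(false) else etcd_endpoint }}"
```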
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
Running etcd in Docker reduces the number of individual file
downloads and services running on the host.
Note: etcd container v3.0.1 moves bindir to /usr/local/bin
Fixes: #298
* Set ETCD_INITIAL_CLUSTER_STATE from `new` to `existing`,
because the `new` parameter makes sense only at the cluster assembly
stage.
* If the cluster exists and the current node is not a part
of it, add it with the command `etcdctl member add <name> <url>`.
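A sketch of the join step; the member name and peer URL variables are
assumptions:
```
- name: Check if this node is already a cluster member
  shell: "etcdctl member list | grep -q {{ etcd_member_name }}"
  register: etcd_member_in_cluster
  failed_when: false
  changed_when: false

- name: Join the existing etcd cluster
  command: "etcdctl member add {{ etcd_member_name }} {{ etcd_peer_url }}"
  when: etcd_member_in_cluster.rc != 0
```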
Closes kubespray/kargo/#270
This should make things a little more composable:
by making these roles meta roles that perform no
actions by default, we allow each role to own its own
resources.
Kubernetes API server has an option:
```
--advertise-address=<nil>: The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
```
Kargo does not set --bind-address, so it binds to eth0; in Vagrant and similar
environments this causes issues because nodes cannot talk to each other over
eth0. This sets `--advertise-address` to `ip` if it is set; otherwise the
default behavior is preserved by using `ansible_default_ipv4.address`.
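The resulting flag in the apiserver manifest/template is roughly:
```
# fragment of the kube-apiserver command list (sketch)
command:
  - /hyperkube
  - apiserver
  - "--advertise-address={{ ip | default(ansible_default_ipv4['address']) }}"
```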
Using a shared folder can cause race conditions for the download
role, as it tries to download files on all the nodes to the same
shared path. This adds a flag to run the tasks in the download
role on just one node.
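A sketch of how such a flag can gate the download tasks; the flag name
download_run_once and the downloads structure are assumptions:
```
- name: Download binaries and container images
  get_url:
    url: "{{ item.url }}"
    dest: "{{ item.dest }}"
  with_items: "{{ downloads | default([]) }}"
  run_once: "{{ download_run_once | default(false) | bool }}"
```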
The check_certs task "Check_certs | Set 'sync_certs' to true" was failing
because the dict did not exist; this sets defaults that allow
the conditionals to behave correctly.
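The fix amounts to defaulting the possibly-missing dict before testing
it, along these lines (the dict and key names here are stand-ins):
```
- name: "Check_certs | Set 'sync_certs' to true"
  set_fact:
    sync_certs: true
  when: not (master_certs | default({})).get('exists', false)
```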
This allows you to simply run `vagrant up` to get a 3 node HA cluster.
* Creates a dynamic inventory and uses the inventory/group_vars/all.yml
* Commented lines in inventory.example so that Ansible doesn't try to use it.
* Added requirements.txt to give an easy way to install ansible/ipaddr.
* Added gitignore files to stop attempts to save unwanted files.
* Changed `Check if kube-system exists` to use `failed_when: false` instead of
`ignore_errors`.
On CoreOS, where there is no package management, perform zero-trip
loops instead of throwing an exception when iterating over a member
of an undefined variable.
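The pattern is just to default the loop source to an empty list, e.g.:
```
# sketch: a zero-trip loop when the package list is undefined (CoreOS)
- name: Install required packages
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ required_pkgs | default([]) }}"
```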
When kubespray is deployed on OpenStack, the kube-controller-manager is now aware of the cluster and can create new cinder volumes automatically if the PersistentVolumeClaims are annotated accordingly.
Note that this is an alpha feature of kubernetes 1.2
Currently kubespray does not install kubernetes in a way that allows cinder volumes to be used. This commit provides the necessary cloud configuration file and configures kubelet and kube-apiserver to use it.
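The wiring is roughly a cloud config file plus the matching flags on
kubelet and kube-apiserver; the path and the use of OS_* environment
variables below are assumptions:
```
- name: Write OpenStack cloud config
  copy:
    dest: /etc/kubernetes/cloud_config
    content: |
      [Global]
      auth-url="{{ lookup('env', 'OS_AUTH_URL') }}"
      username="{{ lookup('env', 'OS_USERNAME') }}"
      password="{{ lookup('env', 'OS_PASSWORD') }}"
      tenant-name="{{ lookup('env', 'OS_TENANT_NAME') }}"

# kubelet and kube-apiserver then receive:
#   --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config
```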