This is nearly the last stage of moving all components to containers.
Kubectl will be called from the hyperkube image.
Remaining tasks:
* Move kube_version variable to kubernetes/preinstall
* Drop placeholder download.nothing requirement
kubelet via docker
kube-apiserver as a static pod
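For orientation, a static pod is just a manifest the kubelet reads from its manifest directory; a minimal sketch, with path, image vars and flags assumed rather than taken from the role:
```yaml
# e.g. /etc/kubernetes/manifests/kube-apiserver.manifest (path illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: "{{ hyperkube_image_repo }}:{{ hyperkube_image_tag }}"  # assumed var names
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers={{ etcd_access_endpoint }}
```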
Fixed etcd service start to be more tolerant of slow start.
Workaround: keep the kube_version variable in the download role, but
download no files, by creating a new "nothing" download entry.
Adds a new boolean configuration variable for the calico network plugin,
`ipip`. When it is enabled, the calico pool is created with the '--ipip'
option (IP-over-IP encapsulation across hosts).
Also refactor the pool creation tasks to simplify the logic and make them
more readable.
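A minimal sketch of the toggle and how the pool creation task might consume it (task shape and the subnet var are assumptions, not the role's verbatim code):
```yaml
# group_vars toggle introduced by this change:
ipip: false

# pool creation task (illustrative):
- name: Calico | Configure calico network pool
  command: "calicoctl pool add {{ kube_pods_subnet }}{% if ipip %} --ipip{% endif %}"
```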
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and use cases.
* Add loadbalancer_apiserver_localhost (default false). If enabled, it overrides
the external LB and expects localhost:443/8080 to be the new internal-only
frontends (see the sketch after this list).
* Add kube_apiserver_multiaccess to ignore loadbalancers and make clients
access the apiservers via a comma-separated list of access_ip/ip/ansible ip
(the default mode). When disabled, clients use the given loadbalancers.
* Define the connection security mode for kube controllers, schedulers, proxies.
It is insecure by default, which is the current deployment choice.
* Rework the groups['kube-master'][0] hardcode defining the apiserver
endpoints.
* Improve grouping of vars and add facts for kube_apiserver.
* Define kube_apiserver_insecure_bind_address as a fact, and add more
facts for ease of use.
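A hedged sketch of the toggles (the last var name is mine, for illustration only):
```yaml
# Clients hit every apiserver directly via a comma-separated list (default):
kube_apiserver_multiaccess: true
# Replace the external LB with localhost-only frontends on :443/:8080:
loadbalancer_apiserver_localhost: false
# Connection security mode for controllers/schedulers/proxies, insecure by
# default (illustrative name, not necessarily the real var):
kube_master_components_access_mode: insecure
```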
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Improved the docker reload command to wait for etcd to be
up before proceeding. Switched reload to restart
because a service cannot be reloaded unless it is
guaranteed to be in the running state.
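Roughly the handler shape described (host, port and timeout illustrative):
```yaml
- name: Wait for etcd up
  wait_for:
    host: "127.0.0.1"
    port: 2379
    timeout: 60

- name: Restart docker
  service:
    name: docker
    state: restarted
```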
Move set_facts into the preinstall scope, so every role
can see them. For example, network plugins need to see the etcd_endpoint.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Enforce an etcd-proxy role for k8s-cluster group members. This
provides an HA layout for all of the k8s cluster's internal clients.
* Proxies run on each node in the group as separate etcd
instances in readwrite proxy mode, listening on the given endpoint,
which is either access_ip:2379 or localhost:2379.
* The notion of 'kube_etcd_multiaccess' is: ignore endpoints and
loadbalancers and use the etcd members' IPs as a comma-separated
list. Otherwise, clients use the local endpoint provided by an
etcd-proxy instance on each etcd node. Networking plugins always
use that access mode.
* Fix the apiserver's etcd servers args to use the etcd_access_endpoint.
* Fix the flannel/calico networking plugins to use the etcd_endpoint.
* Fix the name env var so it is set for non-masters as well.
* Fix etcd_client_url, which was not used anywhere, and deduplicate
the etcd_* facts evaluation that was repeated in a few places.
* Define proxy modes only in the env file, if not a master. Drop
the automatic proxy mode decisions for etcd nodes in init/unit scripts.
* Use Wants= instead of Requires=, as "this is the recommended way to
hook start-up of one unit to the start-up of another unit".
* Make apiserver/calico Wants= etcd-proxy to keep it always up (see the
sketch after this list).
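The last two items amount to a Wants=/After= pairing in the consumer units; a sketch of the fragment a template task might render (variable and unit names assumed):
```yaml
# Rendered into e.g. the kube-apiserver unit file by a template task;
# the unit text in the block scalar is what matters here:
unit_wants_fragment: |
  [Unit]
  Wants=etcd-proxy.service
  After=etcd-proxy.service
```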
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
Running etcd in Docker reduces the number of individual file
downloads and services running on the host.
Note: etcd container v3.0.1 moves bindir to /usr/local/bin
Fixes: #298
* Set ETCD_INITIAL_CLUSTER_STATE from `new` to `existing`,
because the `new` parameter only makes sense at the cluster
assembly stage.
* If the cluster exists and the current node is not a part
of it, add it with the command `etcdctl member add <name> <peer_url>`.
Closes kubespray/kargo/#270
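A sketch of that join step (task shape and fact names assumed; `member add <name> <peerURL>` is the etcdctl v2 syntax):
```yaml
- name: Etcd | Add member to existing cluster
  command: "etcdctl member add {{ etcd_member_name }} {{ etcd_peer_url }}"
  when: etcd_cluster_exists and not etcd_member_in_cluster  # assumed facts
```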
This should make things a little more composable:
by making these roles meta roles that perform no
actions by default, we allow each role to own its own
resources.
Kubernetes API server has an option:
```
--advertise-address=<nil>: The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
```
kargo does not set --bind-address, thus the apiserver binds to eth0; in vagrant and similar
environments this causes issues because nodes cannot talk to each other over eth0.
This sets `--advertise-address` to `ip` if it is set; otherwise the default behavior
is preserved by using `ansible_default_ipv4.address`.
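The fallback described, as a one-line fact (fact name assumed for the sketch):
```yaml
- set_fact:
    kube_apiserver_advertise_address: "{{ ip | default(ansible_default_ipv4.address) }}"
```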
Using a shared folder can cause race conditions for the download
role as it tries to download files on all the nodes to the same
shared path. This adds a flag to run the tasks in the download
role on just one node.
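The flag as it would appear in group vars (name assumed for this sketch):
```yaml
# Download files on one node only, avoiding races on the shared path:
download_run_once: true
```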
The check_certs task "Check_certs | Set 'sync_certs' to true" was failing
because the dict did not exist; this sets defaults that allow the
conditionals to behave correctly.
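In the abstract, the fix gives the registered result a default before dereferencing it (names illustrative, not the role's actual ones):
```yaml
- name: "Check_certs | Set 'sync_certs' to true"
  set_fact:
    sync_certs: true
  # the default keeps the conditional from hitting an undefined dict:
  when: (kubecert_node | default({'changed': false})).changed
```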
This allows you to simply run `vagrant up` to get a 3 node HA cluster.
* Creates a dynamic inventory and uses inventory/group_vars/all.yml.
* Commented out the lines in inventory.example so that ansible doesn't try to use it.
* Added requirements.txt to give an easy way to install ansible/ipaddr.
* Added gitignore files to stop attempts to save unwanted files.
* Changed `Check if kube-system exists` to use `failed_when: false` instead of
`ignore_errors`.
On CoreOS, where there is no package management, perform zero-trip
loops instead of raising an exception when iterating over a member
of an undefined variable.
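The zero-trip pattern, in brief (task and variable names illustrative):
```yaml
# On CoreOS the package list variable is undefined; default to an empty
# list so the loop runs zero times instead of failing:
- name: Remove conflicting packages
  package:
    name: "{{ item }}"
    state: absent
  with_items: "{{ pkgs | default([]) }}"
```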
When kubespray is deployed on OpenStack, the kube-controller-manager is now aware of the cluster and can create new cinder volumes automatically if the PersistentVolumeClaims are annotated accordingly.
Note that this is an alpha feature of kubernetes 1.2
Currently kubespray does not install kubernetes in a way that allows cinder volumes to be used. This commit provides the necessary cloud configuration file and configures kubelet and kube-apiserver to use it.
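The wiring described, sketched as flags handed to both daemons (variable names and path assumed; --cloud-provider/--cloud-config are the upstream kubernetes flags):
```yaml
# Both kubelet and kube-apiserver point at the same rendered cloud config:
kubelet_cloud_args: "--cloud-provider=openstack --cloud-config={{ kube_config_dir }}/cloud_config"
apiserver_cloud_args: "--cloud-provider=openstack --cloud-config={{ kube_config_dir }}/cloud_config"
```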
A freshly-installed CoreOS system does not always have a hostname configured.
This causes problems for etcd and BGP mesh configuration for Calico.
Assign the Ansible inventory name as the hostname during CoreOS bootstrap
if the hostname is the default ("localhost").
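A minimal sketch of that bootstrap step (condition shape assumed):
```yaml
- name: Assign inventory name as hostname
  hostname:
    name: "{{ inventory_hostname }}"
  when: ansible_hostname == 'localhost'
```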
1. AWS has issues with ext4 (use xfs instead, for CentOS only)
2. Make sure all the centos config files are included in the systemd config
3. Make sure that network options are set in the correct file by os family
This allows downstream items like opencontrail and others to change variables
in expected locations.
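Point 1 amounts to an os-family conditional; a sketch (variable name illustrative):
```yaml
# Use xfs on the RedHat family (AWS + ext4 issues), ext4 elsewhere:
filesystem_type: "{{ 'xfs' if ansible_os_family == 'RedHat' else 'ext4' }}"
```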
Each node can have 3 IPs.
1. ansible_default_ipv4 - whatever ansible thinks is the first IPv4 address,
usually the one with the default gw.
2. ip - An address to use on the local node to bind listeners and do local
communication. For example, Vagrant boxes have a first address that is the
NAT bridge and is common to all nodes; the second address/interface should
be used.
3. access_ip - An address to use for node-to-node access. This is assumed to
be used by other nodes to access the node and may not actually be assigned
on the node, for example an AWS public IP that is not assigned to the node.
This updates the places where addresses are used to use either ip or access_ip
and walks up the list to find an address.
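Walking up the list is just chained defaults; a sketch:
```yaml
# Prefer access_ip, then ip, then whatever ansible detected:
etcd_access_address: "{{ access_ip | default(ip | default(ansible_default_ipv4.address)) }}"
```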
fix etcd configuration for nodes
fix wrong calico checksums
use a var named etcd_bin_dir
fix etcd handlers for sysvinit
use a var named etcd_bin_dir in the sysvinit script
review etcd configuration