* Add dns_replicas and dns_memory/cpu_limit/requests vars for
DNS-related apps.
* When kube_log_level=4, log dnsmasq queries as well.
* Add log level control for skydns (part of the kubedns app).
* Add limits/requests vars for dnsmasq (part of the kubedns app) and
the dnsmasq daemon set.
* Drop string defaults for kube_log_level as it is an int and
is defined in the global vars as well.
* Add docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
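For reference, a minimal sketch of these knobs in group_vars; the variable names follow this change, but the values shown are illustrative assumptions, not the role's actual defaults:
```
dns_replicas: 1
dns_cpu_limit: 100m
dns_memory_limit: 170Mi
dns_cpu_requests: 70m
dns_memory_requests: 50Mi
```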
When download_run_once with download_localhost is used, docker is
expected to be running on the delegate localhost. That may not be
the case for a non-localhost delegate, which is otherwise the
kube-master. The dnsmasq role, had it been invoked early before
deployment starts, would then fail because of the missing docker
dependency.
* Fix that dependency on docker and do not pre-download the dnsmasq
image for the dnsmasq role if download_localhost is disabled.
* Remove become: false for docker CLI invocations, because allowing
users to access the docker CLI w/o sudo is not the common pattern.
* Fix the opt bin path hack for the localhost delegate to ignore
errors, as it otherwise fails with "sudo password required".
* Describe download_run_once with download_localhost use case in docs
as well.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
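A sketch of how the conditional pre-download dependency described above might look in the dnsmasq role's meta; the `downloads.dnsmasq` entry name is hypothetical:
```
dependencies:
  - role: download
    file: "{{ downloads.dnsmasq }}"  # hypothetical download entry
    when: download_run_once and download_localhost
```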
Use cloud-init config to replace /etc/resolv.conf with the
content kubelet needs to properly configure hostnet pods.
Do not use systemd-resolved yet, see
https://coreos.com/os/docs/latest/configuring-dns.html
"Only nss-aware applications can take advantage of the
systemd-resolved cache. Notably, this means that statically
linked Go programs and programs running within Docker/rkt
will use /etc/resolv.conf only, and will not use the
systemd-resolve cache."
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
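A minimal cloud-config sketch of the replacement; the nameserver and search domains are illustrative values only:
```
#cloud-config
write_files:
  - path: /etc/resolv.conf
    permissions: "0644"
    content: |
      nameserver 10.233.0.3
      search default.svc.cluster.local svc.cluster.local cluster.local
      options ndots:5
```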
W/o this patch, the "Download containers" task may be skipped
when running on the delegate node due to a wrong "when" condition.
Then it fails to upload the nginx image to the nodes as well.
Fix the download nginx dependency so it can always be pushed to
the nodes when download_run_once is enabled.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
When setting permissions for the containers download/upload dir we're
using `ansible_ssh_user`. But if the playbook is executed without the
user being explicitly set, `ansible_ssh_user` may be undefined.
In such situations dir ownership will default to `ansible_user_id`.
Closes: #644
According to http://kubernetes.io/docs/user-guide/images/ :
By default, the kubelet will try to pull each image from the
specified registry. However, if the imagePullPolicy property
of the container is set to IfNotPresent or Never, then a local
image is used (preferentially or exclusively, respectively).
Use the IfNotPresent value to allow images prepared by the download
role dependencies to be used by kubelet without pull errors that
leave apps blocked in the ImagePullBackOff/Error state even when
the images already exist on the host.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
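For illustration, a pod spec fragment with the policy applied; the container name and image tag are placeholders:
```
containers:
  - name: kubedns
    image: gcr.io/google_containers/kubedns-amd64:1.7
    imagePullPolicy: IfNotPresent
```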
Pre-download all required container images as roles' deps.
Drop the unused flannel-server-helper image pre-download.
Improve the pods creation post-install test to use the pre-downloaded
busybox image.
Improve logs collection script with kubectl describe, fix sudo/etcd/weave
commands.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Fix unreliable waiting for the apiserver to become ready.
Remove logfile mount to align with the rest of static pods
and because containers shall write logs to stdout only.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
In the etcd handler, the reload etcd action
was called after ansible waits for etcd to be
up. This means that the health checks which are
called immediately after it fail (resulting in the etcd
role always failing and never finishing).
This patch changes the order to move the 'wait for etcd
up' resource after the 'reload etcd' resource, ensuring that
the service is up before the health check is called.
* Add download_localhost for the download_run_once mode, which
uses the ansible host (a travis node in the CI case) to store and
distribute containers across cluster nodes in the inventory.
Defaults to false.
* Rework the download_run_once logic to fix idempotency of uploading
containers.
* For Travis CI, enable docker image caching and run Travis
workers with sudo enabled as a dependency.
* For Travis CI, deploy with download_localhost and download_run_once
enabled to shorten the dev path drastically.
* Add compression for saved container images. Defaults to 'best'.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Aleksandr Didenko <adidenko@mirantis.com>
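A sketch of the save-with-compression step, assuming gzip -9 stands in for the 'best' default; the image/fname vars are hypothetical:
```
- name: Download | save container image (sketch)
  shell: >
    docker save {{ image.repo }}:{{ image.tag }}
    | gzip -9 > {{ local_release_dir }}/containers/{{ image.fname }}.tar.gz
```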
Add one more step (task) to the containers download/upload sequence:
copy saved .tar containers to the ansible host (delegate_to: localhost).
Then upload the images to the target nodes. It uses the synchronize module,
so if the ansible host (localhost) is the same host as kube-master[0], the
new task causes no issues and the copy-to-localhost step is
basically skipped.
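A sketch of that copy task, assuming a local_release_dir layout shared by the nodes and the ansible host:
```
- name: Download | copy saved images to the ansible host (sketch)
  synchronize:
    src: "{{ local_release_dir }}/containers/"
    dest: "{{ local_release_dir }}/containers/"
    mode: pull
  delegate_to: localhost
```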
- Move CNI configuration from `kubernetes/node` role to
`network_plugin/canal`
- Create SSL dir for Canal and symlink etcd SSL files
- Add needed options to `canal-config` configmap
- Run flannel and calico-node containers with proper configuration
Since version 'v1.0.0-beta' calicoctl is written
in Go and its API differs from the old Python-based
utility. Added support for both the old and new versions
of the utility.
- Drop debugs from collect-info playbook
- Drop sudo from collect-info step and add target dir var (required for travis jobs)
- Label all k8s apps, including static manifests
- Add logs for K8s apps to be collected as well
- Fix upload to GCS as a public-read tarball
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
'etcd_cert_dir' variable is missing from 'kubernetes-apps/ansible'
role which breaks Calico policy controller deployment.
Also fixing calico-policy-controller.yml.
Related bug: https://github.com/ansible/ansible/issues/15405
Uses tar and register because the synchronize module cannot sudo on the
remote side correctly and copy is too slow.
This patch dramatically cuts down the number of tasks to process
for cert synchronization.
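A sketch of the tar-and-register pattern, assuming a kube_cert_dir var for the certs location:
```
- name: Gen_certs | gather master certs (sketch)
  shell: tar czf - -C {{ kube_cert_dir }} . | base64 --wrap=0
  register: master_cert_data
  delegate_to: "{{ groups['kube-master'][0] }}"

- name: Gen_certs | write certs on the nodes (sketch)
  shell: echo "{{ master_cert_data.stdout }}" | base64 -d | tar xzf - -C {{ kube_cert_dir }}
```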
* Don't push containers if not changed
* Do preinstall role only once and redistribute defaults to
corresponding roles
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Change the kubelet --hostname-override flag to use the ansible_hostname variable, which should be more consistent with the value required by cloud providers.
Add an ansible_hostname alias to /etc/hosts when it differs from inventory_hostname, to overcome node name limitations; see https://github.com/kubernetes/kubernetes/issues/22770
Signed-off-by: Chad Swenson <chadswen@gmail.com>
test to change the machine type
Revert "test to change the machine type"
This reverts commit 7a91f1b5405a39bee6cb91940b09a0b0f9d3aee1.
Use the Google DNS server when no upstream DNS servers are defined.
Comment out upstream_dns_servers.
Update documentation.
remove deprecated kubelet flags
Revert "remove deprecated kubelet flags"
This reverts commit 21e3b893c896d0291c36a07d0414f4cb88b8d8ac.
The requirements for the network policy feature are described here [1]. In
order to enable it, appropriate configuration must be provided to the CNI
plugin and the Calico policy controller must be set up. Besides that, the
corresponding extensions need to be enabled in the k8s API.
To turn on the feature, the user can define the `enable_network_policy`
customization variable for Ansible.
[1] http://kubernetes.io/docs/user-guide/networkpolicies/
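A sketch of turning the feature on from the inventory vars:
```
enable_network_policy: true
```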
Also adds all masters by hostname and localhost/127.0.0.1 to
apiserver SSL certificate.
Includes documentation update on how localhost loadbalancer works.
Initially this was removed, but it turns out that services that
perform reverse lookups (such as MariaDB) will encounter severe
performance degradation with this disabled.
- Allow overwriting the calico CNI binaries copied from hyperkube
with custom ones.
- Fix the calico-ipam deployment (it had the wrong source in rsync)
- Make the copy from hyperkube idempotent (use rsync instead of cp)
- Remove some orphaned comments
* Add a var for ndots (default 5) and put it into hosts' /etc/resolv.conf.
* Bump the kube dns container image to v1.7.
* In order to apply changes to kubelet, notify it to
be restarted on changes made to /etc/resolv.conf. Ignore errors, as the kubelet
may not yet be present at the moment the notification is processed.
* Remove the unnecessary kubelet restart for the master role, as the node role
ensures it is up and running. Notify the master static pod waiters for
apiserver, scheduler, and controller-manager instead.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
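A sketch of the templated resolv.conf content, assuming ndots and dns_domain vars; the nameserver value is illustrative:
```
nameserver 10.233.0.3
search default.svc.{{ dns_domain }} svc.{{ dns_domain }} {{ dns_domain }}
options ndots:{{ ndots }}
```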
"else omit" is causing problems in this expression. Replacing
it with more strict "inventory_hostname" fixes the issue and
handles `download_run_once` as expected.
Closes issue #514
- Update docs and a drawing to clarify DNS setup.
- Change the order of nameserver placement to match
changes in https://github.com/kubespray/kargo/pull/501
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Change additional dnsmasq opts:
- Adjust the caching size and TTL
- Disable reading resolv.conf to not create loops
- Change dnsPolicy to default (similarly to kubedns's dnsmasq); the
ClusterFirst policy should not be used, to not create loops
- Disable caching of negative NXDOMAIN replies
- Make its very installation an optional step (enabled by default).
If you don't want more than 3 DNS servers, including 1 for K8s, disable
it.
- Add docs and a drawing to clarify DNS setup.
- Fix stdout logs for dnsmasq/kubedns app configs
- Add missing notifies for the resolvconf -u handler
- Fix idempotency of resolvconf head file changes
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
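A sketch of the resulting dnsmasq options written from a task; the cache size, TTL, and file path are illustrative, not the role's exact values:
```
- name: Write additional dnsmasq opts (sketch)
  copy:
    dest: /etc/dnsmasq.d/01-kube-dns.conf
    content: |
      cache-size=1000
      max-cache-ttl=10
      no-resolv
      no-negcache
```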
* Add the retry_stagger var to tweak push and retry time strategies.
* Add large deployments related docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Move version/repo vars to download role.
Add container to download params, which overrides url/source_url,
if enabled.
Fix networking plugins download depending on kube_network_plugin.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Removed api-version from kube.py because it is deprecated.
Updated both copies of kube.py because the dnsmasq one is actually used.
Fixed the name back to kubedns for checking its resource.
Moved the resolvconf update to the network restart handler to
ensure the changes are applied to /etc/resolv.conf.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Shorten deployment time with:
- Remove redundant roles if duplicated by a dependency and vice versa
- When a member of k8s-cluster, always install docker as a dependency
of the etcd role and drop the docker role from cluster.yaml.
- Drop the etcd and node role dependencies from the master role as they are
covered by the node role in the k8s-cluster group as well. Copy defaults
for master from the node role.
- Decouple master, node, secrets roles handlers and vars to be used w/o
cross references.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Ensure additional nameserver/search entries, if defined as vars.
* Don't back up changed dhclient hooks, as the backups would be
executed by dhclient as well, which is not what we want.
* For the debian OS family only:
- Rename the nodnsupdate hook so that the resolvconf hook is always
sourced before it.
- Ensure dhclient is restarted via a network restart to apply the
nodnsupdate hook.
* For the rhel OS family the fix is TBD; it doesn't work the same way.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
If resolvconf was installed and then removed, the file
/etc/resolvconf/resolv.conf.d/head remains in the filesystem.
- Change the discovery of the 'resolvconf' executable to check whether
it can be located with the 'which resolvconf' command.
Hyperkube from CoreOS now ships with all binaries required for
calico and flannel (but not weave). This simplifies deployment for
some network plugin scenarios by not downloading CNI images.
TODO: Optionally disable downloading calico to /opt/cni/bin
Creating the unit using default settings early on,
and then changing it during the network_plugin section,
leads to too many docker restarts and duplicated code.
Reversed Wants= dependence on docker.service so it does not
restart docker when reloading systemd
Consolidated all docker restart handlers.
* Add for docker system units:
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process.
* Add missed DOCKER_OPTIONS for calico/weave docker systemd unit.
* Change Requires= to the less strict and non-faily Wants=, and add missing
Wants= for After=.
* Align Wants=/After= in a way that if Wants=foo, After= has foo as well.
* Make wants/after on docker.service ask for docker.socket as well.
* Move "docker rm -f" commands from ExecStartPre= to ExecStopPost=
hooks to ensure non-destructive start attempts issued by Wants=.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
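A unit-file sketch combining the points above for a container-wrapper service; the container name is a placeholder:
```
[Unit]
Wants=docker.socket
After=docker.socket

[Service]
ExecStart=/usr/bin/docker run --name=calico-node $DOCKER_OPTIONS ...
ExecStopPost=-/usr/bin/docker rm -f calico-node
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process
```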
etcd facts are generated in kubernetes/preinstall, so etcd nodes need
to be evaluated first before the rest of the deployment.
Moved several directory facts from kubernetes/node to
kubernetes/preinstall because they are not backward dependent.
* Add HA docs for the API server.
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Use facts for kube_apiserver to not repeat code and to enable LB endpoint use.
* Use the /healthz check for the wait-for-apiserver tasks.
* Use a single endpoint for kubelet instead of the list of apiservers.
* Specify kube_apiserver_count for the HA layout.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
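An inventory sketch of the LB vars discussed above; the address and count are placeholders:
```
loadbalancer_apiserver:
  address: 10.1.0.100
  port: 443
kube_apiserver_count: 3
```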
Nearly the last stage of moving all components to containers.
Kubectl will be called from the hyperkube image.
Remaining tasks:
* Move kube_version variable to kubernetes/preinstall
* Drop placeholder download.nothing requirement
kubelet via docker
kube-apiserver as a static pod
Fixed the etcd service start to be more tolerant of slow starts.
Workaround for kube_version to stay in the download role, but not
download any files, by creating a new "nothing" download entry.
Adds new boolean configuration variable for calico network plugin
`ipip`. When it's enabled calico pool is created with '--ipip'
option (IP-over-IP encapsulation across hosts).
Also refactor pool creation tasks to simplify logic and make tasks
more readable.
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Add loadbalancer_apiserver_localhost (default false). If enabled, override
the external LB and expect localhost:443/8080 to be the new internal-only
frontends.
* Add kube_apiserver_multiaccess to ignore loadbalancers and make clients
access the apiservers as a comma-separated list of access_ip/ip/ansible ip
(the default mode). When disabled, allow clients to use the given
loadbalancers.
* Define the connections security mode for kube controllers, schedulers, and
proxies. It is insecure by default, which is the current deployment choice.
* Rework the groups['kube-master'][0] hardcode defining the apiserver
endpoints.
* Improve grouping of vars and add facts for kube_apiserver.
* Define kube_apiserver_insecure_bind_address as a fact, and add more
facts for ease of use.
Improved the docker reload command to wait for etcd to be
up before proceeding. Switched reload to run restart,
because it can't reload if it is not guaranteed to be
in a running state.
Move set_facts to the preinstall scope, so every role
may see it. For example, network plugins to see the etcd_endpoint.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Enforce the etcd-proxy role for k8s-cluster group members. This
provides an HA layout for all of the k8s cluster internal clients.
* Proxies run on each node in the group as separate etcd
instances in readwrite proxy mode, listening on the given endpoint,
which is either access_ip:2379 or localhost:2379.
* The notion of 'kube_etcd_multiaccess' is: ignore endpoints and
loadbalancers and use the etcd members' IPs as a comma-separated
list. Otherwise, clients shall use the local endpoint provided by the
etcd-proxy instances on each etcd node. Networking plugins always
use that access mode.
* Fix the apiserver's etcd servers args to use the etcd_access_endpoint.
* Fix the networking plugins flannel/calico to use the etcd_endpoint.
* Fix the name env var for non-masters to be set as well.
* Fix etcd_client_url, which was not used anywhere, and the etcd_* facts
whose evaluation was duplicated in a few places.
* Define proxy modes only in the env file, if not a master. Drop
the automatic proxy mode decisions for etcd nodes in init/unit scripts.
* Use Wants= instead of Requires= as "This is the recommended way to
hook start-up of one unit to the start-up of another unit".
* Make apiserver/calico Wants= etcd-proxy to keep it always up.
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
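A sketch of the per-node proxy invocation described above; member names and IPs are placeholders:
```
etcd --proxy readwrite \
  --listen-client-urls http://127.0.0.1:2379 \
  --initial-cluster etcd1=http://10.0.0.1:2380,etcd2=http://10.0.0.2:2380
```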
Running etcd in Docker reduces the number of individual file
downloads and services running on the host.
Note: etcd container v3.0.1 moves bindir to /usr/local/bin
Fixes: #298
* Set ETCD_INITIAL_CLUSTER_STATE from `new` to `existing`,
because the parameter `new` makes sense only at the cluster assembly
stage.
* If the cluster exists and the current node is not a part
of the cluster, add it with the command `etcdctl member add <name> <url>`.
Closes kubespray/kargo/#270
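A sketch of the join sequence for a node that is not yet a member; the member name and URLs are placeholders:
```
etcdctl member add etcd3 http://10.0.0.3:2380
# then start the new member with:
ETCD_INITIAL_CLUSTER_STATE=existing etcd ...
```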
This should make things a little more composable:
by making these roles meta roles that perform no
actions by default, we allow each role to own its own
resources.
Kubernetes API server has an option:
```
--advertise-address=<nil>: The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
```
kargo does not set --bind-address, thus it binds to eth0; in vagrant and similar
environments this causes issues because nodes cannot talk to each other over eth0.
This sets `--advertise-address` to `ip` if it's set; otherwise the default behavior
is preserved by using `ansible_default_ipv4.address`.
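A sketch of the resulting flag templating, using the vars named above:
```
--advertise-address={{ ip | default(ansible_default_ipv4.address) }}
```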
Using a shared folder can cause race conditions for the download
role, as it tries to download files on all the nodes to the same
shared path. This adds a flag to run the tasks in the download
role on just one node.
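A sketch of how the flag might gate a download task; the task body is a hypothetical stand-in:
```
- name: Download | container image (sketch)
  command: /usr/bin/docker pull {{ image }}
  when: not download_run_once|bool or
        inventory_hostname == groups['kube-master'][0]
```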
The check_certs task "Check_certs | Set 'sync_certs' to true" was failing
due to the dict not existing; this sets defaults that allow the
correct behavior of the conditionals.
This allows you to simply run `vagrant up` to get a 3 node HA cluster.
* Creates a dynamic inventory and uses inventory/group_vars/all.yml.
* Commented out lines in inventory.example so that ansible doesn't try to use it.
* Added requirements.txt to give an easy way to install ansible/ipaddr.
* Added gitignore files to stop attempts to save unwanted files.
* Changed `Check if kube-system exists` to `failed_when: false` instead of
`ignore_errors`.
On CoreOS where there is no package management, perform zero-trip
loops instead of throwing an exception for iterating over a member
of an undefined variable.
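A sketch of the zero-trip pattern; required_pkgs is a hypothetical variable that stays undefined on CoreOS:
```
- name: Preinstall | install packages (sketch)
  action: "{{ ansible_pkg_mgr }} name={{ item }} state=present"
  with_items: "{{ required_pkgs | default([]) }}"
```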
When kubespray is deployed on OpenStack, the kube-controller-manager is now aware of the cluster and can create new cinder volumes automatically if the PersistentVolumeClaims are annotated accordingly.
Note that this is an alpha feature of kubernetes 1.2
Currently kubespray does not install kubernetes in a way that allows cinder volumes to be used. This commit provides the necessary cloud configuration file and configures kubelet and kube-apiserver to use it.
A freshly-installed CoreOS system does not always have a hostname configured.
This causes problems for etcd and BGP mesh configuration for Calico.
Assign the Ansible inventory name as hostname as part of CoreOS bootstrap,
if the hostname is the default ("localhost").
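A sketch of the bootstrap task, assuming a shell-based hostname assignment:
```
- name: Bootstrap | assign inventory name as hostname (sketch)
  shell: hostnamectl set-hostname {{ inventory_hostname }}
  when: ansible_hostname == 'localhost'
```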