* Bump tag for upgrade CI, fix netchecker upgrade
netchecker-server was changed from pod to deployment, so
we need an upgrade hook for it.
CI now uses v2.1.1 as a basis for upgrade.
* Fix upgrades for certs from non-rbac to rbac
This does not address per-node certs and scheduler/proxy/controller-manager
component certs which are now required. This should be handled in a
follow-up patch.
Make fluentd.conf a ConfigMap so the configuration can be changed.
Change the elasticsearch rc to a deployment.
If the previous elasticsearch was installed as an rc, it must be deleted first.
* Make yum repos used for installing docker rpms configurable
* TasksMax is only supported in systemd version >= 226
* Change to systemd file should restart docker
Before restarting docker, instruct it to kill running
containers when it restarts.
Needs a second docker restart after we restore the original
behavior, otherwise the next time docker is restarted by
an operator, it will unexpectedly bring down all running
containers.
In atomic, containers are left running when docker is restarted.
When docker is restarted after the flannel config is put in place,
the docker0 interface isn't re-IPed because docker sees the running
containers and won't update the previous config.
This patch kills all the running containers after docker is stopped.
We can't simply `docker stop` the running containers, as they respawn
before we've got a chance to stop the docker daemon, so we need to
use runc to do this after dockerd is stopped.
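A minimal sketch of that cleanup as an Ansible shell task, assuming runc's default root matches the docker-managed containers (task name and flags here are illustrative, not the exact ones from this patch):
```
- name: Forcefully remove containers that runc still tracks after dockerd is stopped
  shell: "for id in $(runc list -q); do runc delete --force \"$id\"; done"
  ignore_errors: true
```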
Replace 'netcheck_tag' with 'netcheck_version' and add additional
'netcheck_server_tag' and 'netcheck_agent_tag' config options to
provide ability to use different tags for server and agent
containers.
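For illustration, the resulting variables might look like this (values are placeholders; the role's defaults may differ):
```
netcheck_version: "v1.0"
netcheck_server_tag: "{{ netcheck_version }}"
netcheck_agent_tag: "{{ netcheck_version }}"
```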
When a VPC is used, external DNS might not be available. This patch changes the
behavior to use the metadata service instead of external DNS when
upstream_dns_servers is not specified.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
According to the code, the apiserver, scheduler, controller-manager and proxy do
not resolve the objects they create, so it is not harmful to change their DNS
policy to use an external resolver.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
A Pod object is not rescheduled by Kubernetes. This means that if the node
running netchecker-server goes down, netchecker-server won't be scheduled
anywhere else. This commit changes netchecker-server to a Deployment, so
netchecker-server will be scheduled on other nodes in case of failures.
In Kubernetes 1.6, ClusterFirstWithHostNet was added as an option. With it,
kubelet generates resolv.conf based on its own resolv.conf. However, this
doesn't carry over 'options', so a proper solution requires some investigation.
This patch sets the same resolv.conf for kubelet as the host uses.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
- Run docker run from script rather than directly from systemd target
- Refactoring styling/templates
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
A non-breaking space is the 0xc2 0xa0 byte sequence in UTF-8.
To find one:
$ git grep -I -P '\xc2\xa0'
To replace with regular space:
$ git grep -l -I -P '\xc2\xa0' | xargs sed -i 's/\xc2\xa0/ /g'
This commit doesn't include changes that will overlap with commit f1c59a91a1.
The docker-network environment file masks the new values
put into /etc/systemd/system/docker.service.d/flannel-options.conf
that renumber docker0 to work correctly with flannel.
etcd is a crucial part of a kubernetes cluster. Ansible restarts etcd on
reconfiguration. A backup helps the operator restore the cluster manually in
case of any issues.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
By default Calico CNI does not create any network access policies
or profiles if 'policy' is enabled in the CNI config, and without any
policies/profiles network access to/from pods is blocked.
K8s-related policies are created by calico-policy-controller in
that case, so we need to start it as soon as possible, before any
real workloads.
This patch also fixes the kube-api port in the calico-policy-controller
yaml template.
Closes #1132
It is now possible to deactivate selected authentication methods
(basic auth, token auth) inside the cluster by adding or
removing the required arguments to the Kube API Server and generating
the secrets accordingly.
The x509 authentication is currently not optional because disabling it
would affect the kubectl clients deployed on the master nodes.
Default backend is now etcd3 (was etcd2).
The migration process consists of the following steps (a rough sketch of the migration command follows the list):
* check if migration is necessary
* stop etcd on first etcd server
* run migration script
* start etcd on first etcd server
* stop kube-apiserver until configuration is updated
* update kube-apiserver
* purge old etcdv2 data
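A rough sketch of the migration step itself, assuming the v3 etcdctl `migrate` subcommand and a default data dir (both are assumptions, not the exact task from this change):
```
- name: Migrate etcd v2 keys into the v3 store on the first etcd node
  command: etcdctl migrate --data-dir=/var/lib/etcd
  environment:
    ETCDCTL_API: "3"
  run_once: true
```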
Issue #1125. Make RBAC authorization plugin work out of the box.
"When bootstrapping, superuser credentials should include the system:masters group, for example by creating a client cert with /O=system:masters. This gives those credentials full access to the API and allows an admin to then set up bindings for other users."
To use OpenID Connect Authentication, besides deploying an OpenID Connect
Identity Provider it is necessary to pass additional arguments to the Kube API Server.
These required arguments were added to the kube-apiserver manifest.
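The arguments in question look roughly like the following manifest fragment (issuer URL and client ID are placeholders):
```
- --oidc-issuer-url=https://accounts.example.com
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
```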
- Only have ubuntu to test on
- fedora and redhat are placeholders/guesses
- the "old" package repositories seem to have the "new" CE version which is `1.13.1` based
- `docker-ce` looks like it is named as a backported `docker-engine` package in some
places
- Did not change the `defaults` version anywhere, so should work as before
- Did not point to new package repositories, as existing ones have the new packages.
By default kubedns and dnsmasq scale when installed.
Dnsmasq is no longer a daemonset; it is now a deployment.
Kubedns is no longer a replication controller; it is now a deployment.
The minimum number of replicas is two (to enable rolling updates).
Reduced memory requirements for dnsmasq and kubedns.
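As a sketch, the tunables might be expressed as variables like these (names and values are illustrative; the role's defaults may differ):
```
dns_replicas: 2
dns_memory_limit: 170Mi
dns_cpu_requests: 70m
```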
Until now it was not possible to add an API load balancer
without a static IP address, but certain load balancers
like the AWS Elastic Load Balancer don't have a fixed IP address.
With this commit it is possible to add these kinds of load balancers
to the Kargo deployment.
The default version of Docker was switched to 1.13 in #1059. This
change also bumped ubuntu from installing docker-engine 1.13.0 to
1.13.1. This PR updates os families which had 1.13 defined, but
were using 1.13.0.
The impetus for this change is an issue running tiller 1.2.3 on
docker 1.13.0. See discussion [1][2].
[1] https://github.com/kubernetes/helm/issues/1838
[2] https://github.com/kubernetes-incubator/kargo/pull/1100
Updates based on feedback
Simplify checks for file existence
remove invalid char
Review feedback. Use regular systemd file.
Add template for docker systemd atomic
By default Calico blocks traffic from endpoints
to the host itself by using an iptables DROP
action. It could lead to a situation when service
has one alive endpoint, but pods which run on
the same node can not access it. Changed the action
to RETURN.
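One common way to express this for the calico-node (felix) container is an environment variable; treat the exact variable name here as an assumption:
```
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
  value: "RETURN"
```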
The Kubernetes project is about to make etcdv3 the default storage engine in
1.6. This patch allows specifying a particular backend for
kube-apiserver. Users may force the option to etcdv3 for new environments.
At the same time, if an environment uses v2 it will continue using it
until the user decides to upgrade to v3.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
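Sketch of how the choice surfaces (the variable name is illustrative; the flag is kube-apiserver's --storage-backend):
```
kube_apiserver_storage_backend: etcd3   # rendered as --storage-backend=etcd3
```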
The operator can specify any port for kube-api (6443 by default). This helps in
cases where some pods, such as Ingress, require 443 exclusively.
Closes: 820
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
When an apiserver_loadbalancer_domain_name is added to the openssl.conf,
the counter is not increased correctly. This didn't seem to have an
effect in the current kargo version.
* Leave all.yml to keep only optional vars
* Store groups' specific vars by existing group names
* Fix optional vars cast as mandatory (add default())
* Fix missing defaults for an optional IP var
* Relink group_vars for terraform to reflect changes
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
Sometimes, a sysadmin might outright delete the SELinux rpms and
delete the configuration. This causes the selinux module to fail
with
```
IOError: [Errno 2] No such file or directory: '/etc/selinux/config'\n",
"module_stdout": "", "msg": "MODULE FAILURE"}
```
This simply checks that /etc/selinux/config exists before we try
to set it to permissive.
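A minimal sketch of the guard, assuming a stat/when pattern around the selinux module:
```
- name: Check if /etc/selinux/config exists
  stat:
    path: /etc/selinux/config
  register: selinux_config

- name: Set SELinux to permissive
  selinux:
    policy: targeted
    state: permissive
  when: selinux_config.stat.exists
```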
Update from feedback
New deploy modes: scale, ha-scale, separate-scale
Creates 200 fake hosts for deployment with fake hostvars.
Useful for testing certificate generation and propagation to other
master nodes.
Updated test cases descriptions.
Migrate older inline= syntax to pure yml syntax for module args to be consistent with most of the rest of the tasks.
Clean up some spacing in various files.
Rename some files named yaml to yml for consistency.
The Ansible playbook fails when tags are limited to "facts,etcd" or to
"facts". This patch allows running ansible-playbook to gather only the facts
that don't require the calico/flannel/weave components to be verified. This
allows running ansible with 'facts,bootstrap-os' or just 'facts' to
gather facts that don't require specific components.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
Kubelet is responsible for creating symlinks from /var/lib/docker to /var/log
to make the fluentd logging collector work.
However, without using the host's /var/log those links are invisible to fluentd.
This is done in the rkt configuration too.
Based on #718 introduced by rsmitty.
Includes all roles and all options to support deployment of
new hosts in case they were added to inventory.
Main difference here is that master role is evaluated first
so that master components get upgraded first.
Fixes #694
- Refactor 'Check if bootstrap is needed' as an ansible loop. This makes it
easy to add new elements without refactoring. Add pip to the list.
- Refactor the 'Install python 2.x' task to run once if any of the rc
codes != 0. need_bootstrap is actually an array of hashes, so map is used to
get a single array of rc statuses; the statuses are sorted, the last element is
taken and converted to bool, so any non-zero status triggers the bootstrap.
Closes: #961
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
"shell" step doesn't support check mode, which currently leads to failures,
when Ansible is being run in check mode (because Ansible doesn't run command,
assuming that command might have effect, and no "rc" or "output" is registered).
Setting "check_mode: no" allows to run those "shell" commands in check mode
(which is safe, because those shell commands doesn't have side effects).
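The pattern looks like this (the command itself is a stand-in):
```
- name: Probe something with a side-effect-free shell command
  shell: some_read_only_command   # placeholder
  register: probe_result
  check_mode: no
  changed_when: false
```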
always_run was deprecated in Ansible 2.2 and will be removed in 2.4.
The ansible logs contain "[DEPRECATION WARNING]: always_run is deprecated.
Use check_mode = no instead". This patch fixes the deprecation warning.
Since the systemd kubelet.service references {{ ssl_ca_dirs }}, the fact should be
gathered before kubelet.service is written.
Closes: #1007
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
- Exclude kubelet CPU/RAM (kube-reserved) from the cgroup. It decreases the
chance of overcommitment.
- Add the possibility to modify the kubelet node-status-update-frequency.
- Add the possibility to configure node-monitor-grace-period,
node-monitor-period and pod-eviction-timeout for the Kubernetes controller
manager (an illustrative set of the related settings follows below).
- Add Kubernetes reliability documentation with recommendations for
various scenarios.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
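An illustrative set of the related settings; the variable names are hypothetical, with the component flags they map to shown as comments:
```
kubelet_status_update_frequency: 10s             # kubelet --node-status-update-frequency
kube_controller_node_monitor_period: 5s          # controller-manager --node-monitor-period
kube_controller_node_monitor_grace_period: 40s   # controller-manager --node-monitor-grace-period
kube_controller_pod_eviction_timeout: 5m0s       # controller-manager --pod-eviction-timeout
```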
kubelet lost the ability to load kernel modules. This
puts that back by adding the lib/modules mount to kubelet.
The new variable kubelet_load_modules can be set to true
to enable this item. It is OFF by default.
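Sketched usage (the rendered mount is shown as a comment and assumes kubelet runs in a container):
```
kubelet_load_modules: true
# when enabled, the kubelet container additionally mounts:
#   -v /lib/modules:/lib/modules:ro
```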
Daemonsets cannot be simply upgraded through a single API call,
regardless of any kubectl documentation. The resource must be
purged and then recreated in order to make any changes.
Also make no-resolv unconditional again. Otherwise, we may end up in
a resolver loop. The resolver loop was the cause of the parallel
queries piling up.
Reduce election timeout to 5000ms (was 10000ms)
Raise heartbeat interval to 250ms (was 100ms)
Remove etcd cpu share (was 300)
Make etcd_cpu_limit and etcd_memory_limit optional.
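For illustration, the new values map to etcd's standard environment settings:
```
ETCD_ELECTION_TIMEOUT: "5000"
ETCD_HEARTBEAT_INTERVAL: "250"
```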
Netchecker has been rewritten in Go with some new args instead of
env variables. Also, netchecker-server no longer requires the kubectl
container. Updating the playbooks accordingly.
- Remove weave CPU limits from .gitlab-ci.yml. Closes: #975
- Fix weave version in documentation
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
- Docker 1.12 and later don't need the nsenter hack. This patch removes
it. Also, it bumps the minimal version to 1.12.
Closes #776
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
- Set recommended CPU settings
- Cleans up upgrade to weave 1.82. The original WeaveWorks
daemonset definition uses weave-net name.
- Limit DS creation to master
- Combined 2 tasks into one with better condition
When DNSMasq is configured to read its settings
from a folder (the '-7' or '--conf-dir' option), it only
checks that the directory exists and doesn't fail if
it's empty. This could lead to a situation where DNSMasq
is running and handles requests but is not properly
configured, so some queries can't be resolved.
For consistency with kubernetes services we should use the same
hostname for nodes, which is 'ansible_hostname'.
Also fixing a missed 'kube-node' in templates; Calico is installed
on 'k8s-cluster' roles, not only 'kube-node'.
Calico-rr is broken for deployments with separate k8s-master and
k8s-node roles. In order to fix it we should peer k8s-cluster
nodes with calico-rr, not just k8s-node. The same for peering
with routers.
Closes #925
ndots creates overhead, as every pod creates 5 concurrent connections
that are forwarded to SkyDNS. Under some circumstances dnsmasq may
refuse to forward traffic with "Maximum number of concurrent DNS
queries reached" in the logs.
This patch allows configuring the number of concurrently forwarded DNS
queries ("dns-forward-max") as well as "cache-size", leaving the default
values as they were before.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
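A sketch of the resulting variables (names and values are illustrative; they map to dnsmasq's --dns-forward-max and --cache-size options):
```
dns_forward_max: 150
dns_cache_size: 150
```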
systemctl daemon-reload should be run when a task modifies/creates
the systemd unit for etcd; otherwise etcd won't be able to start.
Closes #892
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
Allow the playbook to be run by limit on each node without regard for order.
The changes make sure that all of the directories needed to do
certificate management are on the master[0] or etcd[0] node regardless
of when the playbook gets run on each node. This allows for separate
ansible playbook runs in parallel that don't have to be synchronized.
The openssl tools will fail to create signing requests when
the CN is too long. This is mainly a problem when FQDNs are used
in the inventory file.
This will truncate the hostname for the CN field only at the
first dot, which should handle the issue for most cases.
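A minimal sketch of the truncation as a Jinja2 expression in the openssl config template (illustrative only):
```
# openssl.conf.j2 fragment
CN = {{ inventory_hostname.split('.')[0] }}
```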
Also remove the check for != "RedHat" when removing the dhclient hook,
as this also has to be done on other distros. Instead, check whether
dhclienthookfile is defined.
The tasks fail because selinux prevents the ip-forwarding setting.
Moving the tasks around addresses two issues: it makes sure that
the correct python tools are in place before adjusting selinux,
and that ip forwarding is toggled after the selinux adjustments.
"etcd_node_cert_data" variable is undefinded for "calico-rr" role.
This patch adds "calico-rr" nodes to task where "etcd_node_cert_data"
variable is registered.
Change version for calico images to v1.0.0. Also bump versions for
CNI and policy controller.
Also removing images repo and tag duplication from netchecker role
* Add a restart for the weave service unit
* Reuse docker_bin_dir everywhere
* Limit systemd-managed docker containers by CPU/RAM. Do not configure native
systemd limits due to the lack of consensus in the kernel community;
it requires out-of-tree kernel patches.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
The current design expects users to define at least one
nameserver in the nameservers var to back up the host OS DNS config
when the K8s cluster DNS service IP is not available and hosts
still have to resolve external or intranet FQDNs.
Fix undefined nameservers to fall back to the default_resolver.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Add BGP route reflectors support in order to optimize BGP topology
for deployments with Calico network plugin.
Also bump version of calico/ctl for some bug fixes.
Also place in global vars and do not repeat the kube_*_config_dir
and kube_namespace vars for better code maintainability and UX.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Do not repeat options and nameservers in the dhclient hooks.
Do not prepend nameservers for dhclient but supersede them and fall back
to the upstream_dns_resolvers, then the default_resolver. Fixes the order of
nameserver placement: the cluster DNS IP always goes first.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
The Jinja2 filter 'reverse' returned an iterator instead of a list,
causing the umount task to fail.
Instead of using the reverse filter, we use 'tac' to reverse the output
of the previous task.
* For Debian/RedHat OS families (with NetworkManager/dhclient/resolvconf
optionally enabled) prepend /etc/resolv.conf with required nameservers,
options, and supersede domain and search domains via the dhclient/resolvconf
hooks.
* Drop (z)nodnsupdate dhclient hook and re-implement it to complement the
resolvconf -u command, which is distro/cloud provider specific.
Update docs as well.
* Enable network restart to apply and persist changes and simplify handlers
to rely on network restart only. This fixes DNS resolve for hostnet K8s
pods for Red Hat OS family. Skip network restart for canal/calico plugins,
unless https://github.com/projectcalico/felix/issues/1185 fixed.
* Replace lineinfile plus with_items with block mode as it's faster.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
In order to enable offline/intranet installation cases:
* Move DNS/resolvconf configuration to preinstall role. Remove
skip_dnsmasq_k8s var as not needed anymore.
* Preconfigure the DNS stack early, which may be needed when downloading
artifacts from intranet repositories. Do not configure
K8s DNS resolvers in the hosts' /etc/resolv.conf that early (as they may
not exist yet).
* Reconfigure K8s DNS resolvers for hosts only after kubedns/dnsmasq
was set up and before K8s apps to be created.
* Move docker install task to early stage as well and unbind it from the
etcd role's specific install path. Fix external flannel dependency on
docker role handlers. Also fix the docker restart handlers' steps
ordering to match the expected sequence (the socket then the service).
* Add default resolver fact, which is
the cloud provider specific and remove hardcoded GCE resolver.
* Reduce the default ndots for hosts' /etc/resolv.conf to 2. Multiple search
domains combined with high ndots values lead to poor performance of the
DNS stack and make ansible workers fail very often with the
"Timeout (12s) waiting for privilege escalation prompt:" error.
* Update docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Add an upload tag to allow users to exclude distributing images across nodes
when running with the download tag set.
Add related tags and update the docs as well.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
This will allow to use '-e docker_version=1.12' in ansible playbook
execution. It's also backward-compatible and will work with floating
docker_version format in custom yaml files.
Closes #702
The variable etcd_access_addresses is used to determine
how to address communication from other roles to
the etcd cluster.
It was set to the address that ansible uses to
connect to instances ({{ item }}) and not to the
variable ip_access, which had already been created and could
already be overridden through the access_ip variable.
This change allows ansible to connect to a machine using
a different address than the one used to access etcd.
Override the GCE sysctl in /etc/sysctl.d/99-sysctl.conf instead of
/etc/sysctl.d/11-gce-network-security.conf. The latter is recreated
by GCE, e.g. if the gcloud CLI invokes some security-related changes,
thus losing customizations we want to be persistent.
Update cloud providers firewall requirements in calico docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
When running legacy calicoctl we do not specify calico hostname in
calico-node container thus we should not specify it in CNI config.
Also move 'legacy_calicoctl' set_fact task to the top.
In new `calicoctl` version nodes peering with routers is broken.
We need to use predictable node names for calico-node and the
same names in calico `bgpPeer` resources and CNI.
Fixes #655.
This is a temporary solution for long-polling idle connections to the
apiserver. It makes Nginx not cut them for the duration of the expected
timeout. It also makes Nginx extremely slow to realize that there
is some issue with connectivity to the apiserver, so it might not be
a perfect permanent solution.
* Add an option to deploy K8s app to test e2e network connectivity
and cluster DNS resolve via Kubedns for nethost/simple pods
(defaults to false).
* Parametrize existing k8s apps templates with kube_namespace and
kube_config_dir instead of hardcode.
* For CoreOS, ensure nameservers from inventory to be put in the
first place to allow hostnet pods connectivity via short names
or FQDN and hostnet agents to pass as well, if netchecker
deployed.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Add dns_replicas, dns_memory/cpu_limit/requests vars for
dns related apps.
* When kube_log_level=4, log dnsmasq queries as well.
* Add log level control for skydns (part of kubedns app).
* Add limits/requests vars for dnsmasq (part of kubedns app) and
dnsmasq daemon set.
* Drop string defaults for kube_log_level as it is int and
is defined in the global vars as well.
* Add docs
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
When download_run_once with download_localhost is used, docker is
expected to be running on the delegate localhost. That may be not
the case for a non localhost delegate, which is the kube-master
otherwise. Then the dnsmasq role, had it been invoked early before
deployment starts, would fail because of the missing docker dependency.
* Fix that dependency on docker and do not pre download dnsmasq image
for the dnsmasq role, if download_localhost is disabled.
* Remove become: false for docker CLI invocation because that's not
the common pattern to allow users to access the docker CLI w/o sudo.
* Fix opt bin path hack for localhost delegate to ignore errors when
it fails with "sudo password required" otherwise.
* Describe download_run_once with download_localhost use case in docs
as well.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Use cloud-init config to replace /etc/resolv.conf with the
content for kubelet to properly configure hostnet pods.
Do not use systemd-resolved yet, see
https://coreos.com/os/docs/latest/configuring-dns.html
"Only nss-aware applications can take advantage of the
systemd-resolved cache. Notably, this means that statically
linked Go programs and programs running within Docker/rkt
will use /etc/resolv.conf only, and will not use the
systemd-resolve cache."
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
W/o this patch, the "Download containers" task may be skipped
when running on the delegate node due to a wrong "when" condition.
Then it fails to upload the nginx image to the nodes as well.
Fix download nginx dependency so it always can be pushed to
nodes when download_run_once is enabled.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
When setting permissions for the containers download/upload dir we're
using `ansible_ssh_user`. But if the playbook is executed without the
user being explicitly set, `ansible_ssh_user` may be undefined.
In such situations dir ownership will default to `ansible_user_id`.
Closes: #644
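A minimal sketch of the fallback (the directory path variable is a placeholder):
```
- name: Ensure the download dir is owned by the connecting user
  file:
    path: "{{ local_release_dir }}"   # placeholder path variable
    state: directory
    owner: "{{ ansible_ssh_user | default(ansible_user_id) }}"
    recurse: yes
```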
According to http://kubernetes.io/docs/user-guide/images/ :
By default, the kubelet will try to pull each image from the
specified registry. However, if the imagePullPolicy property
of the container is set to IfNotPresent or Never, then a local
image is used (preferentially or exclusively, respectively).
Use the IfNotPresent value to allow images prepared by the download
role dependencies to be effectively used by kubelet without pull
errors that leave apps stuck in PullBackOff/Error state
even when the images exist on the local host.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
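Illustrative template fragment (container name and image variables are placeholders, not the full template):
```
containers:
  - name: example-app
    image: "{{ example_image_repo }}:{{ example_image_tag }}"
    imagePullPolicy: IfNotPresent
```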
Pre-download all required container images as roles' deps.
Drop the unused flannel-server-helper image pre-download.
Improve the pods creation post-install test to use the pre-downloaded busybox.
Improve the logs collection script with kubectl describe; fix the sudo/etcd/weave
commands.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Fix unreliable waiting for the apiserver to become ready.
Remove logfile mount to align with the rest of static pods
and because containers shall write logs to stdout only.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
In the etcd handler, the reload etcd action
was called after ansible waits for etcd to be
up. This means that the health checks, which are
called immediately after, fail (resulting in the etcd
role always failing and never finishing).
This patch changes the order to move the 'wait for etcd
up' resource after the 'reload etcd' resource, ensuring that
the service is up before the health check is called.
* Add download_localhost for the download_run_once mode, which
uses the ansible host (a travis node for the CI case) to store and
distribute containers across cluster nodes in the inventory.
Defaults to false.
* Rework the download_run_once logic to fix idempotency of uploading
containers.
* For Travis CI, enable docker image caching and run Travis
workers with sudo enabled as a dependency.
* For Travis CI, deploy with download_localhost and download_run_once
enabled to shorten the dev path drastically.
* Add compression for saved container images. Defaults to 'best'.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Aleksandr Didenko <adidenko@mirantis.com>
Add one more step (task) to the containers download/upload sequence -
copy the saved .tar containers to the ansible host (delegate_to: localhost),
then upload the images to the target nodes. It uses the synchronize module, so
if the ansible host (localhost) is the same host as kube-master[0] then the
new task causes no issues and the copy-to-localhost step is
basically skipped.
- Move CNI configuration from `kubernetes/node` role to
`network_plugin/canal`
- Create SSL dir for Canal and symlink etcd SSL files
- Add needed options to `canal-config` configmap
- Run flannel and calico-node containers with proper configuration
Since version 'v1.0.0-beta' calicoctl is written
in Go and its API differs from old Python based
utility. Added support of both old and new version
of the utility.
- Drop debugs from collect-info playbook
- Drop sudo from collect-info step and add target dir var (required for travis jobs)
- Label all k8s apps, including static manifests
- Add logs for K8s apps to be collected as well
- Fix upload to GCS as a public-read tarball
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
'etcd_cert_dir' variable is missing from 'kubernetes-apps/ansible'
role which breaks Calico policy controller deployment.
Also fixing calico-policy-controller.yml.
Related bug: https://github.com/ansible/ansible/issues/15405
Uses tar and register because synchronize module cannot sudo on the
remote side correctly and copy is too slow.
This patch dramatically cuts down the number of tasks to process
for cert synchronization.
* Don't push containers if not changed
* Do preinstall role only once and redistribute defaults to
corresponding roles
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Change the kubelet --hostname-override flag to use the ansible_hostname variable which should be more consistent with the value required by cloud providers
Add ansible_hostname alias to /etc/hosts when it is different from inventory_hostname to overcome node name limitations see https://github.com/kubernetes/kubernetes/issues/22770
Signed-off-by: Chad Swenson <chadswen@gmail.com>
test to change the machine type
Revert "test to change the machine type"
This reverts commit 7a91f1b5405a39bee6cb91940b09a0b0f9d3aee1.
use google dns server when no upstream dns are defined
comment upstream_dns_servers
update documentation
remove deprecated kubelet flags
Revert "remove deprecated kubelet flags"
This reverts commit 21e3b893c896d0291c36a07d0414f4cb88b8d8ac.
The requirements for the network policy feature are described here [1]. In
order to enable it, appropriate configuration must be provided to the CNI
plugin and the Calico policy controller must be set up. Besides that,
corresponding extensions need to be enabled in the k8s API.
Now, to turn the feature on, the user can define the `enable_network_policy`
customization variable for Ansible.
[1] http://kubernetes.io/docs/user-guide/networkpolicies/
Also adds all masters by hostname and localhost/127.0.0.1 to
apiserver SSL certificate.
Includes documentation update on how localhost loadbalancer works.
Initially this was removed, but it turns out that services that
perform reverse lookups (such as MariaDB) will encounter severe
performance degradation with this disabled.
- Allow to overwrite calico cni binaries copied from hyperkube
by the custom ones.
- Fix calico-ipam deployment (it had wrong source in rsync)
- Make copy from hyperkube idempotent (use rsync instead of cp)
- Remove some orphaned comments
* Add a var for ndots (default 5) and put it in hosts' /etc/resolv.conf.
* Bump the kube dns container image to v1.7.
* In order to apply changes to kubelet, notify it to
be restarted on changes made to /etc/resolv.conf. Ignore errors, as the kubelet
may not yet be present at the moment the notification is processed.
* Remove the unnecessary kubelet restart for the master role as the node role ensures
it is up and running. Notify the master static pod waiters for apiserver,
scheduler and controller-manager instead.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
"else omit" is causing problems in this expression. Replacing
it with more strict "inventory_hostname" fixes the issue and
handles `download_run_once` as expected.
Closes issue #514
- Update docs and a drawing to clarify DNS setup.
- Change order of nameservers placement to match
changes in https://github.com/kubespray/kargo/pull/501
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Change additional dnsmasq opts:
- Adjust caching size and TTL
- Disable reading resolv.conf to avoid creating loops
- Change dnsPolicy to default (similarly to kubedns's dnsmasq);
ClusterFirst should not be used, to avoid creating loops
- Disable caching of negative NXDOMAIN replies
- Make its installation an optional step (enabled by default).
If you don't want more than 3 DNS servers, including 1 for K8s, disable
it.
- Add docs and a drawing to clarify DNS setup.
- Fix stdout logs for dnsmasq/kubedns app configs
- Add missed notifies to resolvconf -u handler
- Fix idempotency of resolvconf head file changes
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Add the retry_stagger var to tweak push and retry time strategies.
* Add large deployments related docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Move version/repo vars to download role.
Add container to download params, which overrides url/source_url,
if enabled.
Fix networking plugins download depending on kube_network_plugin.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Removed api-version from kube.py because it is deprecated.
Updating both kube.py files because the dnsmasq one is actually used.
Fixed name back to kubedns for checking its resource.
Move updating resolvconf to the network restart handler to
ensure changes applied to the /etc/resolv.conf.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Shorten deployment time with:
- Remove redundant roles if duplicated by a dependency and vice versa
- When a member of k8s-cluster, always install docker as a dependency
of the etcd role and drop the docker role from cluster.yaml.
- Drop etcd and node role dependencies from master role as they are
covered by the node role in k8s-cluster group as well. Copy defaults
for master from node role.
- Decouple master, node, secrets roles handlers and vars to be used w/o
cross references.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Ensure additional nameserver/search entries, if defined as vars.
* Don't back up changed dhclient hooks as they are going to be
executed by dhclient as well, which is not what we want.
* For the debian OS family only:
- Rename the nodnsupdate hook so that the resolvconf hook is always
sourced before it.
- Ensure dhclient is restarted via network restart to apply the
nodnsupdate hook.
* For the rhel OS family the fix is TBD; it doesn't work the same way.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Rename the nodnsupdate hook so that the resolvconf hook is always
sourced before it.
Ensure dhclient is restarted via network restart to apply the
nodnsupdate hook.
Ensure additional nameserver/search entries, if defined as vars.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
If resolvconf was installed and then removed, the file
/etc/resolvconf/resolv.conf.d/head remains in the filesystem
- change discovery of 'resolvconf' executable to check if it
can be located with 'which resolvconf' command or not.
Hyperkube from CoreOS now ships with all binaries required for
calico and flannel (but not weave). It simplifies deployment for
some network plugin scenarios to not download CNI images.
TODO: Optionally disable downloading calico to /opt/cni/bin
Creating the unit using default settings early on
and then changing it during network_plugin section
leads to too many docker restarts and duplicated code.
Reversed Wants= dependence on docker.service so it does not
restart docker when reloading systemd
Consolidated all docker restart handlers.
* Add for docker system units:
ExecReload=/bin/kill -s HUP $MAINPID
Delegate=yes
KillMode=process.
* Add missed DOCKER_OPTIONS for calico/weave docker systemd unit.
* Change Requires= to a less strict and non-faily Wants=, add missing
Wants= for After=.
* Align wants/after in a way that if Wants=foo, After= has foo as well.
* Make wants/after docker.service to ask for the docker.socket as well.
* Move "docker rm -f" commands from ExecStartPre= to ExecStopPost=.
hooks to ensure non-destructive start attempts issued by Wants=.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
etcd facts are generated in kubernetes/preinstall, so etcd nodes need
to be evaluated first before the rest of the deployment.
Moved several directory facts from kubernetes/node to
kubernetes/preinstall because they are not backward dependent.
* Add HA docs for API server.
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Use facts for kube_apiserver to not repeat code and enable LB endpoints use.
* Use /healthz check for the wait-for apiserver.
* Use the single endpoint for kubelet instead of the list of apiservers
* Specify kube_apiserver_count for the HA layout
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Nearly the last stage of sourcing all components from containers.
Kubectl will be called from the hyperkube image.
Remaining tasks:
* Move the kube_version variable to kubernetes/preinstall
* Drop the placeholder download.nothing requirement
kubelet via docker
kube-apiserver as a static pod
Fixed etcd service start to be more tolerant of slow start.
Workaround for kube_version to stay in the download role, but not
download any files, by creating a new "nothing" download entry.
Adds new boolean configuration variable for calico network plugin
`ipip`. When it's enabled calico pool is created with '--ipip'
option (IP-over-IP encapsulation across hosts).
Also refactor pool creation tasks to simplify logic and make tasks
more readable.
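A rough sketch of the pool creation with the legacy (Python) calicoctl; the subnet variable and condition are illustrative:
```
- name: Create Calico pool with IP-over-IP encapsulation
  command: "calicoctl pool add {{ kube_pods_subnet }} --ipip --nat-outgoing"
  when: ipip|bool
  run_once: true
```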
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Add loadbalancer_apiserver_localhost (default false). If enabled, override
the external LB and expect localhost:443/8080 to be new internal only frontends.
* Add kube_apiserver_multiaccess to ignore loadbalancers, and make clients
to access the apiservers as a comma-separated list of access_ip/ip/ansible ip
(a default mode). When disabled, allow clients to use the given loadbalancers.
* Define connections security mode for kube controllers, schedulers, proxies.
It is insecure be default, which is the current deployment choice.
* Rework the groups['kube-master'][0] hardcode defining the apiserver
endpoints.
* Improve grouping of vars and add facts for kube_apiserver.
* Define kube_apiserver_insecure_bind_address as a fact, add more
facts for ease of use.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Improved docker reload command to wait for etcd to be
up before proceeding. Switched reload to run restart
because it can't reload if it is not guaranteed to be
in running state.
Move set_facts to the preinstall scope, so every role
may see it. For example, network plugins to see the etcd_endpoint.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Enforce a etcd-proxy role to a k8s-cluster group members. This
provides an HA layout for all of the k8s cluster internal clients.
* Proxies are to be run on each node in the group as separate etcd
instances in readwrite proxy mode, listening on the given endpoint,
which is either access_ip:2379 or localhost:2379.
* The notion for 'kube_etcd_multiaccess' is: ignore endpoints and
loadbalancers and use the etcd members' IPs as a comma-separated
list. Otherwise, clients shall use the local endpoint provided by the
etcd-proxy instances on each etcd node. Networking plugins always
use that access mode.
* Fix apiserver's etcd servers args to use the etcd_access_endpoint.
* Fix networking plugins flannel/calico to use the etcd_endpoint.
* Fix name env var for non masters to be set as well.
* Fix etcd_client_url was not used anywhere and other etcd_* facts
evaluation was duplicated in a few places.
* Define proxy modes only in the env file, if not a master. Del
an automatic proxy mode decisions for etcd nodes in init/unit scripts.
* Use Wants= instead of Requires= as "This is the recommended way to
hook start-up of one unit to the start-up of another unit"
* Make apiserver/calico Wants= etcd-proxy to keep it always up
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
Running etcd in Docker reduces the number of individual file
downloads and services running on the host.
Note: etcd container v3.0.1 moves bindir to /usr/local/bin
Fixes: #298
* Set ETCD_INITIAL_CLUSTER_STATE from `new` to `existing`,
because parameter `new` makes sense only on cluster assembly
stage.
* If cluster exists and current node is not a part
of the cluster, add it with command `etcdctl add member name url`.
Closes kubespray/kargo/#270
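Sketch of the join step (member name and peer URL variables are placeholders):
```
- name: Add this node to the existing etcd cluster
  command: "etcdctl member add {{ etcd_member_name }} {{ etcd_peer_url }}"
```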
This should make things a little more composable,
by making these roles meta roles that perform no
actions by default we allow each role to own its own
resources.
Kubernetes API server has an option:
```
--advertise-address=<nil>: The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
```
kargo does not set --bind-address, thus it binds to eth0; in vagrant and similar
environments this causes issues because nodes cannot talk to each other over eth0.
This sets `--advertise-address` to `ip` if it is set; otherwise the default behavior
is preserved by using `ansible_default_ipv4.address`.
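Sketch of the templated flag in the apiserver manifest:
```
- --advertise-address={{ ip | default(ansible_default_ipv4.address) }}
```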
Using a shared folder can cause race conditions for the download
role as it tries to download files on all the nodes to the same
shared path. This adds a flag to run the tasks in the download
role on just one node.