Commit graph

62 commits

Andreas Krüger
17f07e2613 Enable DNS AutoScaler for CoreDNS (#3707)
* Enable AutoScaler for CoreDNS

* Only use one template for dns autoscaler

* Rename a few variables for replicas and minimum pods

* Rename a few variables for replicas and minimum pods

* Remove replicas to make autoscale work

* Cleanup kubedns-autoscaler as it has been renamed
2018-11-15 01:28:03 -08:00
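For context, the DNS autoscaler enabled here is the cluster-proportional-autoscaler, which sizes the CoreDNS deployment from a ConfigMap of linear scaling parameters. A minimal sketch, with illustrative values (the exact resource names kubespray uses may differ):

```yaml
# Illustrative linear-mode parameters for the DNS autoscaler; it computes
# replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)),
# floored at "min".
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 2,
      "preventSinglePointFailure": true
    }
```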
Ryler Hockenbury
d8e9b0f675 Netchecker version and namespace (#3705)
* Revert netchecker image and version

* Create namespace for netchecker

* Remove extra slashes
2018-11-14 09:27:45 -08:00
Erwan Miran
7bec169d58 Fix ansible syntax to avoid ansible deprecation warnings (#3512)
* failed

* version_compare

* succeeded

* skipped

* success

* version_compare becomes version since ansible 2.5

* ansible minimal version updated in doc and spec

* last version_compare
2018-10-16 15:33:30 -07:00
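A minimal sketch of the syntax change behind these commits: filter-style tests such as `version_compare`, `failed`, `succeeded`, `skipped` and `success` were deprecated in Ansible 2.5 in favor of `is`-style tests (variable names below are illustrative):

```yaml
# Old, deprecated filter syntax:
- name: Do something on a new enough Ansible
  debug:
    msg: ok
  when: ansible_version.full | version_compare('2.5.0', '>=')

# New test syntax (the same applies to "result is failed", "result is skipped", ...):
- name: Do something on a new enough Ansible
  debug:
    msg: ok
  when: ansible_version.full is version('2.5.0', '>=')
```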
Matthew Mosesohn
4b7d59224d Fix tag based deploy of apps by skipping kubeadm dns tasks (#3462) 2018-10-08 01:22:57 -07:00
Erwan Miran
80cfeea957 PSP, roles and role bindings for PodSecurityPolicy when podsecuritypolicy_enabled is true 2018-08-22 18:16:13 +02:00
Wong Hoi Sing Edison
c3b3572025 Always create service account even when rbac_enabled = false 2018-08-22 11:41:29 +08:00
Erwan Miran
494ff9522b The .j2 extension should only be used for the template filename, not the target file on the remote host 2018-08-09 11:29:45 +02:00
Mathieu Herbert
d285565475 Add tags for coredns and kubedns 2018-08-07 20:55:38 +02:00
Matthew Mosesohn
03bcfa7ff5
Stop templating kube-system namespace and creating it (#2545)
Kubernetes makes this namespace automatically, so there is
no need for kubespray to manage it.
2018-03-30 14:29:13 +03:00
Kuldip Madnani
daeeae1a91 Added retries in pre-upgrade.yml and retries while applying kube-dns.yml (#2553)
* Added retries in pre-upgrade.yml and retries while applying kube-dns.yml

* Removed trailing spaces
2018-03-29 11:37:32 -05:00
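A minimal sketch of the retry pattern added here, assuming a plain kubectl invocation (the real task may go through kubespray's kube module):

```yaml
# Retry kubectl apply to ride out transient apiserver unavailability
# during upgrades.
- name: Kubernetes Apps | Apply kube-dns manifest
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/kube-dns.yml"
  register: apply_result
  until: apply_result.rc == 0
  retries: 4
  delay: 5
```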
woopstar
e40368ae2b Add CoreDNS support with various fixes
Added CoreDNS to downloads

Updated with labels. Should now work without RBAC too

Fix DNS settings on hosts

Rename CoreDNS service from kube-dns to coredns

Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html

Updated docs with CoreDNS info

Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed

Added a secondary deployment and secondary service IP. This is to mitigate DNS timeouts and provide higher resiliency against failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'

Set dns list correct. Thanks to @whereismyjetpack

Only download KubeDNS or CoreDNS if selected

Move dns cleanup to its own file and import tasks based on dns mode

Fix install of KubeDNS when dnsmasq_kubedns mode is selected

Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment. Removed rotate option in resolv.conf.

Run DNS manifests for CoreDNS and KubeDNS

Set skydns servers on dual stack deployment

Use only one template for CoreDNS dual deployment

Set correct cluster ip for the dns server
2018-03-16 21:51:37 +01:00
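A hedged sketch of the inventory knobs this commit introduces; the exact variable names are assumptions and may differ from what finally landed:

```yaml
# k8s-cluster group_vars: select the in-cluster DNS stack.
dns_mode: coredns    # or: kubedns, coredns_dual
# coredns_dual deploys a secondary CoreDNS deployment with its own
# service IP to mitigate DNS timeouts and single-deployment failures.
```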
Antoine Legrand
37cfd289d8
Merge pull request #2248 from hswong3i/dashboard.yml.j2
Dashboard template should not be suffixed with .yml.j2
2018-02-06 11:25:02 +01:00
Wong Hoi Sing Edison
4ad53339f6 KubeDNS template should not be suffixed with .yml.j2 2018-02-05 16:26:54 +08:00
Wong Hoi Sing Edison
a4d3da6a8e Dashboard template should not be suffixed with .yml.j2 2018-02-05 16:18:21 +08:00
Matthew Mosesohn
dc6a17e092
Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
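The distinction, in a minimal sketch: `import_tasks` is resolved statically at parse time (hence the lower memory use), while `include_tasks` is evaluated dynamically at runtime and is needed when the filename depends on a variable:

```yaml
- import_tasks: install.yml              # static: preprocessed, lower memory use

- include_tasks: "{{ dns_mode }}.yml"    # dynamic: filename only known at runtime
```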
Chad Swenson
b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
Chad Swenson
a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson
0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required completely changing the previous dashboard access model, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until the apiserver is updated with the https proxy URL
* Can be accessed at https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine to access the dashboard in your browser at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token, details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
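A sketch of the RBAC requirement mentioned above, expressed as an inventory setting (the exact list contents are an assumption):

```yaml
# k8s-cluster group_vars: the refactored dashboard requires RBAC among
# the apiserver authorization modes.
authorization_modes:
  - Node
  - RBAC
```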
Matthew Mosesohn
f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson
0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
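A sketch of change 3 from the list above: with the insecure port disabled, the apiserver livenessProbe must target the secure port, which is why anonymous auth on /healthz comes into play:

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443       # kube_apiserver_port (HTTPS) instead of insecure 8080
    scheme: HTTPS
```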
Matthew Mosesohn
ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00
Matthew Mosesohn
33c4d64b62 Make ClusterRoleBinding to admit all nodes with right cert (#1861)
This is to work around #1856 which can occur when kubelet
hostname and resolvable hostname (or cloud instance name)
do not match.
2017-10-24 17:05:58 +01:00
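A sketch of the kind of binding this adds (the binding's own name is an assumption): it grants the built-in system:node role to the whole system:nodes group, so any kubelet presenting a valid node certificate is admitted regardless of hostname mismatches:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1   # RBAC API group of the k8s 1.8 era
kind: ClusterRoleBinding
metadata:
  name: kubespray:system:node    # assumption: the actual binding name may differ
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - kind: Group
    name: system:nodes
    apiGroup: rbac.authorization.k8s.io
```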
Aivars Sterns
9c86da1403 Normalize tags in all places to prepare for tag fixing in future (#1739) 2017-10-05 08:43:04 +01:00
Matthew Mosesohn
bd272e0b3c Upgrade to kubeadm (#1667)
* Enable upgrade to kubeadm

* fix kubedns upgrade

* try upgrade route

* use init/upgrade strategy for kubeadm and ignore kubedns svc

* Use bin_dir for kubeadm

* delete more secrets

* fix waiting for terminating pods

* Manually enforce kube-proxy for kubeadm deploy

* remove proxy. update to kubeadm 1.8.0rc1
2017-09-26 10:38:58 +01:00
Matthew Mosesohn
b294db5aed fix apply for netchecker upgrade (#1659)
* fix apply for netchecker upgrade and graceful upgrade

* Speed up daemonset upgrades. Make check wait for ds upgrades.
2017-09-15 13:19:37 +01:00
Matthew Mosesohn
6744726089 kubeadm support (#1631)
* kubeadm support

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job

* change ci boolean vars to json format

* fixup

* Update create-gce.yml

* Update create-gce.yml

* Update create-gce.yml
2017-09-13 19:00:51 +01:00
Matthew Mosesohn
75b13caf0b Fix kube-apiserver status checks when changing insecure bind addr (#1633) 2017-09-09 23:41:48 +03:00
Matthew Mosesohn
649388188b Fix netchecker update side effect (#1644)
* Fix netchecker update side effect

kubectl apply should only be used on resources created
with kubectl apply. To workaround this, we should apply
the old manifest before upgrading it.

* Update 030_check-network.yml
2017-09-09 23:38:38 +03:00
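A sketch of that workaround, with hypothetical task names and variables: resources created with `kubectl create` lack the last-applied-configuration annotation that `kubectl apply` diffs against, so the old manifest is applied first to record it:

```yaml
- name: Netchecker | apply old manifest to record last-applied state
  command: "{{ bin_dir }}/kubectl apply -f {{ netchecker_old_manifest }}"  # hypothetical variable

- name: Netchecker | apply upgraded manifest
  command: "{{ bin_dir }}/kubectl apply -f {{ netchecker_manifest }}"      # hypothetical variable
```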
Matthew Mosesohn
9fa1873a65 Add kube dashboard, enabled by default (#1643)
* Add kube dashboard, enabled by default

Also add rbac role for kube user

* Update main.yml
2017-09-09 23:38:03 +03:00
Matthew Mosesohn
d279d145d5 Fix non-rbac deployment of resources as a list (#1613)
* Use kubectl apply instead of create/replace

Disable checks for existing resources to speed up execution.

* Fix non-rbac deployment of resources as a list

* Fix autoscaler tolerations field

* set all kube resources to state=latest

* Update netchecker and weave
2017-09-05 08:23:12 +03:00
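A hedged sketch of what `state=latest` looks like with kubespray's bundled kube module, which delegates to `kubectl apply`; parameter details are assumptions:

```yaml
- name: Kubernetes Apps | Start Resources
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/{{ item.file }}"
    state: latest                      # apply the manifest instead of create/replace
  with_items: "{{ dns_manifests }}"    # hypothetical list variable
```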
Matthew Mosesohn
660282e82f Make daemonsets upgradeable (#1606)
Canal will be covered by a separate PR
2017-09-04 11:30:01 +03:00
Brad Beam
8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
Matthew Mosesohn
df28db0066 Fix cert and netchecker upgrade issues (#1543)
* Bump tag for upgrade CI, fix netchecker upgrade

netchecker-server was changed from pod to deployment, so
we need an upgrade hook for it.

CI now uses v2.1.1 as a basis for upgrade.

* Fix upgrades for certs from non-rbac to rbac
2017-08-18 15:46:22 +03:00
Brad Beam
af007c7189 Fixing netchecker-server type - pod => deployment (#1509) 2017-08-14 18:43:56 +03:00
jwfang
a8e6a0763d run netchecker-server with list pods 2017-07-17 19:29:59 +08:00
jwfang
e1386ba604 only patch system:kube-dns role for old dns 2017-07-17 19:29:59 +08:00
jwfang
83deecb9e9 Revert "no need to patch system:kube-dns"
This reverts commit c2ea8c588aa5c3879f402811d3599a7bb3ccab24.
2017-07-17 19:29:59 +08:00
jwfang
d8dcb8f6e0 no need to patch system:kube-dns 2017-07-17 19:29:59 +08:00
jwfang
092bf07cbf basic rbac support 2017-07-17 19:29:59 +08:00
Gregory Storme
266ca9318d Use the kube_apiserver_insecure_port variable instead of static 8080 2017-06-12 09:20:59 +02:00
Seungkyu Ahn
d5516a4ca9 Make kubedns up to date
Update kube-dns version to 1.14.2
https://github.com/kubernetes/kubernetes/pull/45684
2017-06-27 00:57:29 +00:00
Sergii Golovatiuk
45044c2d75 Reschedule netchecker-server in case of HW failure.
A Pod object is not reschedulable by Kubernetes. It means that if the node
running netchecker-server goes down, netchecker-server won't be scheduled
anywhere else. This commit changes the type of netchecker-server to
Deployment, so netchecker-server will be rescheduled onto other nodes in
case of failures (see the sketch below).
2017-04-14 10:49:16 +02:00
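A minimal sketch of the change (image and namespace variables are hypothetical): a Deployment's controller reschedules the pod onto a healthy node, which a bare Pod object never gets:

```yaml
apiVersion: extensions/v1beta1   # Deployment API group in the k8s 1.5/1.6 era
kind: Deployment
metadata:
  name: netchecker-server
  namespace: "{{ netcheck_namespace }}"   # hypothetical variable
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: netchecker-server
    spec:
      containers:
        - name: netchecker-server
          image: "{{ server_img }}"       # hypothetical variable
```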
Sergii Golovatiuk
2670eefcd4 Refactoring resolv.conf
- Renaming templates for netchecker
- Add dnsPolicy: ClusterFirstWithHostNet to kube-proxy

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-05 09:28:01 +02:00
Sergii Golovatiuk
1cfe0beac0 Set ClusterFirstWithHostNet for Pods with hostnetwork: true
Kubernetes 1.6 added ClusterFirstWithHostNet as an option. With it,
kubelet generates the pod's resolv.conf based on its own resolv.conf.
However, this doesn't carry over the 'options' entries, so the proper
solution requires some investigation.

This patch gives kubelet the same resolv.conf as the host

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-04 16:34:13 +02:00
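For reference, a minimal sketch of the 1.6 option discussed above, in a pod spec:

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # resolve cluster names despite host networking
```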
Aleksandr Didenko
3a39904011 Move calico-policy-controller into separate role
By default, Calico CNI does not create any network access policies
or profiles if 'policy' is enabled in the CNI config. And without any
policies/profiles, network access to/from pods is blocked.

K8s-related policies are created by calico-policy-controller in
that case. So we need to start it as soon as possible, before any
real workloads.

This patch also fixes kube-api port in calico-policy-controller
yaml template.

Closes #1132
2017-03-17 11:21:52 +01:00
Matthew Mosesohn
9cb12cf250 Add autoscalers for dnsmasq and kubedns
By default kubedns and dnsmasq scale when installed.
Dnsmasq is no longer a DaemonSet. It is now a Deployment.
Kubedns is no longer a ReplicationController. It is now a Deployment.
The minimum replica count is two (to enable rolling updates).

Reduced memory requirements for dnsmasq and kubedns
2017-03-02 13:44:22 +03:00
Andrew Greenwood
ca9ea097df Cleanup legacy syntax, spacing, files all to yml
Migrate the older inline= syntax to pure yml syntax for module args, to be consistent with most of the rest of the tasks
Clean up some spacing in various files
Rename some files named yaml to yml for consistency
2017-02-17 16:22:34 -05:00
Matthew Mosesohn
3c713a3f53 Fix upgrade for all daemonset type resources
Daemonsets cannot be simply upgraded through a single API call,
regardless of any kubectl documentation. The resource must be
purged and then recreated in order to make any changes.
2017-02-08 18:16:00 +03:00
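A sketch of the purge-and-recreate cycle this implies; the loop variable is hypothetical:

```yaml
- name: Purge existing DaemonSets before changing them
  command: "{{ bin_dir }}/kubectl delete daemonset {{ item }} -n kube-system"
  failed_when: false                     # tolerate "not found" on first deploy
  with_items: "{{ daemonset_names }}"    # hypothetical list variable

- name: Recreate DaemonSets from their manifests
  command: "{{ bin_dir }}/kubectl create -f {{ kube_config_dir }}/{{ item }}.yml"
  with_items: "{{ daemonset_names }}"
```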
Greg Althaus
61dab8dc0b Should only check for api-server running on the master.
If this runs on other nodes, it will fail the playbook.
2017-01-17 15:57:34 -06:00
Alexander Block
1d2a18b355 Introduce dns_mode and resolvconf_mode and implement docker_dns mode
Also update reset.yml to do more dns/network related cleanup.
2017-01-05 23:38:51 +01:00
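A sketch of the two knobs as inventory settings; the mode lists reflect kubespray's DNS docs of that era, but treat the exact values as assumptions:

```yaml
# Which DNS servers run in the cluster.
dns_mode: dnsmasq_kubedns     # or: kubedns
# How hosts are pointed at them.
resolvconf_mode: docker_dns   # or: host_resolvconf, none
```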