Commit graph

139 commits

Author SHA1 Message Date
Kuldip Madnani
36898a2c39 Adding pod priority for all the components. (#3361)
* Changes to assign pod priority to kube components.

* Removed the boolean flag pod_priority_assignment

* Created new priorityclass k8s-cluster-critical

* Fixed the trailing spaces

* Added kube version check while creating Priority Class k8s-cluster-critical

* Moved k8s-cluster-critical.yml to kube_config_dir
2018-09-25 07:50:22 -07:00
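As a rough illustration of the priority class this PR introduces, a manifest along these lines could be used (the name comes from the commit; the apiVersion, value, and description are assumptions and depend on the Kubernetes version):

```yaml
# Illustrative sketch only; value and description are assumptions.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: k8s-cluster-critical
value: 1000000000
globalDefault: false
description: "Priority class for cluster-critical components deployed by kubespray."
```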
k8s-ci-robot
7f91f6e034
Merge pull request #3287 from Kami-no/coredns_metrics
Monitor CoreDNS over svc
2018-09-16 23:39:59 -07:00
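Monitoring CoreDNS "over svc" typically means scraping metrics through a Service rather than individual pods; a minimal sketch, assuming the common prometheus.io annotation convention and CoreDNS's default metrics port 9153:

```yaml
# Sketch of a metrics Service for CoreDNS; selector and annotations are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9153"
spec:
  selector:
    k8s-app: kube-dns
  ports:
    - name: metrics
      port: 9153
      targetPort: 9153
      protocol: TCP
```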
k8s-ci-robot
e6a2e34dd1
Merge pull request #3315 from riverzhang/upgrade-kubedns
Upgrade kubedns to 1.14.11
2018-09-15 02:08:20 -07:00
rongzhang
934d92f09c Upgrade kubedns to 1.14.11 2018-09-15 15:22:38 +08:00
Zinin D.A
29c7775ea1 Monitor CoreDNS over svc 2018-09-12 10:24:15 +03:00
Erwan Miran
e24b1220a0 Fix wrong ServiceAccount name in the ClusterRoleBinding when PSP is enabled 2018-09-11 15:04:55 +02:00
k8s-ci-robot
db11394711
Merge pull request #3200 from pablodav/feature/k8s_win_v1.11
Required support to start working on Windows node support
2018-09-03 04:51:23 -07:00
Pablo Estigarribia
7cbe3c2171 Ensure there is a pin priority for the docker package to avoid upgrading docker to an incompatible version
Ensure there is a pin priority for the docker package to avoid upgrading docker to an incompatible version

remove empty when line

force kubeadm upgrade due to failure without --force flag

added nodeSelector for compatibility with hybrid clusters that include Windows nodes; also fix download with missing container type

fix syntax and LF newlines in files

fix yamllint check

some cleanup of unnecessary lines

remove conditions for nodeSelector
2018-09-02 12:47:06 -03:00
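For the hybrid-cluster part of this change, a nodeSelector is what keeps Linux-only workloads off Windows nodes; a minimal sketch, assuming the beta OS label that was current around Kubernetes 1.11:

```yaml
# Sketch: constrain a pod spec to Linux nodes in a mixed Linux/Windows cluster.
spec:
  nodeSelector:
    beta.kubernetes.io/os: linux
```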
Antoine Legrand
2f1fe44762 update images to use arch 2018-08-31 13:45:08 +02:00
Arslanbekov Denis
fe1e758856 Up dashboard version to 1.10.0 2018-08-28 14:10:19 +03:00
Erwan Miran
a6a14e7f77 Create the service account and roles even if RBAC is not enabled; they will just be ignored 2018-08-22 18:17:11 +02:00
Erwan Miran
80cfeea957 PSP, roles and role bindings for PodSecurityPolicy when podsecuritypolicy_enabled is true 2018-08-22 18:16:13 +02:00
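The objects referred to here (a PodSecurityPolicy plus RBAC to use it) look roughly like the following; names and rules are illustrative, not the exact manifests from this commit:

```yaml
# Illustrative PodSecurityPolicy and a ClusterRole granting "use" of it.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]
    verbs: ["use"]
```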
Wong Hoi Sing Edison
c3b3572025 Always create service account even if rbac_enabled = false 2018-08-22 11:41:29 +08:00
Andreas Krüger
b3e32c1393
Merge pull request #3094 from hedayat/master
Add --dns-loop-detect to dnsmasq used in kube-dns
2018-08-20 09:27:15 +02:00
rongzhang
35efc387c4 Fix DNS image pull error 2018-08-19 22:47:17 +08:00
Antoine Legrand
26bf719a02
Merge branch 'master' into multi-arch-support 2018-08-17 16:35:50 +02:00
Chad Swenson
3a85a2f81c
Merge pull request #3080 from mirwan/netchecker_template_rendering_filename
Netchecker manifests should not have j2 extension
2018-08-14 13:24:16 -05:00
Hedayat Vatankhah
c0221c2e72 Add --dns-loop-detect to dnsmasq used in kube-dns
It prevents DNS loops when the host's DNS server is a localhost DNS server,
or when the cluster's DNS server is also added as an upstream DNS server
2018-08-12 20:36:33 +04:30
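In the kube-dns manifests this surfaces as an extra dnsmasq argument; a minimal sketch of the container args (the image reference and the other flags are illustrative):

```yaml
# Sketch: dnsmasq container with DNS loop detection enabled.
containers:
  - name: dnsmasq
    image: dnsmasq:example   # illustrative image reference
    args:
      - --keep-in-foreground
      - --log-facility=-
      - --dns-loop-detect
```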
Cédric de Saint Martin
e3dcd96301 kubedns & kubedns-autoscaler: Stick to master nodes. (#2909)
* kubedns & kubedns-autoscaler: Stick to master nodes.

 - Tolerate only master nodes and not any NoSchedule taint
 - Pods are on different nodes
 - Pods are required to be on a master node.

* kubedns: use soft nodeAffinity.

Prefer to be on a master node, don't require.

* coredns: Stick to (different) master nodes.

 - Pods are on different nodes
 - Pods are preferred to be on a master node.
2018-08-09 10:42:53 -05:00
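A condensed sketch of the scheduling constraints described above: tolerate only the master taint, prefer (but don't require) master nodes, and keep replicas on different nodes. The label keys and weight are typical values, not necessarily the exact ones from the manifests:

```yaml
# Sketch of the kubedns scheduling constraints described in the commit.
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              k8s-app: kube-dns
          topologyKey: kubernetes.io/hostname
```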
Erwan Miran
494ff9522b The .j2 extension should only be used for the template filename, not the target file on the remote host 2018-08-09 11:29:45 +02:00
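In Ansible terms this means the .j2 suffix stays on the template source only; a minimal sketch, with an illustrative netchecker file name:

```yaml
# Sketch: the src keeps .j2, the rendered dest on the remote host does not.
- name: Render netchecker manifest
  template:
    src: netchecker-agent-ds.yml.j2
    dest: "{{ kube_config_dir }}/netchecker-agent-ds.yml"
```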
Mathieu Herbert
d285565475 Add tags for coredns and kubedns 2018-08-07 20:55:38 +02:00
DBLaci
b61c64a8ea token-ttl default value is an int in seconds 2018-07-19 12:15:47 +02:00
DBLaci
cb91003cea Allow overriding the dashboard_token_ttl option, with a default 2018-07-13 15:26:18 +02:00
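As an inventory override this amounts to a single variable; the value here is illustrative:

```yaml
# Illustrative group_vars entry; 900 seconds = 15 minutes.
dashboard_token_ttl: 900
```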
Derek Lemon
1e98e8444e Use the DNS domain instead of the cluster name for coredns, in case they differ 2018-06-13 18:52:35 +00:00
Di Xu
1081f620d2 add support for non-amd64 arch gcr.io images
Currently all the gcr.io images used in kubespray can only run on x86.
Also, gcr.io does not yet fully support multi-arch docker images.

Add extra var "image_arch" (default is amd64) to support running other
platforms, like arm64.

Change-Id: I8e1c9af533c021cb96ade291a1ce58773b40e271
2018-06-05 17:29:02 +08:00
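A sketch of how the image_arch variable described above typically appears in the download vars; the repo variable name and image path are illustrative:

```yaml
# Sketch: arch-suffixed image repo driven by image_arch (default amd64).
image_arch: amd64   # e.g. arm64 for ARM platforms
kubedns_image_repo: "gcr.io/google-containers/k8s-dns-kube-dns-{{ image_arch }}"
```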
Julien Girardin
f88cd27686 Add dashboard URL as part of kubectl cluster-info output 2018-05-28 11:46:11 +02:00
rongzhang
742a8782dd Bump kube-dns to 1.14.10
Upgrade kube-dns to 1.14.10
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
2018-05-15 03:29:10 +00:00
Cédric de Saint Martin
44cb126e7d Update netchecker to v1.2.2.
Using the official Mirantis image from Docker Hub.
2018-04-24 09:13:56 +02:00
Matthew Mosesohn
03bcfa7ff5
Stop templating kube-system namespace and creating it (#2545)
Kubernetes creates this namespace automatically, so there is
no need for kubespray to manage it.
2018-03-30 14:29:13 +03:00
Kuldip Madnani
daeeae1a91 Added retries in pre-upgrade.yml and retries while applying kube-dns.yml (#2553)
* Added retries in pre-upgrade.yml and retries while applying kube-dns.yml

* Removed trailing spaces
2018-03-29 11:37:32 -05:00
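A minimal sketch of an Ansible retry loop of the kind added here; the task name, paths, and counts are illustrative:

```yaml
# Sketch: retry applying the kube-dns manifest until kubectl succeeds.
- name: Apply kube-dns manifest
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/kube-dns.yml"
  register: apply_result
  until: apply_result.rc == 0
  retries: 4
  delay: 5
```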
woopstar
e40368ae2b Add CoreDNS support with various fixes
Added CoreDNS to downloads

Updated with labels. Should now work without RBAC too

Fix DNS settings on hosts

Rename CoreDNS service from kube-dns to coredns

Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html

Updated docs with CoreDNS info

Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed

Added a secondary deployment and secondary service IP. This is to mitigate DNS timeouts and provide high resiliency against failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'

Set DNS list correctly. Thanks to @whereismyjetpack

Only download KubeDNS or CoreDNS if selected

Move dns cleanup to its own file and import tasks based on dns mode

Fix install of KubeDNS when dnsmasq_kubedns mode is selected

Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment. Removed rotate option in resolv.conf.

Run DNS manifests for CoreDNS and KubeDNS

Set skydns servers on dual stack deployment

Use only one template for CoreDNS dual deployment

Set correct cluster ip for the dns server
2018-03-16 21:51:37 +01:00
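A sketch of the inventory settings implied by the dual-stack option above; dns_mode and coredns_dual come from the commit, while the replica variable name is an assumption:

```yaml
# Illustrative inventory vars for the dual CoreDNS deployment.
dns_mode: coredns_dual
coredns_replicas: 2   # variable name assumed; the commit only says replicas are configurable
```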
Dmitry Vlasov
977e7ae105 remove obsolete init image, bump dashboard version 1.8.1 -> 1.8.3 2018-02-28 12:52:59 +03:00
Antoine Legrand
37cfd289d8
Merge pull request #2248 from hswong3i/dashboard.yml.j2
Dashboard template should not be suffixed with .yml.j2
2018-02-06 11:25:02 +01:00
Wong Hoi Sing Edison
4ad53339f6 KubeDNS template should not be suffixed with .yml.j2 2018-02-05 16:26:54 +08:00
Wong Hoi Sing Edison
a4d3da6a8e Dashboard template should not be suffixed with .yml.j2 2018-02-05 16:18:21 +08:00
RongZhang
3846384d56 Bump kube-dns to 1.14.8 (#2204)
Bump kube-dns to 1.14.8
2018-01-30 19:23:37 +03:00
Matthew Mosesohn
dc6a17e092
Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
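The difference in a nutshell: import_tasks is resolved statically when the playbook is parsed, while include_tasks is evaluated at runtime. A minimal sketch with illustrative file and variable names:

```yaml
# Static import: processed at parse time, lower memory overhead.
- import_tasks: kube-dns.yml

# Dynamic include: needed when the file name is only known at runtime.
- include_tasks: "{{ dns_cleanup_file }}"
```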
Brad Beam
08fe61e058
Merge pull request #2071 from riverzhang/dashboard
Update dashboard version to v1.8.1
2018-01-24 20:10:05 -06:00
rong.zhang
df21fc8643 Remove initContainer 2018-01-10 12:17:17 +08:00
rong.zhang
6ed2a60978 Fix dashboard run error 2018-01-04 13:13:36 +08:00
rong.zhang
5aef52e8c0 fix dashboard certs secret 2017-12-22 11:17:05 +08:00
rong.zhang
b974b144a8 Add RBAC binding for Dashboard UI 2017-12-18 23:07:19 +08:00
rong.zhang
0771cd8599 Remove dashboard_tls_key and dashboard_tls_cert 2017-12-13 15:42:20 +08:00
rong.zhang
40edf8c6f5 Update dashboard version to v1.8.0
Update dependencies to be compatible with Kubernetes v1.8
2017-12-13 12:50:44 +08:00
Chad Swenson
b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
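In inventory terms the change boils down to roughly the following; kube_apiserver_insecure_port is named in the commit, and the companion settings reflect the constraints discussed in the original (later reverted) revision further down this log:

```yaml
# Sketch: disable the apiserver insecure HTTP port entirely.
kube_apiserver_insecure_port: 0
# Needed so the HTTPS liveness probe can reach /healthz (see the older commit below).
kube_api_anonymous_auth: true
authorization_modes:
  - RBAC
```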
Chad Swenson
a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson
0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required completely changing the previous access model for the dashboard, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until apiserver is updated with the https proxy URL:
* Can access from https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine to access dashboard in your browser from: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access dashboard from behind a gateway that enforces an authentication token, details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
Matthew Mosesohn
f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson
0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
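Point 3 above (liveness probe over HTTPS) would look roughly like this in the apiserver static pod manifest; port 6443 is the default secure port, and the timing values are illustrative:

```yaml
# Sketch: apiserver liveness probe hitting /healthz over the secure port.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443
    scheme: HTTPS
  initialDelaySeconds: 30
  timeoutSeconds: 10
```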
Andrew Greenwood
c383c7e2c1
Update kubedns image to latest 2017-10-29 21:58:05 -04:00