Commit graph

102 commits

Author SHA1 Message Date
Brad Beam
08fe61e058
Merge pull request #2071 from riverzhang/dashboard
Update dashboard version to v1.8.1
2018-01-24 20:10:05 -06:00
rong.zhang
df21fc8643 Remove initContainer 2018-01-10 12:17:17 +08:00
rong.zhang
6ed2a60978 fix error when running dashboard 2018-01-04 13:13:36 +08:00
rong.zhang
5aef52e8c0 fix dashboard certs secret 2017-12-22 11:17:05 +08:00
rong.zhang
b974b144a8 Add RBAC binding for Dashboard UI 2017-12-18 23:07:19 +08:00
rong.zhang
0771cd8599 Remove dashboard_tls_key and dashboard_tls_cert 2017-12-13 15:42:20 +08:00
rong.zhang
40edf8c6f5 Update dashboard version to v1.8.0
Update dependencies to be compatible with Kubernetes v1.8
2017-12-13 12:50:44 +08:00
Chad Swenson
b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
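A minimal sketch of the setting this change enables, assuming the usual Kubespray group_vars layout; the path in the comment and the companion port value are illustrative, not taken from the commit itself:

```yaml
# inventory/<cluster>/group_vars/k8s-cluster.yml (illustrative path)
kube_apiserver_insecure_port: 0   # 0 disables the plaintext listener entirely
kube_apiserver_port: 6443         # all apiserver traffic now uses the secure port
```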
Chad Swenson
a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson
0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required completely changing the previous dashboard access model, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until the apiserver is updated with the https proxy URL
* Can be accessed at https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or run 'kubectl proxy' from your local machine and access the dashboard in your browser at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access the dashboard from behind a gateway that enforces an authentication token; details and other access options are here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
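A minimal sketch of the cluster settings the refactored dashboard expects; the variable names follow common Kubespray conventions and are assumptions here rather than values quoted from the commit:

```yaml
authorization_modes:
  - Node
  - RBAC                   # the v1.7.1 dashboard requires RBAC in authorization_modes
dashboard_enabled: true    # assumed toggle for deploying the dashboard manifests
```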
Matthew Mosesohn
f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson
0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
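A minimal sketch of an apiserver livenessProbe hitting /healthz over the secure port, which is what option 1 (anonymous-auth plus RBAC) makes workable; the concrete numbers are illustrative, not the template's actual values:

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443        # kube_apiserver_port, since the insecure port is disabled
    scheme: HTTPS     # anonymous request; RBAC's default bindings allow it to reach /healthz
  initialDelaySeconds: 30
  timeoutSeconds: 10
```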
Andrew Greenwood
c383c7e2c1
Update kubedns image to latest 2017-10-29 21:58:05 -04:00
Matthew Mosesohn
ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00
Spencer Smith
46cf6b77cf Merge pull request #1857 from pmontanari/patch-1
Use the same kubedns_version: 1.14.5 in the downloads and kubernetes-apps/ansible roles
2017-10-25 10:05:43 -04:00
Matthew Mosesohn
33c4d64b62 Make ClusterRoleBinding admit all nodes with the right cert (#1861)
This is to work around #1856 which can occur when kubelet
hostname and resolvable hostname (or cloud instance name)
do not match.
2017-10-24 17:05:58 +01:00
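A minimal sketch of the kind of ClusterRoleBinding this describes: binding the node role to the system:nodes group, so any kubelet presenting a client cert in that group is admitted regardless of hostname mismatches. The binding name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubespray:system:node     # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes            # matched via the group encoded in the kubelet client cert
```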
pmontanari
8371a060a0 Update main.yml
Match kubedns_version with roles/download/defaults/main.yml:kubedns_version: 1.14.5
2017-10-22 23:48:51 +02:00
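In other words, the same pin is kept in both defaults files; a sketch, with the two files shown as separate YAML documents:

```yaml
# roles/download/defaults/main.yml
kubedns_version: 1.14.5
---
# roles/kubernetes-apps/ansible/defaults/main.yml
kubedns_version: 1.14.5
```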
Matthew Mosesohn
4efb0b78fa Move CI vars out of gitlab and into var files (#1808) 2017-10-18 17:28:54 +01:00
Aivars Sterns
688e589e0c fix #1788: lock dashboard version to 1.6.3 while 1.7.x is not working (#1805) 2017-10-17 11:04:55 +01:00
刘旭
6c98201aa4 remove kube-dns versions and images in kubernetes-apps/ansible/defaults/main.yaml (#1807) 2017-10-17 11:03:53 +01:00
Aivars Sterns
9c86da1403 Normalize tags in all places to prepare for tag fixing in future (#1739) 2017-10-05 08:43:04 +01:00
Matthew Mosesohn
bd272e0b3c Upgrade to kubeadm (#1667)
* Enable upgrade to kubeadm

* fix kubedns upgrade

* try upgrade route

* use init/upgrade strategy for kubeadm and ignore kubedns svc

* Use bin_dir for kubeadm

* delete more secrets

* fix waiting for terminating pods

* Manually enforce kube-proxy for kubeadm deploy

* remove proxy. update to kubeadm 1.8.0rc1
2017-09-26 10:38:58 +01:00
Matthew Mosesohn
b294db5aed fix apply for netchecker upgrade (#1659)
* fix apply for netchecker upgrade and graceful upgrade

* Speed up daemonset upgrades. Make check wait for ds upgrades.
2017-09-15 13:19:37 +01:00
Matthew Mosesohn
6744726089 kubeadm support (#1631)
* kubeadm support

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job

* change ci boolean vars to json format

* fixup

* Update create-gce.yml

* Update create-gce.yml

* Update create-gce.yml
2017-09-13 19:00:51 +01:00
Matthew Mosesohn
75b13caf0b Fix kube-apiserver status checks when changing insecure bind addr (#1633) 2017-09-09 23:41:48 +03:00
Matthew Mosesohn
649388188b Fix netchecker update side effect (#1644)
* Fix netchecker update side effect

kubectl apply should only be used on resources created
with kubectl apply. To workaround this, we should apply
the old manifest before upgrading it.

* Update 030_check-network.yml
2017-09-09 23:38:38 +03:00
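A minimal sketch of that workaround as Ansible tasks: re-apply the old manifest first so the resource carries the annotations kubectl apply expects, then apply the updated one. The paths and task names are illustrative assumptions:

```yaml
- name: Netchecker | Apply the old manifest so kubectl apply has a baseline
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/old/netchecker-server-deployment.yml"
  run_once: true

- name: Netchecker | Apply the updated manifest
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/netchecker-server-deployment.yml"
  run_once: true
```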
Matthew Mosesohn
9fa1873a65 Add kube dashboard, enabled by default (#1643)
* Add kube dashboard, enabled by default

Also add rbac role for kube user

* Update main.yml
2017-09-09 23:38:03 +03:00
Matthew Mosesohn
d279d145d5 Fix non-rbac deployment of resources as a list (#1613)
* Use kubectl apply instead of create/replace

Disable checks for existing resources to speed up execution.

* Fix non-rbac deployment of resources as a list

* Fix autoscaler tolerations field

* set all kube resources to state=latest

* Update netchecker and weave
2017-09-05 08:23:12 +03:00
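A minimal sketch of the resulting pattern using Kubespray's bundled kube module with state=latest (kubectl apply semantics); the resource name, namespace variable, and file path are assumptions, not lines from the commit:

```yaml
- name: Netchecker | Start server deployment
  kube:
    name: netchecker-server
    kubectl: "{{ bin_dir }}/kubectl"
    resource: deploy
    namespace: "{{ netcheck_namespace }}"
    filename: "{{ kube_config_dir }}/netchecker-server-deployment.yml"
    state: latest          # apply instead of create/replace
```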
Matthew Mosesohn
660282e82f Make daemonsets upgradeable (#1606)
Canal will be covered by a separate PR
2017-09-04 11:30:01 +03:00
Matthew Mosesohn
a3e6896a43 Add RBAC support for canal (#1604)
Refactored how rbac_enabled is set
Added RBAC to ubuntu-canal-ha CI job
Added rbac for calico policy controller
2017-09-04 11:29:40 +03:00
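A minimal sketch of how rbac_enabled can be derived from the authorization modes after such a refactor; the exact expression is an assumption:

```yaml
rbac_enabled: "{{ 'RBAC' in authorization_modes }}"
```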
Matthew Mosesohn
6bb3463e7c Enable scheduling of critical pods and network plugins on master
Added toleration to DNS, netchecker, fluentd, canal, and
calico policy.

Also small fixes to make yamllint pass.
2017-08-24 10:41:17 +01:00
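A minimal sketch of the kind of tolerations added to those pod specs; the exact keys used by each component may differ:

```yaml
tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  - key: CriticalAddonsOnly
    operator: Exists
```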
Brad Beam
8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
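A minimal sketch of a .yamllint configuration of the sort this CI step consumes; the rule values are illustrative, not the repository's actual settings:

```yaml
extends: default
rules:
  line-length:
    max: 120
  truthy: disable
```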
Matthew Mosesohn
df28db0066 Fix cert and netchecker upgrade issues (#1543)
* Bump tag for upgrade CI, fix netchecker upgrade

netchecker-server was changed from pod to deployment, so
we need an upgrade hook for it.

CI now uses v2.1.1 as a basis for upgrade.

* Fix upgrades for certs from non-rbac to rbac
2017-08-18 15:46:22 +03:00
Brad Beam
af007c7189 Fixing netchecker-server type - pod => deployment (#1509) 2017-08-14 18:43:56 +03:00
jwfang
789910d8eb remove unused netchecker-agent-hostnet-ds.j2 2017-07-17 19:29:59 +08:00
jwfang
a8e6a0763d run netchecker-server with permission to list pods 2017-07-17 19:29:59 +08:00
jwfang
e1386ba604 only patch system:kube-dns role for old dns 2017-07-17 19:29:59 +08:00
jwfang
83deecb9e9 Revert "no need to patch system:kube-dns"
This reverts commit c2ea8c588aa5c3879f402811d3599a7bb3ccab24.
2017-07-17 19:29:59 +08:00
jwfang
d8dcb8f6e0 no need to patch system:kube-dns 2017-07-17 19:29:59 +08:00
jwfang
092bf07cbf basic rbac support 2017-07-17 19:29:59 +08:00
Spencer Smith
bba555bb08 Merge pull request #1346 from Starefossen/patch-1
Set kubedns minimum replicas to 2
2017-07-06 09:14:11 -04:00
Hans Kristian Flaatten
38f5d1b18e Set kubedns minimum replicas to 2 2017-07-04 16:58:16 +02:00
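A minimal sketch of the dns-autoscaler ConfigMap data with the minimum raised to 2, so kubedns keeps a replica through a single node failure; the surrounding numbers are illustrative:

```yaml
data:
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":2,"preventSinglePointFailure":true}'
```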
Chad Swenson
8467bce2a6 Fix inconsistent kubedns version and parameterize kubedns autoscaler image vars 2017-06-27 10:19:31 -05:00
Seungkyu Ahn
d5516a4ca9 Make kubedns up to date
Update kube-dns version to 1.14.2
https://github.com/kubernetes/kubernetes/pull/45684
2017-06-27 00:57:29 +00:00
Gregory Storme
266ca9318d Use the kube_apiserver_insecure_port variable instead of static 8080 2017-06-12 09:20:59 +02:00
Spencer Smith
efa2dff681 remove conditional 2017-05-12 17:16:49 -04:00
Spencer Smith
31a7b7d24e default to kubedns and set nxdomain in kubedns deployment if that's the dns_mode 2017-05-12 15:57:24 -04:00
Aleksandr Didenko
883ba7aa90 Add support for different tags for netcheck containers
Replace 'netcheck_tag' with 'netcheck_version' and add additional
'netcheck_server_tag' and 'netcheck_agent_tag' config options to
provide ability to use different tags for server and agent
containers.
2017-04-27 17:15:28 +02:00
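A minimal sketch of the split tag variables this introduces; the concrete tag values are illustrative:

```yaml
netcheck_version: "v1.0"
netcheck_server_tag: "{{ netcheck_version }}"
netcheck_agent_tag: "{{ netcheck_version }}"
```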
Aleksey Kasatkin
2638ab98ad add MY_NODE_NAME variable into netchecker-agent environment 2017-04-24 17:19:42 +03:00
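A minimal sketch of how MY_NODE_NAME is typically injected into the netchecker-agent container via the downward API; surrounding container fields are omitted:

```yaml
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```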
Spencer Smith
5c4980c6e0 Merge pull request #1231 from holser/fix_netchecker-server
Reschedule netchecker-server in case of HW failure.
2017-04-14 10:50:07 -04:00