Compare commits


886 commits

Author SHA1 Message Date
douzeb 96f5d1ca87 Add c12s sample inventory and deploy script 2022-12-21 23:34:33 +01:00
douzeb 7cb7887234 increase max ansible version 2022-12-21 18:57:38 +01:00
yanggang 4728739597
follow containerd 1.6.13 and 1.6.14 (#9585)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2022-12-21 00:35:28 -08:00
Kay Yan fc0d58ff48
fix-missing-control-plane-taint (#9592) 2022-12-19 15:57:43 -08:00
janaurka 491e260d20
Feature/add flannel wireguard encryption backend as option (#9583)
* feat(): Add wireguard backend to flannel cni

As described in the flannel docs:
https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard

This does not support optional configuration methods like:
- setting a psk (will be autogenerated by default)
- change listening ports
- change mode (defaults to 'separate')
- change PersistentKeepaliveInterval (defaults to 0)

* Add supported backends to flannel docs

* Fix markdown in docs
2022-12-18 15:39:43 -08:00
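For context, a hedged sketch of enabling the backend this commit adds; `flannel_backend_type` is assumed to be the kubespray variable that selects the flannel backend:

```
# group_vars sketch (variable name assumed): select the wireguard backend
# described in the flannel docs linked above. The default backend is vxlan.
flannel_backend_type: "wireguard"
```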
Kenichi Omichi a132733b2d
Merge pull request #9581 from Xieql/fix-annotation-typo
Fix annotation typo
2022-12-17 11:03:05 +09:00
Kenichi Omichi b377dbb96f
Merge pull request #9579 from HassanAbouelela/fix-kep-0030
Fix Broken KEP Link In Docs
2022-12-16 09:35:28 +09:00
Xieql c4d753c931 Fix annotation typo
Signed-off-by: Xieql <xieqianglong@huawei.com>
2022-12-15 18:40:30 +08:00
Lukas Najman ee3b7c5da5
Use the correct api version and resource type. The current values work but do not match the documentation, which can be confusing. (#9575) 2022-12-15 01:21:35 -08:00
Florian Ruynat dcc267f6f4
Remove include task in play, deprecated in favor of import_playbook (#9576) 2022-12-15 01:13:35 -08:00
Robin Wallace ccf60fc9ca
upcloud: Delete default reclaim policy (#9574) 2022-12-14 16:15:34 -08:00
Kay Yan a38a3e7ddf
upgrade-calico-v3.24.5 (#9580) 2022-12-14 09:21:36 -08:00
Hassan Abouelela beb4aa52ea
Fix Broken KEP Link In Docs
Fix a broken link to KEP 0030 in the dns-stack docs,
which has been merged into KEP 1024.
2022-12-14 13:54:05 +03:00
Aveline f7d0fb9ab2
rename ansible groups to use _ instead of - (#9569) 2022-12-13 21:19:34 -08:00
Book shu ff331f4eba
support flannel dual stack (#9564) 2022-12-13 20:47:35 -08:00
JSpon 94eae6a8dc
adjust calico-kube-controller to use hostNetwork when using etcd as datastore (#9573) 2022-12-13 20:41:34 -08:00
yanggang f8d6b54dbb
Add hashes for 1.25.5, 1.24.9, 1.23.15 and make v1.25.5 default (#9557)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2022-12-11 16:45:33 -08:00
emiran-orange 67c4f2d95e
Add XDG related Helm paths to be removed (#9561) 2022-12-10 03:59:40 -08:00
Mohamed Zaian 03fefa8933
[feat] Upgrade metrics server to v0.6.2 (#9554) 2022-12-10 03:55:40 -08:00
Fredrik Liv c8ec77a734
[containerd] Add config for unprivileged ports and icmp (#9517)
* [containerd] Add config for unprivileged ports and icmp

* Updated to match the true/false variables of other settings
2022-12-09 06:16:12 -08:00
Chad Swenson 4f32f94a51
Fix drain rescue task when kube_override_hostname is set (#9556)
This fixes a task failure in the rescue block that uncordons nodes after an unsuccessful drain. The issue occurs when `kube_override_hostname` is set and does not match `inventory_hostname`.
2022-12-08 16:02:11 -08:00
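A minimal sketch of the idea behind the fix above (not the exact kubespray task): resolve the node name the same way kubelet registered it before uncordoning.

```
# Rescue-path sketch: fall back to inventory_hostname only when
# kube_override_hostname is unset, so uncordon targets the right node.
- name: Uncordon node after failed drain
  command: >-
    kubectl uncordon {{ kube_override_hostname | default(inventory_hostname) }}
  delegate_to: "{{ groups['kube_control_plane'][0] }}"
```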
Chad Swenson 3dc384a17a
Allow containerd-common to execute multiple times per play (#9543)
The `containerd-common` role is responsible for gathering OS specific variables from the vars directory of the roles that include or import it. `containerd-common` is imported via role dependency by a total of two roles, `container-engine/docker`, and `container-engine/containerd`.

containerd-common is needed by both the docker and containerd roles as a dependency when:
- containerd is selected as the container engine
- a docker install is detected and needs to be removed
- apt is the package manager

However, by default, roles can not be invoked more than once in the same play, unless `allow_duplicates: true` is set for that role. This results in the failure of the `containerd | Remove containerd repository` task, since only the docker vars will be loaded in the play, and `containerd_repo_info.repos`, normally populated by containerd/vars, is left empty.

This change sets `allow_duplicates: true` for `containerd-common` which fixes the currently failing containerd tasks if docker was detected and removed in the same play.
2022-12-08 15:58:18 -08:00
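`allow_duplicates` is a standard Ansible role-metadata flag, so the change described above amounts to a one-liner along these lines:

```
# roles/container-engine/containerd-common/meta/main.yml (sketch):
# let the role be imported once per dependent role within the same play.
allow_duplicates: true
```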
Samuel Liu f1d0d1a9fe
[kube-ovn]: update version v1.10.7 (#9527)
* [kube-ovn]: update version

* update readme
2022-12-08 15:58:11 -08:00
Mohamed Zaian c036a7d871
Disable 'Check that IP range is enough for the nodes' when calico is used (#9491) 2022-12-08 10:44:23 -08:00
yanggang 6e63f3d2b4
follow containerd 1.6.12 (#9551)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2022-12-08 07:36:24 -08:00
yanggang 09748e80e9
support containerd 1.6.11 (#9544) 2022-12-06 19:08:37 -08:00
Brian King 44a4f356ba
Terraform Openstack: replace deprecated template provider with supported cloudinit provider (#9536) 2022-12-06 18:28:38 -08:00
Ugur Can Ozturk a0f41bf82a
[metrics_server]: Enabled HA mode by adding 'metrics_server_replicas'… (#9539)
* [metrics_server]: Enabled HA mode by adding 'metrics_server_replicas' variable and adding podAntiAffinity rule

Signed-off-by: Ugur Can Ozturk <57688057+ugur99@users.noreply.github.com>

* [metrics_server]: added namespaces selector

Signed-off-by: Ugur Can Ozturk <57688057+ugur99@users.noreply.github.com>
2022-12-06 18:22:38 -08:00
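A hedged inventory example using the variable named in the commit above:

```
# More than one replica turns on HA; the commit pairs this with a
# podAntiAffinity rule so replicas avoid sharing a node.
metrics_server_replicas: 2
```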
Kay Yan 5ae3e2818b
add-yankay-to-approvers (#9541) 2022-12-05 09:09:04 -08:00
Douglas Landgraf 1a0b81ac64
reset: RedHat based distro with major version >=8 (#9537)
During the reset, restarting the network was not completing in distros
like RHEL/CentOS/AlmaLinux with major version 8 or higher.

Example:
kubespray> ansible-playbook -i inventory/mydomain/hosts.yml reset.yml -b -v
fatal: [mynode]: FAILED! => {"changed": false, "msg": "Could not find the requested service network: host"}

Signed-off-by: Douglas Schilling Landgraf <dlandgra@redhat.com>
2022-12-05 08:57:03 -08:00
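A sketch of the underlying issue, with service names assumed: EL8+ ships NetworkManager, while the legacy network service only exists up to EL7.

```
# Sketch only: restart whichever service actually exists on the distro.
- name: Restart network
  service:
    name: >-
      {{ 'NetworkManager' if ansible_os_family == 'RedHat'
         and ansible_distribution_major_version | int >= 8 else 'network' }}
    state: restarted
```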
ERIK 20d99886ca
Update etcd log-level parameter name (#9540)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-12-05 01:05:03 -08:00
Kay Yan b9fe301036
add-check-for-resolv-to-avoid-coredns-crash (#9502) 2022-12-01 22:37:54 -08:00
Wojciech Marusiak b5844018f2
Corrected vsphere directory (#9534)
The doc points to wrong directory paths for all.yml and vsphere.yml. The wrong paths are `inventory/sample/group_vars/all.yml` and `inventory/sample/group_vars/all/vsphere.yml`, which should be `inventory/sample/group_vars/all/all.yml` and `inventory/sample/group_vars/all/vsphere.yml`.
2022-12-01 22:13:54 -08:00
Kay Yan 30508502d3
update-nginx-version (#9506) 2022-12-01 21:51:55 -08:00
Mohamed Zaian bca601d377
[ingress-nginx] upgrade to 1.5.1 (#9532) 2022-12-01 21:45:54 -08:00
Mohamed Zaian 65191375b8
[etcd] make etcd 3.5.6 default (#9520) 2022-12-01 14:41:53 -08:00
ERIK a534eb45ce
Update calico image tag (#9529)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-12-01 03:18:27 -08:00
tu1h e796f08184
update dashboard image repo to remove arch flag (#9530)
Signed-off-by: lihai.tu <lihai.tu@daocloud.io>
2022-12-01 01:42:26 -08:00
Kenichi Omichi ed38d8d3a1
Add ingress-nginx check for updating README (#9533)
To detect the version mismatch.
2022-12-01 01:16:27 -08:00
Fredrik Liv 07ad5ecfce
[upcloud] Fixed issue where DNS would be blocked while using allowlist (#9510)
* [upcloud] Fixed issue where DNS would be blocked while using allowlist

* Missed one NTP rule
2022-11-30 21:36:26 -08:00
Kay Yan 4db5e663c3
fix-mistake-regex-for-resolv-conf (#9523) 2022-11-30 03:48:56 -08:00
rtsp 529faeea9e
[cert-manager] Upgrade to v1.10.1 (#9512) 2022-11-29 07:17:26 -08:00
ERIK 47510899c7
Update the number of nofile limits in containerd (#9507)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-25 15:12:04 -08:00
Ayoub Ed-dafali 4cd949c7e1
Add missing zone input variable - Exoscale (#9495)
* Add missing zone input variable

* Fix terraform formatting
2022-11-24 16:30:04 -08:00
Kenichi Omichi 31d7e64073
Specify kubespray version for docker run (#9519)
When operating kubespray from the kubespray image with docker run,
we need to check out the specific kubespray version matching the image,
because the sample inventory contains the kubernetes version, and the
version from the master branch may not be supported by the released
kubespray, for example.
2022-11-24 08:34:06 -08:00
蒋航 7c1ee142dd
update envoy image to v1.22.5 (#9513)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-11-23 19:26:05 -08:00
蒋航 25e86c5ca9
Update etcd image tag (#9516)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-11-23 18:22:04 -08:00
ERIK c41dd92007
Clean up cilium-init image (#9508)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-23 09:06:20 -08:00
ERIK a564d89d46
Update the tag of cilium hubble related images (#9509)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-21 20:14:14 -08:00
Kay Yan 6c6a6e85da
update-coredns-version (#9503) 2022-11-18 20:16:29 -08:00
Robin Wallace ed0acd8027
[openstack cloud controller] bump to v1.25.3 (#9500) 2022-11-18 04:26:31 -08:00
ERIK b9a690463d
Add docker support for openEuler linux (#9498)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-17 18:18:30 -08:00
Kenichi Omichi cbf4586c4c
Specify Quick mode for sonobuoy test (#9499)
The certified-conformance mode took 2+ hours, which was too long
compared with the Quick mode specified previously.
So this updates the mode back to Quick.
2022-11-16 21:54:39 -08:00
ERIK c3986957c4
Update runsc checksum (#9493)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-16 00:52:48 -08:00
ERIK 8795cf6494
Add support for the OpenEuler Linux (#9494)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-16 00:48:49 -08:00
yanggang 80af8a5e79
upgrade containerd_version to 1.6.10 (#9492)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2022-11-15 03:58:41 -08:00
Kenichi Omichi b60f65c1e8
Update sonobuoy version (#9485)
The latest version of sonobuoy is v0.56.11.
This updates the version to the latest.

As the file name implies, this makes it use certified-conformance mode
explicitly for the latest version of sonobuoy.
2022-11-15 00:46:41 -08:00
Sergey Putko 943107115a
disable Centos Extras repo creation for OL9 (#9483)
CentOS 9 doesn't exist, and CentOS Stream 9 also doesn't have an extras repo.
2022-11-14 16:28:41 -08:00
Kenichi Omichi ddbe9956e4
Fix paths of offline tool in the doc (#9486)
Clicking the links led to a NotFound page at the time.
This fixes the issue by specifying full paths instead.
2022-11-14 01:27:57 -08:00
Kenichi Omichi fdbcce3a5e
Update offline-environment.md (#9481)
This makes it more readable by explaining clearly what files need
to be downloaded in advance from an online environment.
2022-11-13 18:23:57 -08:00
Mohamed Zaian f007c77641
[etcd] make etcd 3.5.5 default for k8s 1.23 , 1.24 (#9482) 2022-11-12 03:39:56 -08:00
yanggang 9439487219
Add hashes for 1.25.4, 1.24.8, 1.23.14 and make v1.25.4 default (#9479)
Signed-off-by: yanggang <gang.yang@daocloud.io>
2022-11-10 20:00:09 -08:00
emiran-orange df6da52195
Enable check mode in DNS Cleanup tasks (#9472) 2022-11-10 19:58:09 -08:00
cleverhu 6ca89c80af
fix erroneous kubernetes url link (#9475)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
2022-11-10 05:42:55 -08:00
Ilya Margolin 7fe0b87d83
Fix docs for node_labels (#9471) 2022-11-09 04:46:12 -08:00
ERIK 8a654b6955
Add cni bin when installing calico (#9367)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-08 17:46:13 -08:00
Ilya Margolin 5a8cf824f6
[containerd] Simplify limiting number of open files per container (#9319)
by setting a default runtime spec with a patch for RLIMIT_NOFILE.

- Introduces containerd_base_runtime_spec_rlimit_nofile.
- Generates base_runtime_spec on-the-fly, to use the containerd version
  of the node.
2022-11-08 06:44:32 -08:00
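A hedged inventory example using the variable introduced above:

```
# Raise RLIMIT_NOFILE for all containers via the generated base runtime
# spec; the value is illustrative.
containerd_base_runtime_spec_rlimit_nofile: 65535
```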
emiran-orange 5c25b57989
Ability to define options for DNS upstream servers (#9311)
* Ability to define options for DNS upstream servers

* Doc and sample inventory vars
2022-11-08 06:44:25 -08:00
Olivier Lemasle 5d1fe64bc8
Update local-volume-provisioner (#9463)
- Update and re-work the documentation:
  - Update links
  - Fix formatting (especially for lists)
  - Remove documentation about `useAlphaApi`,
    a flag only for k8s versions < v1.10
  - Attempt to clarify the doc
- Update to version 1.5.0
- Remove PodSecurityPolicy (deprecated in k8s v1.21+)
- Update ClusterRole following upstream
  (cf https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/pull/292)
- Add nodeSelector to DaemonSet (following upstream)
2022-11-07 15:28:17 -08:00
Kenichi Omichi a731e25778
Make vagrant-ubuntu20-flannel voting (#9469)
We made all vagrant jobs non-voting because those jobs were not stable.
However, that setting allowed a pull request which completely broke the
vagrant jobs to be merged into the master branch.
To avoid such a situation, this makes one of the vagrant jobs voting.
Let's see the stability of the job.
2022-11-07 00:08:16 -08:00
yanggang 0d6dc08578
upgrade argocd version 2.4.16 (#9467) 2022-11-06 18:04:16 -08:00
ERIK 40261fdf14
Fix iputils install failure in Kylin OS (#9453)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-11-06 17:54:16 -08:00
Cyclinder 590b4aa240
adjust calico-kube-controller to non-hostnetwork pod (#9465)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-11-06 17:34:17 -08:00
ausias-armesto 2a696ddb34
Adding metrics server to use host network (#9444)
* Adding metrics server to use host network

* Externalize value to a variable
2022-11-06 02:38:15 -08:00
lijin-union d7f08d1b0c
remove the set_fact action which raise error in the CI (#9462) 2022-11-03 04:43:38 -07:00
Jiffs Maverick 4aa1ef28ea
Don't use coredns_server in dhclient.conf if nodelocaldns is enabled (#9392) 2022-11-03 02:45:36 -07:00
Fred Rolland 58faef6ff6
Flannel: fix init container image arch (#9461)
The install-cni-plugin image was not updated to the corresponding
arch when building the different DS.

Fixes issue #9460

Signed-off-by: Fred Rolland <frolland@nvidia.com>
2022-11-03 02:41:36 -07:00
cleverhu 34a52a7028
update cilium cli offline download url example (#9458)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
2022-11-02 00:30:47 -07:00
yanggang ce751cb89d
add variable condition snapshot in vSphere CSI (#9429) 2022-11-02 00:22:46 -07:00
cleverhu 5cf2883444
add retry for start calico kube controller (#9450)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
2022-11-02 00:18:45 -07:00
charlychiu 6bff338bad
fix: hubble relay tls error (#9457) 2022-11-02 00:14:46 -07:00
Olivier Lemasle c78862052c
Stop using python 'test' internal package (#9454)
`test` is an internal Python package (see [doc]), and as such should not be
used here. It makes tests fail in some environments.

[doc]: https://docs.python.org/3/library/test.html
2022-10-31 21:08:45 -07:00
William Turner 1f54cef71c
Add variable to set direct routing on flannel VXLAN (#9438) 2022-10-31 13:16:45 -07:00
yanggang d00508105b
Removed PodSecurityPolicy from ingress-nginx (#9448) 2022-10-30 20:08:44 -07:00
lijin-union c272421910
Add UOS linux support (#9432) 2022-10-30 17:16:43 -07:00
biqiang Wu 78624c5bcb
When using cilium CNI, install Cilium CLI (#9436)
Signed-off-by: dcwbq <biqiang.wu@daocloud.io>
2022-10-30 17:02:45 -07:00
biqiang Wu c681435432
Add switch cilium_enable_bandwidth_manager (#9441)
Signed-off-by: dcwbq <biqiang.wu@daocloud.io>
2022-10-28 03:08:31 -07:00
杨刚 4d3f637684
Remove PodSecurityPolicies in Metallb for kubernetes 1.25 (#9442) 2022-10-27 21:46:30 -07:00
Olivier Lemasle 5e14398af4
Upgrade ruamel.yaml.clib to work with Python 3.11 (#9426)
ruamel.yaml.clib did not build with the upcoming Python 3.11.

Cf. https://sourceforge.net/p/ruamel-yaml-clib/tickets/9/

ruamel.yaml.clib==0.2.7 fixes the issue.
2022-10-26 19:52:33 -07:00
蒋航 990f87acc8
Update kube-vip to v0.5.5 (#9437)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-10-26 19:28:32 -07:00
William Turner eeb376460d
Fix inconsistent handling of admission plugin list (#9407)
* Fix inconsistent handling of admission plugin list

* Adjust hardening doc with the normalized admission plugin list

* Add pre-check for admission plugins format change

* Ignore checking admission plugins value when variable is not defined
2022-10-26 00:28:37 -07:00
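A hedged sketch of the normalized format from the commit above, assuming `kube_apiserver_enable_admission_plugins` is the variable in question: one plugin per list entry rather than a single comma-joined string.

```
# Normalized form (plugin names illustrative):
kube_apiserver_enable_admission_plugins:
  - EventRateLimit
  - NodeRestriction
  - PodSecurity
```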
Kay Yan ef707b3461
update-containerd-1.6.9 (#9427) 2022-10-25 16:34:37 -07:00
Mohamed Zaian 2af918132e
Update kubernetes dashboard to 2.7.0 (k8s 1.25 support) (#9425) 2022-10-24 18:32:36 -07:00
Mohamed Zaian b9b654714e
[nerdctl] upgrade to version 1.0.0 (#9424) 2022-10-24 18:28:35 -07:00
Mohamed Zaian fe399e0e0c
[etcd] add 3.5.5 hashes, make it default for k8s 1.25 (#9419) 2022-10-24 00:06:26 -07:00
杨刚 b192053e28
as argocd 2.4.15 is releasesd , update the version (#9420) 2022-10-23 20:34:24 -07:00
杨刚 a84271aa7e
etcd arch can support arm64 and amd64 (#9421) 2022-10-23 20:28:24 -07:00
Wouter Goedhart 1901b512d2
Make the port of kube-vip dynamic based on the kube_apiserver_port (#9414)
variable

Fix wrongly referenced variable on bgp_peers

Fix bgp_peeras field to be a string

Set default value for bgp_peeras
2022-10-23 18:00:24 -07:00
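A hedged inventory sketch for the commit above; `kube_vip_bgp_peeras` is assumed to be the role variable behind the `bgp_peeras` field:

```
# kube-vip now follows the apiserver port instead of hardcoding it,
# and the BGP peer AS must be a quoted string.
kube_apiserver_port: 6443
kube_vip_bgp_peeras: "65000"
```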
ERIK 9fdda7eca8
Fix iputils install failure in Kylin OS (#9416)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-10-21 04:53:51 -07:00
ERIK a68ed897f0
Update kubelet checksum (#9413)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-10-21 04:21:50 -07:00
Florian Ruynat 582ff96d19
Update docker version to 20.10.20 (#9410) 2022-10-20 18:45:15 -07:00
Kenichi Omichi 0374a55eb3
Specify securityContext for cert-manager (#9404)
On hardening environments, cert-manager pods could not be created
from the corresponding deployments. This adds the securityContext
to solve the issue.
2022-10-20 00:57:08 -07:00
Kay Yan ccbe38f78c
make-kube-1.25-default (#9364) 2022-10-20 00:56:57 -07:00
Vladimir 958840da89
Add var for control initialDelaySeconds in nginx ingress probe (#9405)
Signed-off-by: Zemtsov Vladimir <vl.zemtsov@gmail.com>
2022-10-19 21:20:56 -07:00
Cristian Calin 1530411218
use cri-o from upstream instead of kubic/OBS (#9374)
* [cri-o] use cri-o from upstream instead of kubic/OBS

* [cri-o] add proper molecule coverage

* [skopeo] download skopeo from upstream build

* [cri-o] clean up legacy deployments

* disable cri-o per-distribution variables
2022-10-19 05:47:05 -07:00
Kenichi Omichi e5ec0f18c0
Add packet_ubuntu20-calico-aio-hardening (#9359)
To verify the hardening method always works.
The configuration comes from docs/hardening.md

Fix yaml format of hardening.yml

Add condition to skip 040 test for hardening
2022-10-19 05:35:04 -07:00
Mohamed Zaian 0f44e8c812
[ingress-nginx] upgrade to 1.4.0 (#9403) 2022-10-18 16:53:00 -07:00
Kay Yan 1cc0f3c8c9 mirror-for-china 2022-10-18 09:17:42 +02:00
Maxime Leroy d9c39c274e
fix(defaults): wrong cri_socket path for containerd (#9401) 2022-10-18 00:15:18 -07:00
Kenichi Omichi c38fb866b7
Update securityContext of netchecker (#9398)
To run netchecker with necessary privilege,
this updates the securityContext.
2022-10-17 19:11:18 -07:00
Mohamed Zaian 5ad1d9db5e
[kubernetes] Add hashes for 1.25.3, 1.24.7, 1.23.13 and make v1.24.7 default (#9397) 2022-10-17 05:59:07 -07:00
Kay Yan 32f3d92d6b
Remove PodSecurityPolicies in Calico (#9395) 2022-10-17 05:51:07 -07:00
Kenichi Omichi 72b45eec2e
Use agnhost instead of busybox for network test (#9390)
busybox container requires root permission for ping.
For testing the hardening method in CI, we need to switch to another image
which doesn't require root permission for network testing.
On the kubernetes/kubernetes repo, we are using agnhost which doesn't
require it. So this makes the test use the agnhost image.

In addition, this updates the test manifest to specify securityContext
without any privilege.
2022-10-14 06:10:46 -07:00
Cristian Calin 23716b0eff
don't define kubeadm_patches by default (#9372) 2022-10-14 01:20:46 -07:00
Kay Yan 859df84b45
remove-psp-in-flannel (#9365) 2022-10-14 00:16:47 -07:00
Kay Yan 131bd933a6
Fix ensure ping package error in fedora CoreOS & Flatcar (#9370)
* fix-ensure-package-in-coreos

* clean blank line
2022-10-13 16:54:46 -07:00
Unai Arríen 52904ee6ad
Avoid MetalLB speaker image download when MetalLB speaker is disabled (#9248)
* Avoid MetalLB speaker image download when metallb_speaker_enabled is set to false

* Move metallb_speaker_enabled var to allow outside metalLB role references

* Move metallb_speaker_enabled var to allow outside metalLB role references

* Improve metallb_speaker_enabled default values
2022-10-13 16:50:47 -07:00
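The resulting knob from the commit above, as a minimal inventory sketch:

```
# With the speaker disabled, its image is also skipped during download.
metallb_speaker_enabled: false
```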
Kay Yan e3339fe3d8
update_calico_doc_for_the_ChecksumOffloadBroken (#9388) 2022-10-13 01:13:00 -07:00
ghostloda 547ef747da
fix helm install with password authentication (#9343) 2022-10-12 23:55:01 -07:00
Kenichi Omichi 63b27ea067
Fix YAML format in hardening.md (#9387)
When trying to add a hardening CI job by copying configuration from
hardening.md, the yamllint CI job detected an invalid format.
This fixes it for maintaining the CI job.
2022-10-12 23:49:01 -07:00
ERIK bc5881b70a
Add the cilium hubble images to download role (#9376)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-10-12 23:45:00 -07:00
Kenichi Omichi f4b95d42a6
Add note for containerd oom_score (#9384)
When we saw 0 as the default value of containerd_oom_score, we
wondered why the value was not -999.
This adds a note to explain it.
2022-10-11 21:49:00 -07:00
Unai Arríen ef76a578a4
Change dns upstream condition for nodelocaldns (#9378) 2022-10-11 00:47:02 -07:00
Piotr Kowalczyk 3b99d24ceb
Fix: install calico-kube-controller on kdd (#9358)
* Fix: install policy controller on kdd too

* Remove the calico_policy_version condition altogether

* Install policy controller both on canal and calico under same condition
2022-10-10 19:45:01 -07:00
Kay Yan 4701abff4c
upgrade-api-version-for-PodDisruptionBudget (#9369) 2022-10-10 17:51:02 -07:00
Joe Siponen 717b8daafe
Download coredns image to all hosts in k8s_cluster (#9316)
Coredns image must be available everywhere as it
may be rescheduled to a non-control-plane node.
2022-10-08 05:03:19 -07:00
Kevin Huang c346e46022
fix(cinder-csi-nodeplugin): Remove the pods-cloud-data volume (#9362) 2022-10-08 01:23:19 -07:00
Kenichi Omichi 24632ae81b
Add check_typo job (#9361)
To automatically block merging pull requests which contain typos.
2022-10-07 02:21:53 -07:00
JSpon befde271eb
Use hostname override in post-remove role, just as pre-remove role does (#9360) 2022-10-06 15:03:52 -07:00
Huang Chen-Yi d689f57c94
Features/support kubeadm patches v1beta3 (#9326)
* Support kubeadm patches in v1beta3

* Update kubeadm patches sample files in inventory

* Fix pre-commit syntax

* Set kubeadm_patches enabled to false in sample inventory
2022-10-06 00:39:52 -07:00
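A hedged sketch of the sample-inventory structure referenced above, with keys assumed from the commit bullets:

```
# Patches in source_dir are copied to dest_dir on the node and handed to
# kubeadm via the v1beta3 patches mechanism.
kubeadm_patches:
  enabled: false
  source_dir: "{{ inventory_dir }}/patches"
  dest_dir: "{{ kube_config_dir }}/patches"
```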
William Turner ad3f503c0c
Fix default value for kubelet_secure_addresses (#9355) 2022-10-06 00:35:51 -07:00
Kay Yan ae6c780af6
add-Kubean (#9352) 2022-10-04 06:26:23 -07:00
Eugene Artemenko 8b9cd3959a
Add possibility to skip adding load balancer name in the hosts file (#9331) 2022-10-04 06:26:16 -07:00
Emin AKTAS dffeab320e
feat: add a paramater to disable host nameservers (#9357)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
2022-10-04 06:22:17 -07:00
Kay Yan 999586a110
sysctl_additional (#9351) 2022-10-02 23:06:14 -07:00
Kenichi Omichi f8d5487f8e
Remove versions from setting-up-your-first-cluster (#9353)
We are maintaining version info on the README.md, and it is not
necessary to maintain that on setting-up-your-first-cluster.md
2022-09-30 06:02:29 -07:00
Hugo Blom 4189008245
Try to fix issue where ports don't get an IP assigned (#9345)
Co-authored-by: Jonathan Süssemilch Poulain <jonathan@sofiero.net>
2022-09-30 00:48:29 -07:00
Kay Yan 44115d7d7a
support-kube-1.25 (#9260)
Co-authored-by: Rene Luria <rene.luria@infomaniak.com>
2022-09-29 23:34:30 -07:00
Florian Ruynat 841e2f44c0
Remove references to 1.22 (#9342) 2022-09-28 14:10:29 -07:00
Hugo Blom a8e4984cf7
Add missing permissions to openstack cc (#9335)
Add missing permissions to Openstack cloud controller to make sure controller runs as intended
2022-09-27 22:19:35 -07:00
Hugo Blom 49196c2ec4
[Openstack] Add bastion_allowed_ports to allow custom security group rules on bastion node (#9336)
* make it possible to configure bastion remote ips

* Update README.md
2022-09-27 22:03:35 -07:00
Rene Luria 3646dc0bd2
fix: remove trailing backslash and yaml indent (#9339)
* fix: remove trailing backslash

* fixed indent in cilium config template
2022-09-27 19:45:35 -07:00
Alex 694de1d67b
update README to reference docker v2.20.0 tag (#9334) 2022-09-27 19:41:36 -07:00
biqiang Wu 31caab5f92
Fix: The Hubble certificate is faulty because the cluster name is hard coded (#9340)
Signed-off-by: dcwbq <biqiang.wu@daocloud.io>
2022-09-27 05:57:52 -07:00
ERIK 472996c8b3
update pause image version (#9337)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-09-27 00:49:52 -07:00
Shelming.Song d62c67a5f5
allow user to set env: FELIX_MTUIFACEPATTERN in calico-node.yml (#9330) 2022-09-26 21:57:45 -07:00
Federico Cucinella e486151aea
cloud-provider-openstack: upgrade 1.22.0 to 1.23.4 (#9332) 2022-09-26 17:35:46 -07:00
Florian Ruynat 9c407e667d
Update kubespray version following release (#9333) 2022-09-26 17:31:46 -07:00
Ho Kim 18efdc2c51
Fix typos in calico (#9327) 2022-09-26 00:11:44 -07:00
Zhong Jianxin 6dff39344b
preinstall: Add nodelocaldns to supersede_nameserver if enabled (#9282)
When a machine that uses dhclient and resolvconf reboots, this will keep /etc/resolv.conf
close to the one before the reboot
2022-09-25 20:19:44 -07:00
Robin Wallace c4de3df492
upcloud csi driver: bump version to v0.3.3 (#9317) 2022-09-24 13:18:04 -07:00
Ilya Margolin f2e11f088b
Hotfix containerd restart (#9322) 2022-09-24 13:14:04 -07:00
Victor Morales 782f0511b9
Define ostree variable for runc (#9321)
The ostree variable was not defined previously, raising an error when
the runtime tries to read it.
2022-09-24 13:00:11 -07:00
Kevin Huang fa093ee609
feat(docs/openstack.md): Put Additional step needed when using calico or kube-router in own section (#9320) 2022-09-24 13:00:04 -07:00
Samuel Liu 612bcc4bb8
add liupeng0518 to approvers list (#9313) 2022-09-24 12:52:05 -07:00
Florian Ruynat 4ad67acedd
Move back vsphere csi to kube-system ns (#9312) 2022-09-23 10:46:26 -07:00
Kei Kori 467dc19cbd
support removing options in resolvconf with tab separator (#9304) 2022-09-23 10:42:27 -07:00
Ilya Margolin 726711513f
[containerd] Allow configuring base_runtime_spec per containerd runtime (#9302)
and supply a default runtime spec.
2022-09-23 10:38:27 -07:00
Emin AKTAS 9468642269
feat: allows users to have more control on DNS (#9270)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
2022-09-23 10:28:26 -07:00
Samuel Liu d387d4811f
replace createhome (#9314) 2022-09-23 00:26:39 -07:00
Kay Yan 1b3c2dab2e
add_max_concurrent_in_coredns (#9307) 2022-09-22 04:27:03 -07:00
Mohamed Zaian 76573bf293
[kubernetes] Add hashes for 1.24.6, 1.22.15, 1.23.12 and make v1.24.6 default (#9308) 2022-09-22 04:13:03 -07:00
Kay Yan 5d3326b93f
add-ping-package (#9284) 2022-09-21 23:55:05 -07:00
Mohamed Zaian 68dac4e181
[flannel] update to v0.19.2 & make it default (#9296) 2022-09-21 23:51:04 -07:00
Ilya Margolin 262c96ec0b
Remove duplication in template (#9301)
by concatenating default and additional runtimes
2022-09-21 08:33:15 -07:00
Mohamed Zaian 2acdc33aa1
[helm] upgrade to 3.9.4 (#9298) 2022-09-20 04:37:20 -07:00
Krystian Młynek 8acd33d0df
Calico: add wireguard support for Rocky Linux 9 (#9287) 2022-09-20 00:29:20 -07:00
pingrulkin a2e23c1a71
vsphere-csi: add nodeAffinity to daemonset (#9293) 2022-09-19 17:47:22 -07:00
rtsp 1b5cc175b9
[cert-manager] Upgrade to v1.9.1 (#9295) 2022-09-19 17:43:22 -07:00
Mohamed Zaian a71da25b57
[argocd] update argocd to v2.4.12 (#9297) 2022-09-19 17:37:22 -07:00
Vadim 5ac614f97d
fix duplicate field in ingress-nginx template (#9285) 2022-09-19 03:03:22 -07:00
ErmalKristo b8b8b82ff4
Adds support for multiple architectures to yq (#9288) 2022-09-19 02:14:38 -07:00
Necatican Yıldırım 7da3dbcb39
Cilium 1.12 Upgrade (#9225)
* Drop support for Cilium < 1.10

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Synchronize Cilium templates for 1.11.7

Signed-off-by: necatican <contact@necatican.com>

* Set Cilium v1.12.1 as the default version

Signed-off-by: necatican <contact@necatican.com>

2022-09-19 02:14:31 -07:00
Mohamed Zaian 680293e79c
[kubernetes] Add hashes for 1.24.5, 1.22.14, 1.23.11 and make v1.24.5 default (#9286) 2022-09-19 02:10:31 -07:00
Mahdi Abbasi 023b16349e
Add variable for the vsphere-csi namespace (#9278) 2022-09-15 02:01:23 -07:00
lijin-union c4976437a8
Fix typos in docs (#9276) 2022-09-15 00:09:22 -07:00
Kay Yan 97ca2f3c78
add-timezone-support (#9263) 2022-09-14 21:11:22 -07:00
niesel e76385e7cd
Update offline.yml (#9274)
Change "ubuntu_repo" to "debian_repo" for containerd_debian_repo_base_url and containerd_debian_repo_gpgkey
2022-09-13 16:55:01 -07:00
ERIK 7c2fb227f4
Add LimitMEMLOCK parameter configuration in containerd.service (#9269)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-09-13 02:51:06 -07:00
ghostloda 08bfa0b18f
Upgrade ingress nginx webhook to 1.3.0 (#9271) 2022-09-13 01:47:05 -07:00
Ho Kim 952cad8d63
Remove mutual exclusivity in calico: NAT and router mode (#9255)
* Add optional NAT support in calico router mode

* Add a blank line in front of lists

* Remove mutual exclusivity: NAT and router mode

* Ignore router mode from NAT

* Update calico doc
2022-09-13 00:19:07 -07:00
rptaylor 5bce39abf8
add optional parameter extra_groups for k8s_nodes (#9211) 2022-09-13 00:13:08 -07:00
cleverhu fc57c0b27e
fix: numeric node names can't be added (#9266)
Signed-off-by: cleverhu <shouping.hu@daocloud.io>
2022-09-13 00:09:05 -07:00
Samuel Liu dd4bc5fbfe
[etcd] Sometimes, we do not need to run etcd role on all nodes. (#9173)
* WIP: sometimes, we do not run etcd

* fix ansible lint

* like calico (kdd) cni, no need to run etcd
2022-09-09 01:29:22 -07:00
Mohamed Zaian d2a7434c67
[ingress-nginx] upgrade to 1.3.1 (#9264) 2022-09-09 00:37:23 -07:00
Kenichi Omichi 5fa885b150
Remove unused cri_dockerd_enabled configuration (#9259)
Since the commit fad296616c cri_dockerd_enabled
has not been used. But the packet_ubuntu22-aio-docker.yml still contains
the configuration and causes confusion.
This removes the configuration for cleanup.
2022-09-08 00:06:05 -07:00
ghostloda f3fb758f0c
Remove useless file (#9258) 2022-09-07 17:10:49 -07:00
Krystian Młynek 6386ec029c
add retries for restart of kube-apiserver (#9256)
* add retries for restart of kube-apiserver

* change var name
2022-09-07 16:48:49 -07:00
Ho Kim ad7cefa352
Ignore deleting nodes that are not in cluster (#9244) 2022-09-05 19:50:54 -07:00
Ho Kim 09d9bc910e
Fix typos in calico comments (#9254) 2022-09-05 18:46:54 -07:00
Kay Yan e2f1f8d69d
add-Rocky-9-support (#9212) 2022-09-04 16:54:36 -07:00
Michael Schmitz be2bfd867c
Add Support for Rewrite Plugin to CoreDNS/NodelocalDNS (#9245) 2022-09-03 16:16:35 -07:00
lou-lan 133a7a0e1b
Add featureDetectOverride configration of calico (#9249) 2022-09-02 04:58:05 -07:00
ERIK efb47edb9f
Update kubespray version to v2.19.1 (#9241)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-09-01 23:36:05 -07:00
Kay Yan 36bec19a84
add-yankay-to-reviewers (#9247) 2022-09-01 03:47:05 -07:00
Cristian Calin 6db6c8678c
disable kubelet_authorization_mode_webhook by default (#9238) 2022-08-31 04:53:00 -07:00
Florian Ruynat 5603f9f374
Update security contacts file (#9235) 2022-08-30 22:43:00 -07:00
蒋航 7ebb8c3f2e
make calico installation more stable (#9227)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-08-30 21:13:01 -07:00
Alessio Greggi acb6f243fd
feat: add kubelet systemd service hardening option (#9194)
* feat: add kubelet systemd service hardening option

* refactor: move variable name to kubelet_secure_addresses

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

* docs: add diagram about kubelet_secure_addresses variable

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2022-08-30 11:18:55 -07:00
tasekida 220f149299
Fix abort because calicoctl.sh is not a full path (#9217) 2022-08-30 08:07:02 -07:00
Florian Ruynat 1baabb3c05
Fix cloud_init files for different distros (#9232) 2022-08-30 08:03:02 -07:00
Florian Ruynat 617b17ad46
Fix kube_ovn_hw_offload value (#9218) 2022-08-30 03:21:01 -07:00
lijin-union 8af86e4c1e Fix typo. 2022-08-30 11:30:57 +02:00
kakkotetsu 9dc9a670a5
add runc v1.1.4 (#9230) 2022-08-30 02:01:01 -07:00
Kay Yan b46ddf35fc
kube-vip should fail if kube_proxy_strict_arp is false in arp mode (#9223)
* fix-kube-vip-strict-arp

* fix-kube-vip-strict-arp
2022-08-30 00:21:02 -07:00
Chad Swenson de762400ad
Fixes for calico_datastore: etcd (#9228)
It seems that PR #8839 broke `calico_datastore: etcd` when it removed ipamconfig support for etcd mode.

This PR fixes some failing tasks when `calico_datastore == etcd`, but it does not restore ipamconfig support for calico in etcd mode. If someone wants to restore ipamconfig support for `calico_datastore: etcd` please submit a follow up PR for that.
2022-08-29 22:41:00 -07:00
Cristian Calin e60ece2b5e
[CI] remove opensuse Leap from molecule test blocking CI (#9229) 2022-08-29 11:44:49 -07:00
Cristian Calin e6976a54e1
add pre-commit hook to facilitate local testing (#9158)
* add pre-commit hook configuration

* add tmp.md to .gitignore

* describe the use of pre-commit hook in CONTRIBUTING.md

* fix docs/integration.md errors identified by markdownlint

* fix docs/<file>.md errors identified by markdownlint

* docs/azure-csi.md
* docs/azure.md
* docs/bootstrap-os.md
* docs/calico.md
* docs/debian.md
* docs/fcos.md
* docs/vagrant.md
* docs/gcp-lb.md
* docs/kubernetes-apps/registry.md
* docs/setting-up-your-first-cluster.md
* docs/vagrant.md
* docs/vars.md

* fix contrib/<file>.md errors identified by markdownlint
2022-08-24 06:54:03 -07:00
Krystian Młynek 64daaf1887
cri-dockerd: add restart of docker.service (#9205)
* cri-dockerd: add restart of docker.service

* remove enabling of cri-dockerd.socket
2022-08-24 05:50:02 -07:00
Sergey 1c75ec9ec1
do not run etcd role in scale.yml playbook when etcd installed by kubeadm (#9210) 2022-08-24 00:16:24 -07:00
Shelming.Song c8a61ec98c
optimize the format of evictionHard in kubelet-config.yaml template (#9204) 2022-08-23 01:55:24 -07:00
Bishal das aeeae76750
Update vars.md (#9172) 2022-08-22 23:31:24 -07:00
Shelming.Song 30b062fd43
fix one bug in docs/nodes (#9203) 2022-08-22 23:17:23 -07:00
Pavel Chekin 8f899a1101
Fix containerd (<1.7) configuration for insecure registries (#9207)
For the following configuration

```
    containerd_insecure_registries:
      docker.io:
        - dockerhubcache.example.com
```

the rendered /etc/containerd/config.toml contains

```
        [plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".tls]
          insecure_skip_verify = true
```

but it needs to be

```
        [plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhubcache.example.com".tls]
          insecure_skip_verify = true
```
2022-08-22 23:13:23 -07:00
Mostafa Ghadimi 386c739d5b
🌱 Enable cri-dockerd service (#9201)
* 🌱 Enable cri-dockerd service

* 🔨 Fix the task name in order to pass the CI tests
2022-08-22 07:17:43 -07:00
Bishal das fddff783c8
Update vsphere-csi.md (#9170) 2022-08-22 07:13:43 -07:00
Tristan bbd1161147
9035: Make Cilium rolling-restart delay/timeout configurable (#9176)
See #9035
2022-08-22 02:37:44 -07:00
Mohamed Zaian ab938602a9
[kubernetes] Add hashes for 1.24.4, 1.22.13, 1.23.10 and make v1.24.4 default (#9191) 2022-08-21 23:11:44 -07:00
Ho Kim e31890806c
Add 'avoid-buggy-ips' support of MetalLB (#9166) 2022-08-18 21:49:51 -07:00
Tomas Zvala 30c77ea4c1
Add the option to enable default Pod Security Configuration (#9017)
* Add the option to enable default Pod Security Configuration

Enable Pod Security in all namespaces by default with the option to
exempt some namespaces. Without the change only namespaces explicitly
configured will receive the admission plugin treatment.

* Fix the PR according to code review comments

* Revert the latest changes

- leave the empty file when kube_pod_security_use_default, but add comment explaining the empty file
- don't attempt magic at conditionally adding PodSecurity to kube_apiserver_admission_plugins_needs_configuration
2022-08-18 01:16:36 -07:00
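For reference, a hedged sketch of the kind of apiserver admission configuration this option renders; values are illustrative, and the PodSecurityConfiguration apiVersion depends on the kubernetes release (v1beta1 on 1.23/1.24, v1 from 1.25).

```
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1beta1
      kind: PodSecurityConfiguration
      defaults:                         # applied to every namespace by default
        enforce: baseline
        audit: restricted
        warn: restricted
      exemptions:
        namespaces: ["kube-system"]     # explicitly exempted namespaces
```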
GreatLazyMan 175cdba9b1
Add 'flush ip6tables' task in reset role (#9168)
* Add 'flush ip6tables' task in reset role 

If enable_dual_stack_networks is set to true and ip6 is defined, ip6tables rules will be created. But when resetting the kubernetes cluster, kubespray doesn't flush ip6tables.

* [CI] fix molecule tests on opensuse by upgrading to 15.4 (#9175)

* [CI] fix molecule tests on opensuse by upgrading to 15.4

* [opensuse] use correct python cryptography package name depending on distribution version

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2022-08-18 01:12:37 -07:00
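A minimal sketch of such a reset task using Ansible's iptables module; the table list is illustrative.

```
- name: Flush ip6tables
  iptables:
    ip_version: ipv6
    table: "{{ item }}"
    flush: true
  loop: [filter, nat, mangle]
  when: enable_dual_stack_networks | bool
```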
Thearas ea29cd0890
add list nodes rules to cilium-operator clusterrole (#9178) 2022-08-18 01:02:36 -07:00
maxgio92 68653c31c0
docs(kube-vip): fix broken links (#9165)
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-08-18 00:56:55 -07:00
Ho Kim be5fdab3aa
Disable DNSStubListener for Flatcar Linux (#9160)
* Disable DNSStubListener for Flatcar Linux

* Fix missing "Flatcar" condition of os_family
2022-08-18 00:56:49 -07:00
Robin Ramquist f4daf5856e
Subnet setup order fix & Number of master nodes syntax fix (#9159)
* Subnet setup order fix & Number of master nodes syntax fix

* Mistake fix!

* Formatting
2022-08-18 00:56:43 -07:00
Piotr Kowalczyk 49d869f662
Fix CSI drivers issues on Azure (#9153)
* Include missing azuredisk rbac manifest

* Remove missing azure csi manifest

* Remove invalid reference mount to waagent settings

* Use cloud-config secret instead of /etc/kubernetes/cloud_config file
2022-08-18 00:56:36 -07:00
Samuel Liu b36bb9115a
[calico] calico rr supports multiple groups (#9134)
* update calico rr

* fix bgppeer conf

* fix yamllint

* fix ansible lint

* fix calico deploy

* fix yamllint

* fix some typos
2022-08-18 00:52:37 -07:00
ERIK 9ad2d24ad8
Add unsafe_show_logs switch (#9164)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-08-16 18:52:48 -07:00
Kay Yan 0088fe0ab7
add-tar-in-common-package (#9184) 2022-08-16 05:17:18 -07:00
Mohamed Zaian ab93b17a7e
[containerd] upgrade to 1.6.8 , add hashes, containerd now supports ppc64le from v1.6.7 (#9181) 2022-08-16 05:17:07 -07:00
Jin Li 9f1b980844
Update dashboard to 2.6.1 (#9185) 2022-08-16 04:57:08 -07:00
Alessio Greggi 86d05ac180
fix: remove condition for user creation (#9125)
This condition blocks the creation of the `etcd` user in certain conditions.
Specifically, when you have `etcd_deployment_type: kubeadm` and `kube_owner: root`.
Since the `root` user is already present on the system, this will not be a problem (due to the idempotency of ansible).
2022-08-15 23:55:07 -07:00
Peter Pan bf6fcf6347
Upgrade nerdctl from 0.20.0 to 0.22.2 (#9180) 2022-08-15 22:39:07 -07:00
Cristian Calin b9e4e27195
[CI] fix molecule tests on opensuse by upgrading to 15.4 (#9175)
* [CI] fix molecule tests on opensuse by upgrading to 15.4

* [opensuse] use correct python cryptography package name depending on distribution version
2022-08-14 19:02:13 -07:00
Cristian Calin 8585134db4
when ingress-nginx is deployed without a class, we need to use the 'ingress-controller-leader' resource instead of the default 'ingress-controller-leader-nginx' (#9156) 2022-08-09 04:52:50 -07:00
Kenichi Omichi 7e862939db
Add kube-vip check to check_readme_versions.sh (#9155)
To check the kube-vip version between readme.md and the default value
on the role, this updates check_readme_versions.sh
2022-08-06 08:26:20 -07:00
Kay Yan 0d3bd69a17
add-kube-vip-in-readme (#9149) 2022-08-05 08:13:47 -07:00
emiran-orange 2b97b661d8
Move old etcd backup removal after etcd restart (#9147) 2022-08-05 08:09:59 -07:00
emiran-orange 24f12b024d
Argument jsonpath must be single-quoted in "See if node is schedulable" task (#9146) 2022-08-05 08:09:47 -07:00
Florian Ruynat f7d363dc96
Fix crio version in README (#9148) 2022-08-04 08:53:46 -07:00
ERIK 47050003a0
Add docker support for Kylin V10 (#9144)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-08-03 15:03:46 -07:00
Florian Ruynat 4df6e35270 Move oracle7-canal to centos7-canal 2022-08-02 16:55:52 -07:00
Florian Ruynat 307f598bc8 Move flannel to etcd datastore 2022-08-02 16:55:52 -07:00
Florian Ruynat eb10249a75 Align canal templates with calico official ones (k8s datastore) 2022-08-02 16:55:52 -07:00
Marco Fortina b4318e9967
Update to latest local path provisioner version (#9132) 2022-08-01 14:56:28 -07:00
Marco Fortina c53561c9a0
Update to latest registry version (#9133) 2022-08-01 14:52:28 -07:00
ERIK f2f9f1d377
Add kylin OS support (#9078)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-08-01 10:44:29 -07:00
Boris Barnier 4487a374b1
Update Kube-router version to 1.5.1 (#9136)
https://github.com/cloudnativelabs/kube-router/releases/tag/v1.5.1
2022-08-01 00:16:28 -07:00
Aveline 06f8368ce6
Fix Hetzner CCM cluster-cidr (#9127) 2022-07-30 20:18:27 -07:00
Mohamed Zaian 5b976a8d80
[calico] add hashes for v3.22.4 & v3.21.6 (#9129) 2022-07-30 20:14:38 -07:00
Samuel Liu e73803c72c
pid reserved must be str (#9124) 2022-07-30 20:14:27 -07:00
rtsp b3876142d2
[cert-manager] Upgrade to v1.9.0 (#9117) 2022-07-29 00:11:11 -07:00
Mohamed Zaian 9f11946f8a
[argocd] update argocd to v2.4.7 (#9105) 2022-07-27 09:32:29 -07:00
Kenichi Omichi 9c28f61dbd
Enable shellcheck for contrib/ (#9122)
Today we have many contributions to contrib/offline/ and some PRs
contained invalid coding style for those scripts.
This enables shellcheck to catch such invalid coding style easily.
2022-07-26 23:32:32 -07:00
Ader Fu 09291bbdd2
Use a variable for roles of remove-node/post-remove (#9096)
Signed-off-by: ydFu <ader.ydfu@gmail.com>
2022-07-26 10:51:09 -07:00
Florian Ruynat 7fa6314791
Add ignore_assert_error to ubuntu20 etcd ha job (#9108) 2022-07-26 10:45:09 -07:00
Mohamed Zaian 65d95d767a
[helm] upgrade to 3.9.2 (#9115) 2022-07-26 10:41:09 -07:00
Denis Khachyan 8306adb102
update cilium to v1.11.7 (#9119) 2022-07-26 10:33:11 -07:00
ERIK 4b3db07cdb
Fix calicoctl version to v3.23.3 (#9121)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-26 10:29:10 -07:00
gssjl2008 c24a3a3b15
Keep the style consistent (#9116) 2022-07-24 23:46:59 -07:00
Mohamed Zaian aca6be3adf
[calico] add v3.23.3 and make it default (#9112) 2022-07-22 00:01:39 -07:00
rptaylor 9617532561
git ignore .terraform.lock.hcl anywhere (#9109) 2022-07-21 23:07:38 -07:00
Florian Ruynat ff5e487e32 Add retries to api servers response 2022-07-21 23:03:38 -07:00
Florian Ruynat 9c51ac5157 Switch fedora36se to 35 and 35docker to 36 2022-07-21 23:03:38 -07:00
Florian Ruynat 07eab539a6 Add Fedora 36 support and CI, remove Fedora 34 (eol) 2022-07-21 23:03:38 -07:00
Florian Ruynat a608a048ad Update kube-ovn to v1.9.7 2022-07-21 23:03:38 -07:00
Mohamed Zaian 0cfa03fa8a
[flannel] update to v0.18.1 & make it default (#9104) 2022-07-21 00:19:55 -07:00
忘尘 6525461d97
Add reset tasks specific to calico network_plugin (#9103) 2022-07-19 13:15:27 -07:00
Kay Yan f592fa1235
add kube-vip sans (#9099) 2022-07-19 13:11:28 -07:00
Cyclinder 2e1863af78
feat: change default blockSize for calico (#9055)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-07-19 13:05:27 -07:00
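A hedged inventory example for the commit above; `calico_pool_blocksize` is assumed to be the variable behind this default:

```
# A /26 IPAM block gives 2^(32-26) = 64 pod addresses per block.
calico_pool_blocksize: 26
```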
Kay Yan 2a282711df
update-loadbalancers-versions (#9100) 2022-07-19 13:01:28 -07:00
Mohamed Zaian 91073d7379
[kubernetes] make v1.24.3 default (#9101) 2022-07-19 02:58:06 -07:00
Alessio Greggi 3ce5458f32
hardening: Add SeccompDefault admission plugin for kubelet (#9074)
* docs(hardening): add SeccompDefault admission plugin to kubelet feature gates

* fix(kubelet-config): enable config through kubelet_feature_gates

* feat(kubelet): add kubelet_seccomp_default variable
2022-07-19 00:50:07 -07:00
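A hedged inventory sketch combining the variables named in the commit bullets above:

```
# Enable the feature gate and make RuntimeDefault the implicit seccomp
# profile for workloads that don't set one.
kubelet_feature_gates:
  - "SeccompDefault=true"
kubelet_seccomp_default: true
```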
Marco Fortina 98c194735c
[kubernetes] add hashes for v1.22.12, v1.23.9 & v1.24.3 (#9092) 2022-07-19 00:30:19 -07:00
pil57852 626ea64f66
9052 crio add dpkg hold (#9075)
* Update main.yaml

* remove version in dpkg_selection name

* make lint happy

* Fix typo

* add comment / remove useless condition

* remove dpkg hold in reset tasks
2022-07-19 00:30:07 -07:00
Ajarmar 0d32c0d92b
[upcloud] Add firewall default deny policy and port allowlisting (#9058) 2022-07-19 00:18:06 -07:00
Mohamed Zaian ce04fdde72
[ingress-nginx] upgrade to 1.3.0 (#9088)
* This release removes support for Kubernetes v1.19.0
* This release adds support for Kubernetes v1.24.0
* Starting with this release, we will need permissions on the coordination.k8s.io/leases resource for leaderelection lock
2022-07-14 18:46:25 -07:00
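A hedged sketch of the extra RBAC rule the last bullet above refers to (verbs illustrative):

```
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "create", "update"]
```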
ERIK 4ed3c85a88
Fix calicoctl checksums for v3.23.2 (#9087)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-13 14:02:57 -07:00
Peter Pan 14063b023c
Extend DNS memory limit. 170Mi tends to OOM (#9084) 2022-07-13 00:03:37 -07:00
yjqg6666 3d32f0e953
[#9067] archive offline-files and support env-var NO_HTTP_SERVER to skip nginx-running (#9068) 2022-07-12 00:24:52 -07:00
Samuel Liu d821bed2ea
Fix some typo (#9056)
* fix ingress controller task name

* fix calico word

* add check typo
2022-07-11 09:49:48 -07:00
ERIK 058e05df41
Add cri-dockerd url for offline.yml (#9079)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-07-11 06:45:49 -07:00
Mohamed Zaian a7ba7cdcd5
[calico] add v3.23.2 and make it default (#9041) 2022-07-08 10:41:48 -07:00
Kenichi Omichi c01656b1e3
Allow "openSUSE Tumbleweed" to be run (#9072)
The commit 1ce2f04 tried to merge multiple SUSE OS checks including
"openSUSE Leap" and "openSUSE Tumbleweed" into a single SUSE, but
that was not a perfect change.
Then the commit c16efc9 tried to fix it for "openSUSE Leap", but it
didn't take care of "openSUSE Tumbleweed".
Then this adds "openSUSE Tumbleweed" to the OS check.
2022-07-08 04:55:47 -07:00
Emin AKTAS 5071529a74
feat: upgrade cilium and add default variables (#9065)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
Signed-off-by: Emin Aktas <emin.aktas@trendyol.com>
2022-07-07 10:35:34 -07:00
yasintahaerol 6d543b830a
Fix vcloud-csi bug related to #9046 (#9066)
* Fix vcloud-csi bug related to #9046

Signed-off-by: yasintahaerol <yasintahaerol@gmail.com>

* add supervisor-fss-namespace=kube-system flag to vsphere-csi-controller-deployment

Signed-off-by: yasintahaerol <yasintahaerol@gmail.com>
2022-07-07 10:31:35 -07:00
Cyclinder e6154998fd
fix calico tunl0 routes test (#9061)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-07-06 04:52:49 -07:00
rptaylor 01c6239043
increase ansible fact_caching_timeout (#9059) 2022-07-06 01:04:51 -07:00
Emin AKTAS 4607ac2e93
fix(vsphere-csi): remove namespace env variable and set namespace as kube-system (#9046)
Signed-off-by: eminaktas <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>

2022-07-06 01:00:50 -07:00
Kay Yan 9ca5632582
fix-docker-option-in-centos-arm64 (#9047) 2022-07-05 08:26:47 -07:00
Mohamed Zaian 51195212b4
[argocd] update argocd to v2.4.3 (#9050) 2022-07-05 08:22:47 -07:00
Kenichi Omichi 7414409aa0
Add target components on check_readme_versions.sh (#9045)
This adds target components on check_readme_versions.sh after
merging https://github.com/kubernetes-sigs/kubespray/pull/9044
In addition, this fixes typo on check_readme_versions.sh

This adds `foo_version` variables for some components because
check_readme_versions.sh verifies the corresponding version for
`<component name>_version` from main.yml. This change also makes
consistency in the main.yml. In the long term, we will be able to
remove the existing `foo_image_tag` variables, but not now,
for backwards compatibility for users.
2022-07-05 08:02:47 -07:00
Kay Yan adfd77f11d
add-test-for-kubeadm-etcd-deployment (#9007) 2022-07-05 07:58:47 -07:00
Kenichi Omichi f3ea8cf45e
Add Rocky Linux 8 support for vagrant (#8905)
To test Kubespray on Rocky Linux 8 with vagrant, this adds it to
the Vagrantfile.
2022-07-05 07:50:47 -07:00
h9-HSFRQDH 3bb9542606
Adding support for node & pod pid limit (#9038) 2022-07-05 00:20:48 -07:00
Kay Yan 1d0b3829ed
remove-etcd-unsupported-arch (#9049) 2022-07-04 05:39:24 -07:00
Samuel Liu a5d7178bf8
[docs] update supported components (#9044) 2022-06-29 23:50:07 -07:00
Calin Cristian Andrei cbef8ea407 [etcd] drop hashes for 3.5.2 2022-06-29 09:44:06 -07:00
Calin Cristian Andrei 2ff4ae1f08 [etcd] drop hashes for 3.5.1 2022-06-29 09:44:06 -07:00
Calin Cristian Andrei edf7f53f76 [etcd] add etcd 3.5.4 and make it the default for 1.24.x 2022-06-29 09:44:06 -07:00
Samuel Liu f58816c33c
[krew] update krew (#9043) 2022-06-29 09:02:06 -07:00
忘尘 1562a9c2ec
add missing verbs (#9032) 2022-06-29 00:18:05 -07:00
Kenichi Omichi 6cd243f14e
Add component version check for README.md (#9042)
During code review, reviewers needed to verify that README.md was also
updated when the pull request updated component versions.
This adds the corresponding check to reduce the reviewers' burden.
2022-06-29 00:14:05 -07:00
Kay Yan 4b03f6c20f
add-managed-ntp-support (#9027) 2022-06-28 13:15:34 -07:00
boeto d0a2ba37e8
update deprecated syntax (#9040)
* `ansible.builtin.include` removed in version 2.16

Read the `ansible.builtin.include DEPRECATED` doc:

 https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_module.html#deprecated

* Update integration.md
2022-06-28 13:11:34 -07:00
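The mechanical replacement, sketched with an illustrative file name:

```
# Before (ansible.builtin.include, removed in ansible-core 2.16):
- include: some_tasks.yml
# After, inside a role or task list:
- include_tasks: some_tasks.yml
# ...and import_playbook replaces include at play level.
```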
Samuel Liu e8ccbebd6f
add ingress nginx webhook (#9033)
* add ingress nginx webhook

* fix ingress nginx template
2022-06-28 11:55:35 -07:00
Kay Yan d4de9d096f
fix-the-issue-of-miss-the-etcd-user (#9016) 2022-06-28 09:13:58 -07:00
Tom Stian Berget e1f06dd406
Add support for the updated (startup|liveness|readiness)Probe.Port numbers in Cilium (#9031) 2022-06-27 11:00:59 -07:00
rptaylor 6f82cf12f5
let containerd_default_runtime be undefined by default (#9026) 2022-06-27 10:56:59 -07:00
Calin Cristian Andrei ca8080a695 [crun] drop old crun versions 1.2 and 1.3 2022-06-27 10:36:59 -07:00
Calin Cristian Andrei 55d14090d0 [crun] add 1.4.5 and make it the default 2022-06-27 10:36:59 -07:00
rtsp da8498bb6f
[cert-manager] Upgrade to v1.8.2 (#9029) 2022-06-24 23:50:58 -07:00
orange-llajeanne b33896844e
apply calico bgp peer definition task to all nodes, but delegate to (#8974)
first control plane node
2022-06-24 19:42:57 -07:00
Calin Cristian Andrei ca212c08de [runc] drop hashes for 1.0.2 and 1.0.3 2022-06-23 09:23:43 -07:00
Calin Cristian Andrei 784439dccf [runc] make 1.1.3 the new default 2022-06-23 09:23:43 -07:00
Calin Cristian Andrei d818c1c6d9 [runc] add hashes for 1.1.3 2022-06-23 09:23:43 -07:00
Calin Cristian Andrei b9384ad913 [runc] add hashes for 1.1.2 2022-06-23 09:23:43 -07:00
Cristian Calin 76b0cbcb4e
bump pause container to 3.6 (#9024)
* [pod-infra] bump pod infra container version to 3.6

* [cri-dockerd] align pod infra container image with other CRIs
2022-06-23 01:43:44 -07:00
Florian Ruynat 6bf3306401
Fixed concatenate str & int in auto_renew_certificates_systemd_calendar var (#8979) 2022-06-22 11:55:43 -07:00
Robin Wallace bf477c24d3 Change from deprecated variable 2022-06-22 00:37:44 -07:00
Robin Wallace 79f6cd774a create snapshot-controller only if needed 2022-06-22 00:37:44 -07:00
Cyclinder c3c9a42502
support multus multi-architecture installation (#9012)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-06-21 10:56:26 -07:00
ERIK 4a92b7221a
add manage offline files script (#8956)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-21 03:49:43 -07:00
Sébastien Masset 9d5d945bdb
[MASTER] Add missing configuration for extra tolerations (#8908)
* Added new configuration item for extra tolerations in policy controllers

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>

* Added new configuration item for extra tolerations in DNS autoscaler

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>

* Aligned existing handling of extra DNS tolerations

Signed-off-by: Sébastien Masset <smt.masset@gmail.com>
2022-06-20 01:36:06 -07:00
Christoffer Anselm 475ce05979
Fix kubectl download for v1.23.8 amd64 (#9002)
kubectl_checksums for amd64 v1.23.8 was missing the last digit
2022-06-20 01:28:06 -07:00
Samuel 57d7029317
ansible_maxversion_exclusive (#8919) 2022-06-20 01:24:06 -07:00
Mohamed Zaian e4fe679916 [kubernetes] make v1.24.2 default 2022-06-17 11:08:33 -07:00
Mohamed Zaian 123632f5ed [kubernetes] add hashes for v1.22.11, v1.23.8 & v1.24.2 2022-06-17 11:08:33 -07:00
Calin Cristian Andrei 56d83c931b [CI] use debian-11 image with more disk space to ensure successful upgrade tests 2022-06-17 08:00:32 -07:00
Calin Cristian Andrei a22ae6143a [CI] ensure upgrade tests cover defaults (containerd currently) 2022-06-17 08:00:32 -07:00
Calin Cristian Andrei a1ec0571b2 [nerdctl] upgrade to 0.20.0 2022-06-17 08:00:32 -07:00
Calin Cristian Andrei 2db39d4856 [containerd] add hashes for 1.5.12, 1.5.13, 1.6.5 and 1.6.6 and make 1.6.6 the new default 2022-06-17 08:00:32 -07:00
Citrullin e7729daefc Add assertion for IPv6 in verify settings
Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-06-17 10:36:43 +02:00
Alessio Greggi 97b4d79ed5
feat: make kubernetes owner parametrized (#8952)
* feat: make kubernetes owner parametrized

* docs: update hardening guide with configuration for CIS 1.1.19

* fix: set etcd data directory permissions to be compliant to CIS 1.1.12
2022-06-17 01:34:32 -07:00
Kay Yan 890fad389d
suggest-to-use-nft-in-centos8 (#8987) 2022-06-17 01:30:32 -07:00
Kay Yan 0c203ece2d fix-broken-link-in-readme 2022-06-17 09:29:45 +02:00
Florian Ruynat 9e7f89d2a2 Remove forgotten 1.21 references 2022-06-16 08:55:38 +02:00
Calin Cristian Andrei 24c8ba832a [kubernetes] drop support for configuring insecure apiserver 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei c2700266b0 [download] fix dependencies for downloads 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 2cd8c51a07 [kubeadm] use v1beta3 configuration version
* extra admission controls now don't have a version in their file names
  eventratelimit.v1beta2.yaml.j2 -> eventratelimit.yaml.j2
* cri_socket variable includes the unix:// prefix to be conformant with
  upstream
2022-06-15 00:57:20 -07:00
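A hedged sketch of the resulting kubeadm configuration fragment, with a containerd socket path for illustration:

```
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
```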
Calin Cristian Andrei 589823bdc1 [CI] remove docker stand-alone molecule test 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 5dc8be9aa2 [CI] kube 1.24 requires at least 1775Mi of memory, might as well leave the default of 2048 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei fad296616c [docker] use cri-dockerd instead of dockershim for any kubernetes version deployed with docker as the container_manager 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei ec01b40e85 [cri_dockerd] upgrade cri_dockerd to 0.2.2 for 1.24 compatibility
* use new artifact release name
* enable cri-dockerd dual stack support if enable_dual_stack_networks
2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 2de5c4821c [calico] clean up workarounds for older versions 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 9efe145688 [calico] make 3.23.1 the default and drop 3.20.x and 3.19.x 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 51bc64fb35 [cri-o] support cri-o 1.24 with kube 1.24 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 6380483e8b [kubeconfig] generate admin kube config from /etc/kubernetes/admin.conf instead of the workaround of using kubeadm init phase kubeadm admin which fails with cri-dockerd 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei ae1dcb031f [kubernetes] drop pre 1.22.0 workarounds 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 9535a41187 [kubernetes] make 1.22.0 the minimum version 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei 47495c336b [kubernetes] drop hashes for 1.21.x 2022-06-15 00:57:20 -07:00
Calin Cristian Andrei d69d4a8303 [kubernetes] make 1.24.1 the new default 2022-06-15 00:57:20 -07:00
Kay Yan ab4d590547 add-ubuntu2204-in-readme 2022-06-15 09:51:59 +02:00
Kay Yan 85271fc2e5
add-ci-for-ubuntu2204 (#8958) 2022-06-15 00:47:19 -07:00
蒋航 f6159c5677
Update Dockerfile base image (#8975)
Signed-off-by: hang.jiang <hang.jiang@daocloud.io>
2022-06-14 15:15:36 -07:00
rtsp 668b9b026c
[cert-manager] Upgrade to v1.8.1 (#8976) 2022-06-14 15:11:34 -07:00
Viktor Jacynycz 77de7cb785
Expose calico-typha metrics port (#8855) 2022-06-14 07:17:33 -07:00
Dickson Tung e5d6c042a9
Fix regex for replacing http_proxy (#8957) 2022-06-14 07:07:34 -07:00
Ho Kim 3ae397019c
Add arm64 Flatcar OS's pypy bootstrapping (#8959)
- Upgrade pypy's python version to `3.9`
- Upgrade pypy's version to `7.3.9`
2022-06-14 07:03:35 -07:00
Ho Kim 7d3e59cf2e
Remove unneeded socat installation for Flatcar (#8970) 2022-06-14 02:23:34 -07:00
orange-llajeanne 4eb83bb7f6
fixes for docker reset (#8966) 2022-06-14 02:15:34 -07:00
Florian Ruynat 1429ba9a07
Update docker version to 20.10.17 (#8965) 2022-06-14 02:11:33 -07:00
Ho Kim 889454f2bc
Fix typo in calico check (#8969) 2022-06-13 14:10:12 -07:00
orange-llajeanne 2fba94c5e5
fix a typo in the "matallb_auto_assign" variable name (#8949)
* fix a typo in the "matallb_auto_assign" variable name

* add metallb check to fail when deprecated "matallb_auto_assign" variable is defined
2022-06-13 09:40:12 -07:00
Kay Yan 4726a110fc
remove-support-for-ansible-2.9-2.10 (#8951) 2022-06-10 03:35:47 -07:00
Steffen Becker 6b43d6aff2
Proposed fix to Issue 8667 (#8944)
2022-06-09 23:37:46 -07:00
Kenichi Omichi 024a3ee551
Replace callback_whitelist with callbacks_enabled (#8759)
When running molecule jobs, we saw the following warning message:

 [DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names
 to new standard, use callbacks_enabled instead. This feature will be removed
 from ansible-core in version 2.15. Deprecation warnings can be disabled by
 setting deprecation_warnings=False in ansible.cfg.

callbacks_enabled has been added since Ansible 2.11 and Kubespray is using
Ansible 2.12 at master branch. So we can use callbacks_enabled safely to
avoid the warning message.
2022-06-09 13:15:45 -07:00
Kenichi Omichi cd7381d8de
Drop Ansible support for v2.9 and v2.10 (#8925)
Ansible v2.9 and v2.10 are EOL as [1].
This drops those version supports by following the upstream Ansible.

This sets use_ssh_args true always because that is required to use
ssh_args on ansible.cfg on Ansible v2.11 or later[2].

ansible_ssh_host is replaced with ansible_host because ansible_ssh_host
has already been deprecated and centos7 jobs failed due to the
deprecated ansible_ssh_host.

[1]: https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html#ansible-core-changelogs
[2]: https://docs.ansible.com/ansible/latest/collections/ansible/posix/synchronize_module.html#parameter-use_ssh_args
2022-06-09 07:07:42 -07:00
Mathieu Parent f53764f949
calicoctl repo has been merged in calico (#8920) 2022-06-09 07:01:42 -07:00
Kenichi Omichi 57c3aa4560
Merge pull request #8943 from ErikJiang/update-etcd-download-url
update etcd download url in offline.yml
2022-06-08 08:09:48 -07:00
Mohamed Zaian bb530da5c2 [registry] Switch registry to use registry.k8s.io
Please see the conversation here: https://groups.google.com/a/kubernetes.io/g/dev/c/DYZYNQ_A6_c
2022-06-08 14:12:22 +02:00
Ilya Margolin cc6cbfbe71
Allow disabling calico CNI logs with calico_cni_log_file_path (#8921)
* Allow disabling calico CNI logs with calico_cni_log_file_path

Calico CNI logs up to 1 GB if it logs a lot with the current default settings:
log_file_max_size   100  Max file size in MB log files can reach before they are rotated.
log_file_max_age    30   Max age in days that old log files will be kept on the host before they are removed.
log_file_max_count  10   Max number of rotated log files allowed on the host before they are cleaned up.

See https://projectcalico.docs.tigera.io/reference/cni-plugin/configuration#logging

To save disk space, make the path configurable and allow disabling this log by setting
`calico_cni_log_file_path: false`

* Fix markdown

* Update roles/network_plugin/canal/templates/cni-canal.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-06-07 09:22:56 -07:00
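For reference, a minimal group_vars sketch of the new knob; the variable name comes from the PR, and the default location shown is assumed from the upstream Calico default:

  # disable Calico CNI file logging entirely
  calico_cni_log_file_path: false
  # or keep it, at the assumed upstream default location:
  # calico_cni_log_file_path: /var/log/calico/cni/cni.log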
bo.jiang 6f556f5451 update etcd download url in offline.yml
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-07 22:45:28 +08:00
Kenichi Omichi 9074bd297b
Update RELEASE.md (#8937)
If opening https://groups.google.com/g/kubernetes-dev we can see the
following message:

  As of January 2, 2022, this group will be sunset in favor of dev@kubernetes.io.

So this replaces kubernetes-dev@googlegroups.com with the new one.

In addition, this adds the actual steps for creating container images easily.
2022-06-06 23:55:49 -07:00
mahjonp 8030e6f76c
fix 8893#issuecomment-1147154353 (#8933)
Signed-off-by: mahjonp <junpeng.man@gmail.com>
2022-06-06 12:40:21 -07:00
ERIK 27bd7fd737
update kubespray image tag in readme to v2.19.0 (#8934)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-06-06 10:24:21 -07:00
Ho Kim 77f436fa39
Fix: set fallback value of kubelet ip6 (#8858) (#8926)
* Fix: set fallback value of kubelet ip6 (#8858)

* Prune the spurious comma at the end of kubelet_address

- Update `roles/kubernetes/node/defaults/main.yml`

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

* Fix: set fallback value of kubelet ip6 (#8858)

- Apply the lint: 132606368e

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2022-06-06 10:08:21 -07:00
Kenichi Omichi 814760ba25
Use blocks for macvlan tasks for each distribution (#8918)
For code readability, this adds blocks for each distribution.
2022-06-06 07:50:24 -07:00
zhougw 14c0f368b6
the KUBESPRAYDIR defined but never used (#8930)
* fix dir error

* the command line should align
2022-06-06 07:42:23 -07:00
Boris Barnier 0761659a43
Update Kube-router version to 1.5.0 (#8928)
https://github.com/cloudnativelabs/kube-router/releases/tag/v1.5.0
2022-06-06 07:38:34 -07:00
vanyasvl a4f752fb02
Add subjectAltName to calico-apiserver certificate (#8907)
* Add AltName to calico-apiserver certificate

* fix support for centos7 openssl
2022-06-06 07:38:23 -07:00
Mohamed Zaian b2346cdaec
[feat] Upgrade metrics server to v0.6.1 (#8909)
* Metrics Server now requires access to nodes/metrics RBAC resource instead of nodes/stats. See: https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.6.0
* Minimize rbac permissions.
2022-06-06 07:34:37 -07:00
Thearas 01ca7293f5
support reserve ephemeral-storage (#8895) 2022-06-06 07:34:26 -07:00
Florian Ruynat 4dfce51ded
Update dashboard to 2.6.0 (k8s 1.24 support) (#8906) 2022-06-06 16:47:33 +03:00
Kenichi Omichi f82ed24c03
Update KUBESPRAY_VERSION (#8922)
As a step of the release process, this updates KUBESPRAY_VERSION.
Thank you so much for creating and pushing container images of
the new version, floryut!
2022-06-05 22:08:20 +03:00
rtsp 1f65e6d3b5
[ingress-nginx] upgrade to 1.2.1 (#8904) 2022-06-01 00:23:10 -07:00
Kenichi Omichi 9bf7aaf6cd
Update RELEASE.md (#8884)
This updates the RELEASE.md file to make the release process
easier to understand, based on hands-on experience.
2022-06-01 00:23:03 -07:00
Max Gautier 5512465b34
Revert "Set exact user for Kubelet services" (#8872)
This reverts commit e375678674.

The workaround of explicitly specifying root for the kubelet unit was
for pulling images from a private registry. Kubernetes now has a
dedicated mechanism with imagePullSecrets.
2022-06-01 00:19:02 -07:00
Chris Ricker 2f30ab558a
Add 1.24 mappings for etcd and snapshot_controller (#8903)
Map appropriate versions of etcd and snapshot_controller containers with
k8s 1.24
2022-06-01 00:09:02 -07:00
Daniil Muidinov 5c136ae3af
[calico] add 3.22.3 and 3.23.1 (#8897)
* [calico]
* add 3.22.3 and 3.23.1
* set 3.22.3 default
* fix download crd for calico 3.22.3 and above

* update calico README.md
2022-05-31 13:27:23 -07:00
mahjonp c927da00e0
Support cilium ip-masq-agent configuration (#8893)
* fix deploy Cilium with eBPF-based Masquerading failed

Signed-off-by: mahjonp <junpeng.man@gmail.com>

* forget to add the enable-ip-masq-agent flag

Signed-off-by: mahjonp <junpeng.man@gmail.com>
2022-05-31 09:26:53 -07:00
Samuel Liu 1600fd9082
clean up tags (#8880) 2022-05-31 07:52:53 -07:00
Samuel Liu 14acd124bc
fix containerd images download bugs (#8894) 2022-05-31 00:22:53 -07:00
rtsp e3cbbfb9ed
[kubernetes] make 1.23.7 the new default (#8888) 2022-05-29 17:08:51 -07:00
rtsp 5f21e0b58b
Update components version in README.md (#8886) 2022-05-29 14:10:51 -07:00
Alessio Greggi d22204a59f
docs: add hardening guide (#8868) 2022-05-29 12:36:50 -07:00
ERIK 90289b8502
add arch var in dockerfile (#8875)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-29 12:32:51 -07:00
Mohamed Zaian 78aacee21b
[kubernetes] add hashes for 1.24.1 and other versions. (#8876)
* [kubernetes] add hashes for 1.24.1 and other versions.
versions: v1.21.13, v1.22.10, v1.23.7 & v1.24.1

* [kubernetes] make v1.23.7 default
2022-05-27 12:00:42 -07:00
Gleb Galkin f47aca3558
Added |bool for rhel_enable_repos (#8871) 2022-05-26 18:51:55 -07:00
Kenichi Omichi 73fc70dbe8
Delete kube_version v1.20- related code (#8869)
Current Kubespray supports Kubernetes 1.21 or later with
`kube_version_min_required: v1.21.0`

Then the kube_version v1.20-related code is no longer used at all.
This deletes that code for cleanup.
2022-05-25 21:31:22 -07:00
Kenichi Omichi dc2a18e436
Merge pull request #8815 from simplekube-ro/dont_clobber_calico
[calico] don't clobber calico options set by the user
2022-05-24 10:25:48 -07:00
Thearas 82590eb087
fix remove docker-ce.repo failed (#8856) 2022-05-24 05:44:06 -07:00
Ross Kusler 4c97ce747c
Adding support for the kube-router flag --cluster-asn flag (#8837) 2022-05-23 16:39:10 -07:00
Samuel Liu ebbc5ed0ce
add liupeng0518 to reviewers (#8853) 2022-05-23 21:42:14 +03:00
Necatican Yıldırım dc1af5a9c5
[etcd] Add support for setting the request size limit (#8849)
* [etcd] Add extra documentation for `etcd_memory_limit` and `etcd_quota_backend_bytes`

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [etcd] Add support for setting ETCD_MAX_REQUEST_BYTES

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-23 09:36:03 -07:00
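For reference, a hedged group_vars sketch: `etcd_memory_limit` and `etcd_quota_backend_bytes` are named in the PR, while the variable mapping to ETCD_MAX_REQUEST_BYTES is an assumed name:

  etcd_memory_limit: "512M"
  etcd_quota_backend_bytes: "2147483648"  # 2 GiB backend quota
  etcd_max_request_bytes: "1572864"       # assumed variable name; sets ETCD_MAX_REQUEST_BYTES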
irizzant 85bd1eea27
fix(calico): add missing "get" verb (#8847)
Signed-off-by: irizzant <i.rizzante@gmail.com>
2022-05-21 01:20:00 -07:00
Necatican Yıldırım 2b151c6aa2
cni-plugins: upgrade to 1.1.1 (#8852)
Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-21 11:14:16 +03:00
David Louks 93fe3e06ef
Add support for including annotations on aws-ebs-csi-controller (#8779)
* Add support for including annotations on aws-ebs-csi-controller

* update comment to specify role arn
2022-05-20 15:00:00 -07:00
Tamas Pasztor 9d3a894991
Possible remove ippools from cni config (#8845)
* Possible remove ippools from cni config

* Typo

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

* Update cni-calico.conflist.j2

Incorrectly deleted calico forwarding content.

* Update roles/network_plugin/calico/templates/cni-calico.conflist.j2

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>

Co-authored-by: Kenichi Omichi <ken1ohmichi@gmail.com>
2022-05-19 23:45:13 -07:00
Kenichi Omichi 0e6b727e53
Update docs for using venv (#8842)
Because of the many different Linux distributions, it is difficult to
install ansible dependencies system-wide in a stable way.
A part of the Kubespray doc[1] recommends using venv to avoid such issues,
and this applies venv usage to the other parts of the doc.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/setting-up-your-first-cluster.md#set-up-kubespray
2022-05-19 23:39:12 -07:00
Andrey e42a01f203
Fixed systemd-networkd restart for ubuntu 22.04, when using reset.yml (#8841)
* Fixed systemd-networkd restart  for ubuntu 22.04

* fixed systemd-networkd restart for all Ubuntu
2022-05-20 09:34:53 +03:00
Samuel Liu a28b58dbd0
[calico]use ipamconfig instead of calico ipam command (#8839)
* use ipamconfig instead of calico ipam command

* fix ansible lint
2022-05-19 11:13:20 -07:00
orange-llajeanne a26a9ee14f
set apparmor_enabled in netchecker task (#8844) 2022-05-19 10:49:21 -07:00
Kenichi Omichi c09fcd4f92
Skip gathering facts when reset_nodes is false (#8843)
The doc[1] explains we need to specify

  "-e reset_nodes=false -e allow_ungraceful_removal=true"

to delete an offline node. However the task "Gather facts"
also tried to gather facts of the offline node, and the task
failed.
This adds a condition to skip gathering facts when reset_nodes
is false on remove-node.yml.

[1]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md#3-remove-an-old-node-with-remove-nodeyml
2022-05-19 01:04:07 -07:00
Samuel Liu 593359ec77
fix kube-ovn image (#8838) 2022-05-18 08:36:53 -07:00
Maxime Guyot 34ec4d5d40
Move woopstar to emeritus approver (#8809) 2022-05-18 02:36:53 -07:00
Kay Yan 3d8f3bc0b7
Fix the invalid kube vip manifest (#8831)
* add Feature synchronized time checking

* fix-invalid-kube-vip-manifest
2022-05-17 23:48:55 -07:00
Samuel Liu eea7bb7692
only need to run this once (#8833)
calicoctl ipam xx
calicoctl apply xx
2022-05-17 09:52:27 -07:00
Cristian Calin 3a89e31dee
[ansible] update ansible and cryptography requirements to work on ubuntu 22.04 (#8826) 2022-05-16 11:14:17 -07:00
Cristian Calin 0c504e4984
[docs] document support for ansible versions (#8827)
drop note about not supporting ansible 2.9 since we still cover it in
nightly CI
2022-05-16 00:50:17 -07:00
Kenichi Omichi 0bf070c33b
doc: write how to use kata-container for pods (#8817)
kata-containers is not used by default even when kata_containers_enabled is enabled.
This updates the doc to explain how to do that.
2022-05-13 23:15:18 -07:00
Cyclinder dc8ad78206
fix: incorrect condition type (#8822)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-13 14:09:56 -07:00
ERIK 48e938660d
Allow replacement of address prefixes for all images (#8764)
Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
2022-05-13 09:23:14 +03:00
Mohamed Zaian 632d457f78
[ingress-nginx] upgrade to 1.2.0 (#8814) 2022-05-12 09:07:14 -07:00
Calin Cristian Andrei 569a319ff5 [calico] don't clobber user set bgp configuration options that are not managed by kubespray 2022-05-12 15:50:38 +00:00
Calin Cristian Andrei 47812ec002 [calico] don't clobber user set ippool options that are not managed by kubespray 2022-05-12 15:50:05 +00:00
Calin Cristian Andrei c27dee57ea [calico] don't clobber user set felixconfig options that are not managed by kubespray 2022-05-12 15:49:24 +00:00
weizhoublue b289f533b3
get wrong server name of coredns (#8811)
Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-12 08:33:14 -07:00
Cyclinder 3eb0a4071a
set default value of name to "k8s-pod-network" (#8813)
Signed-off-by: cyclinder qifeng.guo@daocloud.io
2022-05-12 08:29:14 -07:00
Oogy 5684610a55
Support metallb peer password (#8792)
* support metallb peer password

* add MetalLB BGP password example
2022-05-11 21:39:15 -07:00
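For reference, a hedged sketch of a BGP peer entry carrying a password; the key names are illustrative and may differ from the merged schema:

  metallb_protocol: "bgp"
  metallb_peers:
    - peer_address: 192.0.2.1   # key names assumed for illustration
      peer_asn: 64512
      my_asn: 4200000000
      password: "changeme"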
Samuel Liu f26f544ff6
[kube-ovn]: update kube-ovn version and sync some feature (#8790)
* [kube-ovn]: some feature

kube-ovn vlan mode
ipv6/ipv4 dual stack
...

* remove unused env

* fix readinessprobe
2022-05-11 21:35:15 -07:00
Ajarmar b9e5b0cb53
UpCloud server plan, firewall, load balancer integration (#8758)
* [upcloud] add option to use preconfigured cpu/mem plan

* [upcloud] add option to use firewall rules for API server/SSH access

* [upcloud] add option to use managed load balancer
2022-05-11 10:15:03 -07:00
Necatican Yıldırım 13443b05a6
Overhaul Cilium manifests to match the newer versions (#8717)
* [cilium] Separate templates for cilium, cilium-operator, and hubble installations

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-operator templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Allow using custom args and mounting extra volumes for the Cilium Operator

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update the cilium configmap to filter out the deprecated variables, and add the new variables

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Add an option to use Wireguard encryption on Cilium 1.10 and up

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Update cilium-agent templates

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* [cilium] Bump Cilium version to 1.11.3

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-05-11 06:23:04 -07:00
Andrew Zagorodnuk e70c00a0fe
fix: Waiting until Volumes will be detached from the node on graceful node removal (#8739) 2022-05-10 09:57:43 -07:00
spaced bb67b654c5
local volume provisioner should not run on control plane nodes by default (#8805) 2022-05-10 19:04:24 +03:00
Kenichi Omichi aef25819bc
nit: Add offline note for kube-* images (#8718) 2022-05-10 06:41:44 -07:00
weizhoublue 1d96f465f4
arm64 support of cilium (#8803)
since Cilium v1.10, arm64 is supported
https://cilium.io/blog/2021/05/20/cilium-110

Signed-off-by: weizhou.lan@daocloud.io <weizhou.lan@daocloud.io>
2022-05-10 02:55:43 -07:00
emiran-orange 8f618ab408
Fix condition on kata_containers_version/kube_version when kata_containers_enabled is false (#8804) 2022-05-09 14:56:32 -07:00
Hugo Blom 5296d7ef9c
Added playbook to wait for cloud-init to finish (#8799) 2022-05-09 10:49:19 -07:00
Robin Wallace b715500b48
csi: bump upcloud csi driver (#8784) 2022-05-09 10:43:19 -07:00
Alessio Greggi 37a5271f5a
feat: add variables to manage makeIPTablesUtilChains and streamingConnectionIdleTimeout kubelet parameters (#8796) 2022-05-09 09:25:19 -07:00
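For reference, a hedged sketch; the variable names below are assumed from the kubelet parameters in the title, and the timeout value shown is the kubelet upstream default:

  kubelet_make_iptables_util_chains: true              # assumed variable name
  kubelet_streaming_connection_idle_timeout: "4h0m0s"  # assumed variable name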
Robin Wallace 42fc71fafa
[PodSecurityPolicy] Move the install of psp (#8744) 2022-05-09 09:21:19 -07:00
Victor Morales 02b6e4833a
Update Kata Containers runtime (#8797)
* Update Kata containers binary to 2.4.1 version

* Update overhead kata runtime values

* Fix kata-qemu default values in CRI-O
2022-05-08 17:01:18 -07:00
Andy 323a111362
[kubelet] set correct resolv.conf for Ubuntu 22.04 (#8795) 2022-05-06 16:31:04 -07:00
Alessio Greggi e7df4d3dd9
add support for service-account-lookup parameter (#8781)
* feat: add variable to manage service-account-lookup on kube-apiserver

* docs: add documentation about service-account-lookup variable
2022-05-06 00:39:07 -07:00
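For reference, a hedged sketch; the variable name is assumed from the apiserver flag it manages:

  kube_apiserver_service_account_lookup: true  # assumed name; maps to --service-account-lookup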
David Louks 3e52a0db95
Add optional setting for ca data in auth webhook (#8777)
* Add optional setting for ca data in auth webhook

* add webhook token auth variables to sample inventory
2022-05-05 14:52:43 -07:00
Cristian Calin 94484873d1
[containerd] add 1.6.4 which is needed for kubernetes 1.24.0 and make it the default (#8791) 2022-05-05 14:10:43 -07:00
Elif Akyıldırım 0d6ea85167
Assert that IP range is enough for the nodes (#8720)
* Assert that IP range is enough for the nodes 

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>

* Fixed whitespace

* Fixed errors

* Fixed errors

Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>
2022-05-05 08:48:20 -07:00
Florian Ruynat 674ec92224
Add crictl 1.24 for new k8s version (#8787) 2022-05-05 08:40:22 -07:00
Victor Morales e7e5037a86
Add a container_manager validation (#8785) 2022-05-04 23:58:19 -07:00
Kenichi Omichi fbcf426240
Drop containerd 1.4 support (#8780)
Version 1.4 of containerd has been End of Life since March 3, 2022,
as shown at https://containerd.io/releases/#support-horizon
It is nice to drop the support from Kubespray as well to follow containerd.
2022-05-04 23:02:20 -07:00
Mohamed Zaian 2301554e98
[kubernetes] add hashes for 1.24.0 (#8783) 2022-05-04 22:58:21 -07:00
Calin Cristian Andrei 5bc35002ba [remove-etcd-node] fix json path query 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei 9143810a4d [CI] add remove node job 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei 8f118fb619 [reset] fix task inclusion logic for network plugin 2022-05-04 06:35:51 -07:00
Calin Cristian Andrei 1113460b68 [cri-o] molecule switch from ubuntu 18 to ubuntu 20 2022-05-04 14:46:17 +02:00
Florian Ruynat 74c7e009b7
Move flannel to kubespray/quay for CI (#8774) 2022-05-04 00:11:30 -07:00
Lubos Mercl c20ab7d987
add fix for GCP CSI driver (#8616)
Signed-off-by: Lubos Mercl <lubos.mercl@gmail.com>
2022-05-03 08:55:56 -07:00
Robin Wallace fe66121287
[Openstack] master foreach and fixes (#8709)
* [openstack] fix for new network modules

* [openstack] for-each master nodes
2022-05-03 08:51:56 -07:00
Cristian Calin 9605bbaa67
[nerdctl] upgrade to 0.19.0 (#8772) 2022-05-03 05:39:56 -07:00
Cristian Calin b7ce6a9f79
[ansible] upgrade to 5.7 (#8771) 2022-05-03 01:29:55 -07:00
Kenichi Omichi c04a73c11a
Update containerd version to 1.6.3 (#8770)
containerd version 1.6.3 has been released as [1]
This adds the checksums and makes Kubespray use it.

[1]: https://github.com/containerd/containerd/releases/tag/v1.6.3
2022-05-02 22:43:55 -07:00
Kenichi Omichi f184725c5f
Use ansible 2.12 for testcases_prepare (#8763)
tests/requirements.txt links to tests/requirements-2.12.txt, so
Kubespray uses ansible 2.12 by default for testing. However we
forgot to update testcases_prepare.sh to use ansible 2.12.
This updates testcases_prepare to use ansible 2.12.
2022-05-02 11:34:31 -07:00
bilalcaliskan 26a0b0f1e8
chore(flannel): change flannel repository and upgrade image version (#8740)
* chore: change flannel repository and upgrade image version

* docs: upgrade flanneld version
2022-05-02 11:29:14 -07:00
Alessio Greggi fa1d222eee
add support for EventRateLimit plugin configuration (#8711)
* feat: add support for EventRateLimit admission plugin

* docs: add documentation about admission_control_config_file and EventRateLimit configuration
2022-05-02 11:03:15 -07:00
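For context, the upstream shape of such a configuration: admission_control_config_file points the API server at an AdmissionConfiguration, which in turn references the EventRateLimit settings (file names and limits below are illustrative):

  # admission-control-config.yaml
  apiVersion: apiserver.config.k8s.io/v1
  kind: AdmissionConfiguration
  plugins:
    - name: EventRateLimit
      path: eventconfig.yaml

  # eventconfig.yaml
  apiVersion: eventratelimit.admission.k8s.io/v1alpha1
  kind: Configuration
  limits:
    - type: Server
      qps: 50
      burst: 100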
Cristian Calin 56cf163a23
[kubernetes] actually make 1.23.6 the default (#8767) 2022-05-02 00:43:14 -07:00
Mohamed Zaian afcedf6d77
Pull master, Rebase, add changes again (#8745) 2022-05-02 00:39:14 -07:00
Chris Ricker 21fc197ee0
Ensure containerd service unmasking (#8726)
* Force containerd service unmasking

Force systemd to unmask and start service when adding containerd service

* Eliminate restart and move unmasking step

Switch to start instead of restart
Move unmasking to restart handler

* Add unmasking to similar container runtimes

* Add missing service names
2022-04-29 08:39:14 -07:00
Calin Cristian Andrei fcb4c8fb61 [kubernetes] make 1.23.6 the new default 2022-04-29 07:57:13 -07:00
Calin Cristian Andrei b6e2c56ae6 [kubernetes] add hashes for 1.21.12 2022-04-29 07:57:13 -07:00
Calin Cristian Andrei b005985d4e [kubernetes] add hashes for 1.23.6 2022-04-29 07:57:13 -07:00
Samuel Liu 1294fd5730
check calico ipv6 (#8738)
* check calico ipv6

* just check ipip mode for ipv6
2022-04-29 00:35:13 -07:00
Cristian Calin 835fd86a08
[CI] split molecule testes to run in parallel (#8756)
* add parametrization to molecule_run.sh

* [CI] split molecule tests to allow parallelization of work
2022-04-29 00:09:12 -07:00
Mohamed Zaian b7004d72c5
[kubernetes] add hashes for 1.22.9 (#8746)
* [kubernetes] add hashes for 1.22.9
2022-04-28 16:10:50 +03:00
Kenichi Omichi eb566ca626
Remove aufs-tools from Ubuntu requirement (#8754)
aufs-tools was required for the docker.io package originally,
but Kubespray installs the docker-ce package instead today.
In addition, Ubuntu 20.04 doesn't provide aufs-tools, as described in [1].
So this removes aufs-tools from the Ubuntu requirements.

[1]: https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1947004
2022-04-27 23:04:55 -07:00
Cristian Calin aa12f1c56b
[CI] fix packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha job (#8752) 2022-04-27 12:39:36 -07:00
Cristian Calin 6cc5b38a2e
[terraform] use modern day equinix metal provider (#8748)
* [terraform] use modern day equinix metal provider

* [CI] ensure packet job tests metal
2022-04-27 10:34:13 -07:00
Mathieu Parent e6c4330e4e
calico: vxlan is the default for calico_network_backend (#8750)
Since https://github.com/kubernetes-sigs/kubespray/pull/8434
2022-04-27 02:24:11 -07:00
Kenichi Omichi 1e827f9807
Update kata-containers.md (#8747)
* kata container related options exist in k8s-cluster.yml,
  not k8s_cluster.yml

* https://github.com/kata-containers/runtime has been archived and
  https://github.com/kata-containers/kata-containers is used today.
2022-04-26 07:06:53 -07:00
Olle Larsson a4f26dc8f3
[terraform/openstack] add safespring to provider list (#8735) 2022-04-25 04:43:39 -07:00
Mulugeta Ayalew Tamiru 3f065918d9
Update verbs for volumeattachments resource (#8731)
* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource so that the kubelet can create volumeattachments and mount volumes when deploying Kubernetes on VMware vSphere.

* Update verbs for volumeattachments resource

Update verbs for volumeattachments resource to match upstream

* Update vsphere-csi-controller-rbac.yml.j2
2022-04-22 00:04:13 -07:00
Cristian Calin 2c2d4513ac
[helm] upgrade to 3.8.2 (#8723) 2022-04-18 12:51:50 -07:00
zhengtianbao 937e64d296
Update flannel use install-cni-plugin to fit upstream (#8714)
* Update flannel use install-cni-plugin to fit upstream

* Replace flannel cni repo

* Remove download flannel binary
2022-04-18 09:44:41 -07:00
Cristian Calin 3261d26181
[etcd] ensure etcd is properly upgraded when managed by kubeadm (#8722)
* [etcd] ensure etcd is properly upgraded when managed by kubeadm

* [CI] add periodic job to test upgrade of etcd managed by kubeadm
2022-04-17 10:32:41 -07:00
Mathieu Parent c98a0a448f
metallb: Add images to downloads (#8715)
For offline mode
2022-04-14 10:06:46 -07:00
Mohamed Zaian 7e7218f5ce
etcd: add etcd v3.5.3 for kubernetes 1.21+ (#8712)
* As per https://github.com/kubernetes-sigs/kubespray/pull/8664, this makes etcd v3.5.3 the default for any kubernetes version which uses 3.5.x, since 3.5.[0-2] are not recommended for production.
2022-04-14 05:48:46 -07:00
Cristian Calin 45262da726
[calico] call calico checks early on to prevent altering the cluster with bad configuration (#8707) 2022-04-14 01:08:46 -07:00
Florian Ruynat aef5f1e139 Add tz to kubespray image 2022-04-13 08:22:45 +02:00
SOPHAL HONG 3d4baea01c
Add tag to AWS VPC subnets for automatic subnet discovery by load balancers or ingress controllers (#8705) 2022-04-12 10:05:23 -07:00
Julien Le Fur 30306d6ec7
Enable external CA mode for control-plane deployment (#8620) 2022-04-12 05:47:23 -07:00
Robin Wallace d7254eead6
UpCloud integration (#8653)
* [upcloud] add upcloud csi-driver

* Option to use ansible_host as api ip for kubueconfig
2022-04-11 15:13:23 -07:00
Anthony Bible 9dced7133c
Fixes for Hetzner terraform and Hetzner Cloud (#8702)
* - add ability to specify the network_zone in hetzner terraform
- Export the network id from hetzner terraform the the generated inventory.ini

* - Add with_networks variable to allow different deployments of hcloud controller manager

- Add network id to hcloud controller secret (added via the inventory)

- Don't include extra_args if it's not set
2022-04-11 10:26:06 -07:00
Kenichi Omichi c2fb1a0747
Add VAGRANT_ANSIBLE_TAGS for normal deployment (#8697)
The current ansible.tags 'facts' is for skipping the actual Kubespray deployment
in vagrant CI because the deployment takes much time. However the static
'facts' also skips the deployment for normal usage of vagrant.
That causes confusion.

This adds VAGRANT_ANSIBLE_TAGS to skip the deployment for vagrant CI.
2022-04-08 23:58:04 -07:00
Thomas Eberle 00a4d2d3c4
Removed quotation of nerdctl_extra_flags. (#8695)
The quotations in the variable nerdctl_extra_flags are not required for the `nerdctl_image_pull_command` and throw the following error when executing the cluster-playbook with `container_insecure_registries` set:
        unknown flag: --insecure-registry\\\"
This happens because the complete nerdctl_image_pull_command string variable gets split into an array of strings for the cmd task. The escaped quotation doesn't get unescaped properly and is added to the cmd-string array as part of the command. This leads to a wrongly written insecure-registry flag, which throws this error.
2022-04-08 08:02:43 -07:00
Samuel Liu 424ef3b3f9
[calico] add calico apiserver (#8690)
* [calico] add calico apiserver

* fix yamllint

* remove addext argument

* Configure API server with the CA bundle

* add check kdd
2022-04-08 00:02:42 -07:00
Mathieu Parent 996ef98b87
Add support for kube-vip (#8669)
Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-04-07 10:37:57 -07:00
Unai Arríen 19d5a1c7c3
Ensure all Kubelet required kernel values are configured when enabling protectKernelDefaults (#8692) 2022-04-07 08:33:59 -07:00
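For context, with protectKernelDefaults enabled the kubelet refuses to start unless kernel settings already match what it expects; a hedged sketch of the values involved:

  kubelet_protect_kernel_defaults: true
  # kernel values the kubelet checks in this mode (among others):
  #   vm.overcommit_memory = 1
  #   kernel.panic = 10
  #   kernel.panic_on_oops = 1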
rtsp 0481dd946f
[cert-manager] Upgrade to v1.8.0 (#8688) 2022-04-06 00:52:57 -07:00
cyril-corbon 29109575f5
fix: reset docker was not removing docker properly (#8680)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-05 21:36:55 -07:00
emiran-orange 3782573ede
Single quotes are missing in jsonpath argument of kubectl get node (#8683) 2022-04-05 09:45:38 -07:00
Alessio Greggi bba91a7524
split kube_feature_gates variable for different kubernetes components (#8677)
* feat: split kube_feature_gates variable for different kubernetes components

* docs: add kube_feature_gates component variables
2022-04-05 05:39:37 -07:00
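For reference, a hedged sketch of the split; the per-component variable names are assumed from the description above:

  kube_apiserver_feature_gates: ["TTLAfterFinished=true"]  # assumed variable names
  kube_scheduler_feature_gates: []
  kubelet_feature_gates: []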
Cristian Calin b67cadf743
[crun] upgrade to 1.4.4 (#8675) 2022-04-04 23:57:36 -07:00
cyril-corbon 56dda4392c
[validate-container-engine] check if kubelet is present was not working (#8679)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-04-04 09:34:12 -07:00
Cristian Calin 34fec09ff1
[containerd] upgrade versions to address CVE-2022-24769 (#8671)
* [containerd] add hashes for 1.5.11

* [containerd] add hashes for 1.6.2

* [containerd] make 1.6.2 the new default
2022-04-04 05:30:11 -07:00
Cristian Calin cefd1339fc
[vsphere_csi] update to 2.5.1 and make external_vsphere_version 7.0u1 by default (#8676) 2022-04-04 01:08:11 -07:00
Cristian Calin b915376194
[runc] upgrade to 1.1.1 (#8674) 2022-04-04 00:42:23 -07:00
Cristian Calin 455cc6ff75
[nerdctl] upgrade to 0.18.0 (#8672) 2022-04-04 00:42:11 -07:00
Cristian Calin cc9c376d0f
[validate-container-engine] add facts tag to tasks needed for vagrant jobs (#8678) 2022-04-04 00:32:11 -07:00
Kenichi Omichi 018611f829
Fix quotation of nerdctl_extra_flags (#8668)
Due to missing quotation of nerdctl_extra_flags, ansible-playbook was failed:

  Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/command.py
  Pipelining is enabled.
    [..]
    File "/usr/lib/python3.8/shlex.py", line 191, in read_token
      raise ValueError("No closing quotation")

This fixes the issue.

T-Eberle investigated the issue and found the solution.
Thank you T-Eberle!
2022-04-02 10:56:09 -07:00
cyril-corbon 1781eab21f
fix: uninstall contailer engine if service is running (#8662) 2022-04-01 09:20:46 -07:00
190ikp 78b05d0ffc
fix disk controller type in Vagrantfile (#8656) 2022-03-31 10:51:01 -07:00
Florian Ruynat 1c0df78278
Add ETCD_EXPERIMENTAL_INITIAL_CORRUPT_CHECK flag to etcd config (#8664) 2022-03-31 08:17:01 -07:00
Kenichi Omichi 6cc9da6b0a
Update vagrant.md (#8663)
To read it easily, this puts new lines.
2022-03-31 00:07:00 -07:00
Florian Ruynat 6af9cae0a5
Add missing 2.10 ansible test (#8665) 2022-03-30 08:12:27 -07:00
Cristian Calin ef29455652
[ansible] make ansible 5.x the new default version (#8660)
* [ansible] make ansible 5.x the new default version and move different versions tested to nightly jobs

* [CI] jobs were missing proper ansible cleanup
2022-03-29 15:36:11 -07:00
Kenichi Omichi 503ab0f722
Run 0100-dhclient-hooks if dhcpclient is enabled (#8658)
If running Kubespray in static IP environments, a task failed like:

  TASK [kubernetes/preinstall : Configure dhclient hooks for resolv.conf (RH-only)]
  fatal: [ak8s2]: FAILED! => {
    "changed": false, "checksum": "..",
    "msg": "Destination directory /etc/dhcp/dhclient.d does not exist"}

This adds a check for dhclientconffile to 0100-dhclient-hooks so the
task runs only if dhclient is enabled.
2022-03-29 00:11:11 -07:00
Christian Rohmann 90883e76af
terrform/openstack: Fix templating of ansible_ssh_common_args in no_floating.yml if used as TF module (#8646)
* terraform/openstack: Use path.module for ansible_bastion_template.txt

This extends on #7643 by not using path.root, but switching to path.module
to allow use of the terraform code as a module itself. This change then keeps
all calls to the template file stable even for that use-case.

* terraform/openstack: Make sed calls fail on errors

Using a single sed call with two replacements creates proper exit codes,
allowing errors to be recognized by terraform.
2022-03-29 00:07:11 -07:00
Cristian Calin 113de8381c
[ansible] add support for ansible 5 (ansible-core 2.12) (#8512) 2022-03-28 08:49:22 -07:00
Calin Cristian Andrei 652f2edbe1 [etcd] add 0 hash for arm v3.5.2 to prevent deployment failures 2022-03-28 08:40:30 +02:00
rtsp a67e36703f
Update cert-manager to v1.7.2 (#8648) 2022-03-26 04:53:22 -07:00
Samuel Liu 73c6943402
fix vagrant parameter (#8650) 2022-03-25 18:57:58 -07:00
Florian Ruynat d46817d690 Remove centos7 molecule while opensuse mirror is flaky 2022-03-25 16:57:58 -07:00
Florian Ruynat 97cb64c62d Remove k8s module for ns creation 2022-03-25 16:57:58 -07:00
Florian Ruynat 3f70241fb7 Update kubernetes image to 2.18.1 2022-03-25 16:57:58 -07:00
Maciej Wereski 21b71b38a3
Vagrantfile: add var to set ansible verbosity level (#8639)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-22 06:11:44 -07:00
Erwan Miran b2f9442aba
Have ingress_controller and external_provisioner in upgrade-cluster.yml (#8640) 2022-03-22 05:43:43 -07:00
Cristian Calin fa9f85c7e9
[sysctl] set fs.may_detach_mounts=1 even when CRIs don't set it themselves (#8635) 2022-03-21 17:36:13 -07:00
Fredrik Liv ffa285c2e7
Fixed cluster roles for openstack cloud controller (#8638) 2022-03-21 06:19:21 -07:00
Kenichi Omichi 7b1dc600d5
Fix the condition of drain on pre-remove task (#8634)
When running cluster.yml on new machines where containerd is already
installed but a Kubernetes cluster was not installed before, the task
"remove-node | List nodes" failed like

  "changed": false,
  "cmd": [
    "/usr/local/bin/kubectl", "--kubeconfig",
    "/etc/kubernetes/admin.conf", "get", "nodes", "-o",
    "go-template={{ range .items }}{{ .metadata.name }}
    {{ "\n" }}{{ end }}"
   ],
   ..
   "stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory",

That was due to a missing check of whether the existing Kubernetes cluster
exists before running the "kubectl drain" command.
This adds the check to avoid the issue.
2022-03-21 01:39:10 -07:00
Cristian Calin 5e67ebeb9e
[container image] use focal (ubuntu 20.04) base image for our docker builds (#8631) 2022-03-18 09:58:41 -07:00
Fredrik Liv af7066d33c
Updated openstack cloud controller version to v1.22.0 (#8629)
* Updated openstack cloud controller version to match kubernetes version

* Rolled back file structure change
2022-03-18 01:47:16 -07:00
Cristian Calin dd2d95ecdf
[calico] don't enable ipip encapsulation by default and use vxlan in CI (#8434)
* [calico] make vxlan encapsulation the default

* don't enable ipip encapsulation by default
* set calico_network_backend by default to vxlan
* update sample inventory and documentation

* [CI] pin default calico parameters for upgrade tests to ensure proper upgrade

* [CI] improve netchecker connectivity testing

* [CI] show logs for tests

* [calico] tweak task name

* [CI] Don't run the provisioner from vagrant since we run it in testcases_run.sh

* [CI] move kube-router tests to vagrant to avoid network connectivity issues during netchecker check

* service proxy mode still fails connectivity tests so keeping it manual mode

* [kube-router] account for containerd use-case
2022-03-17 18:05:39 -07:00
Sergey a86d9bd8e8
do not remove package in validate container engine role when Fedora CoreOS distr (#8626) 2022-03-17 06:49:20 -07:00
Calin Cristian Andrei 21b1516d80 [kubernetes] add hashes for 1.21.11 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei 4c15038194 [kubernetes] add hashes for 1.22.8 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei 538f9df5cc [kubernetes] make 1.23.5 the default 2022-03-17 05:03:20 -07:00
Calin Cristian Andrei efb0412b63 [kubernetes] add hashes for 1.23.5 2022-03-17 05:03:20 -07:00
Qasim Mehmood 5a486a5cca
Calico: Fix Wireguard support for CentOS Stream 9/RHEL 9 Beta (#8625) 2022-03-17 04:11:20 -07:00
Cristian Calin 394857b5ce
[docker] add support for cri-dockerd as a replacement for dockershim (#8623) 2022-03-16 16:28:11 -07:00
Cristian Calin 5043517cfb
[containerd] avoid cleanup of /usr/bin on ostree distributions (#8624) 2022-03-15 13:47:48 -07:00
Max Gautier 307d122a84
Helm-apps role for installing helm charts (#8347)
* Sketch of helm-apps role interface

* helm-apps: Early implementation and settings

* helm-apps: Fix README.md example playbook

* fixup! Sketch of helm-apps role interface

* Make the argument specs more explicit

* Remove exposed options from hardcoded default

* Simplify example playbook in README.md

- Define directly the roles parameters
- Add an example of option override for one chart only

* Use release instead of charts

Make explicit that the role is mananing releases, not charts.
Simplify parameters naming
2022-03-14 08:29:58 -07:00
onock d444a2fb83
[systemd-resolved] Fix DNS configuration according to docs/dns-stack.md and during reset of cluster (#8560) (#8561) 2022-03-14 02:08:22 -07:00
Kenichi Omichi fb7c56e3d3
Add unit test for print_hostnames of inventory.py (#8558)
This adds a unit test for the function.
2022-03-12 23:40:23 -08:00
spaced 2b79be68e7
fix typo and duplicated declaration of ingressclasses (#8591) 2022-03-12 23:36:23 -08:00
Mac Chaffee 512d5e3348
Restart etcd if the etcd version changes (#8556)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-03-11 18:08:23 -08:00
Unai Arríen 4b6892ece9
Add epoch to docker-ce and docker-ce-cli packages to ensure docker up… (#8618)
* Add epoch to docker-ce and docker-ce-cli packages to ensure docker upgrade

* Split container-engine redhat vars to support legacy RHEL 7 version management

* Support ansible_distribution_major_version when discovering vars with ansible_os_family
2022-03-11 02:45:07 -08:00
Toni Tauro 5a49ac52f9
feat(calico): add configurable ipam strictaffinity (#8581)
Signed-off-by: Toni Tauro <toni.tauro@adfinis.com>
2022-03-07 22:58:33 -08:00
Cristian Calin db1e30e4fc
[calico] add 3.22.1 (#8612) 2022-03-07 22:54:34 -08:00
Cristian Calin b4a61370c8
[cri-o] add cri-o 1.23.x (#8599) 2022-03-07 05:39:07 -08:00
kakkotetsu 58b2f39ce5
add IPv6 listen directive to nginx if enable_dual_stack_networks (#8596) 2022-03-07 05:39:00 -08:00
Tom Janson 56d882abed
Clarify confirmation prompt (#8589)
Entering any value causes the play to proceed, e.g., entering "no<Enter>". (This is simply how Ansible's pause module behaves.)
2022-03-07 05:38:54 -08:00
Takuya Murakami 39acb2b84d
Update ansible-lint to 5.4.0 (#8607) (#8608)
* Update ansible-lint to 5.4.0 (#8607)

It seems that the Rich version 11.0.0 has a breaking change.
So need to update ansible-lint to 5.3.2 or later.

* Fix for ansible-lint no-changed-when rule (#8607)
2022-03-07 05:35:55 -08:00
Branko Mijuskovic 3ccba08983
Fix crio_packages for Rocky8 (#8594) 2022-03-07 05:29:05 -08:00
Mohamed Zaian 632aa764e6
etcd: add etcd v3.5.1 for kubernetes 1.22+ (#8588)
* There is an issue with etcd v3.5.0 where it resurrects ancient members, see: https://github.com/etcd-io/etcd/issues/13196
This issue is clearly fixed in etcd v3.5.2

* Just keep the checksums
2022-03-07 05:28:54 -08:00
Cristian Calin f6342b6cf4
[crun] upgrade to 1.4.3 (#8598) 2022-03-04 08:22:52 -08:00
Cristian Calin 471585dcd5
[containerd]: upgrade versions to fix CVE-2022-23648 (#8597)
* [containerd] add hashes for 1.6.1

* [containerd] make 1.6.1 the default

* [containerd] add hashes for 1.5.10

* [containerd] add hashes for 1.4.13

* [nerdctl] bump to 0.17.1
2022-03-03 14:51:16 -08:00
Maciej Wereski 51821a811f
MetalLB: update to v0.12.1 (#8593)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2022-03-03 08:49:48 -08:00
Mathieu Parent 299a9ae7ba
terraform/gcp: Add ingress_whitelist (#8590)
Also, do not create unneeded resources (target pools are charged and should
only be created when needed).
2022-03-02 16:52:46 -08:00
Cristian Calin bf7a506f79
[containerd] Upgrade containerd to 1.6.0 and re-enable arm64 architecture with default options (#8555)
* [containerd] add checksums for 1.6.0

* [containerd] promote 1.6.0 as the new default

* [runc] promote 1.1.0 as the new default to allow arm deployments out of the box

* [nerdctl] bump to 0.17.0 to align with containerd 1.6.0

* [reset] allow crictl stopp and rmp commands to fail
2022-03-02 15:27:13 -08:00
Tom Janson 2e925f82ef
Revert "Fix: typos in docs and comments (#7805)" (#8592)
This reverts commit 417180246c.
2022-03-02 11:57:13 -08:00
Tom Janson ddef7e1139
missing "check_mode: no"s for several read-only tasks (#8584)
this is not complete -- there are almost certainly more instances of
this issue
2022-03-02 09:29:14 -08:00
cyril-corbon 672e47a7eb
feat: check & uninstall container engine (#8439)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-28 10:59:46 -08:00
Tom Janson 3e8e64a3e5
fix typo / error regarding etcd and k8s_cluster groups (#8580)
As far as I can tell this is simply a typo that has existed from the beginning. Having it this way around (`etcd` group as a child and thus subset of `k8s_cluster`) mirrors what is written in the preceding sentence.
2022-02-28 02:54:58 -08:00
Mac Chaffee b554246502
Fix host DNS config 1) being edited too soon and 2) not working with NM (#8575)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-26 10:29:23 -08:00
SOPHAL HONG 6d683c98a3
[Terraform-AWS] Replace CLB with NLB (#8578) 2022-02-24 23:53:54 -08:00
Nicolas Goudry ee079f4740
fix(coredns): make sure to keep coredns repository namespace (#8572)
fix: regex

fix: wrong regex_replace usage
2022-02-24 01:01:33 -08:00
Cristian Calin a090038d02
[CI] add ara to collect CI job logs (#8545) 2022-02-23 07:36:19 -08:00
Florian Ruynat 4f1499bd23
Fixup remaining etcd_kubeadm_enabled variables (#8576) 2022-02-23 06:46:18 -08:00
Alex 36393d77d3
Encrypting Secret Data at Rest (#8574)
* change default value for Encrypting Secret Data at Rest to secretbox, remove experimental flag and add documentation

* fix MD012/no-multiple-blanks
2022-02-23 03:04:18 -08:00
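For reference, a hedged sketch of the resulting settings (variable names believed current in kubespray, so treat them as an assumption):

  kube_encrypt_secret_data: true
  kube_encryption_algorithm: "secretbox"  # new default; aescbc and aesgcm remain selectable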
Ilya Margolin e053ee4272
Check all places with check_mode: no for side effects (#8573)
and fix the one with side effect.

Also removes `notify` from this task as the task has `changed_when: false`
and notify is not going to fire.
2022-02-23 01:20:18 -08:00
jayonlau 1d46c07307
Cleanup crictl configuration file (#8569) 2022-02-23 00:58:19 -08:00
Ilya Margolin f9b5e448c1
Prevent removing etcd member when running in check mode (#8570) 2022-02-22 23:34:18 -08:00
kakkotetsu 3effb008c9
improve validation conditions for MetalLB BGP Peers (#8568) 2022-02-22 23:12:18 -08:00
cyril-corbon a088f492f4
chore: remove addon-resizer (#8566)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-22 09:51:16 -08:00
Necatican Yıldırım e9c8913248
Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable (#8317)
* Add kubeadm option to etcd_deployment_type to replace the etcd_kubeadm_enabled variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Add etcd kubeadm deployment documentation

Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Refactor warning for the deprecated 'etcd_kubeadm_enabled' variable

Signed-off-by: necatican <necaticanyildirim@gmail.com>
2022-02-22 08:53:16 -08:00
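For reference, a minimal sketch of the replacement setting:

  # instead of the deprecated etcd_kubeadm_enabled: true
  etcd_deployment_type: kubeadm  # other values: host, docker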
Florian Ruynat b9a27c91da Update kubernetes dashboard to 2.5.0 2022-02-21 03:54:11 -08:00
Florian Ruynat d4f654275b Set default kubernetes version to 1.23.4 2022-02-21 03:54:11 -08:00
Florian Ruynat f6eb4c749d Add kubernetes hashes for 1.23.4/1.22.7/1.21.10 2022-02-21 03:54:11 -08:00
cyril-corbon 418fc00718
fix: kube-dns service deletion (#8565)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-21 02:48:11 -08:00
Florian Ruynat 2537177929
Fix amazon docker version (#8564) 2022-02-18 23:50:11 -08:00
Sander Klein 9af719bf99
This fixes the etcd node removal. (#8526)
Since we are already on an etcd node while executing the commands, there 
is no need to find out an etcd IP because it is on localhost.
2022-02-18 07:20:23 -08:00
Vitaliy D 9e020b252e
Configure Etcd container_manager explicitly (#8521)
* Configure Etcd container_manager explicitly

* Add explanation for the Etcd container_manager variable

* Remove redundant space in etcd vars
2022-02-18 00:50:23 -08:00
Kenichi Omichi cc45e365ae
Fix print_hostnames of inventory.py (#8554)
When trying to run print_hostnames of inventory.py, it outputs the following
error:

 $ CONFIG_FILE=./test-hosts.yaml python3 ./inventory.py print_hostnames
 Traceback (most recent call last):
   File "./inventory.py", line 472, in <module>
     sys.exit(main())
   File "./inventory.py", line 467, in main
     KubesprayInventory(argv, CONFIG_FILE)
   File "./inventory.py", line 92, in __init__
     self.parse_command(changed_hosts[0], changed_hosts[1:])
   File "./inventory.py", line 415, in parse_command
     self.print_hostnames()
   File "./inventory.py", line 455, in print_hostnames
     print(' '.join(self.yaml_config['all']['hosts'].keys()))
 KeyError: 'all'

because it missed loading the hosts config file before printing hostnames.
This fixes the issue.
2022-02-17 13:57:03 -08:00
Mac Chaffee 97c667f67c
Fix etcd_events not getting upgraded in upgrade-cluster.yml (#8550)
Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-17 08:03:38 -08:00
Cristian Calin 063fc525b1
nerdctl: upgrade to 0.16.1 (#8539) 2022-02-16 02:04:37 -08:00
Mac Chaffee 0f73d87509
Allow pausing after upgrade but before uncordon (#8530)
* Allow pausing after upgrade but before uncordon

* Expand docs for upgrade pausing vars

Signed-off-by: Mac Chaffee <me@macchaffee.com>
2022-02-15 16:39:02 -08:00
Cristian Calin 402e85ad6e
[calico] upgrade release checksums (#8544)
* [calico] upgrade 3.19.x to 3.19.4

* [calico] upgrade 3.20.x to 3.20.4

* [calico] upgrade 3.21.x to 3.21.4 and make it the default

* [calico] add 3.22.0 checksums

* [calico] account for path changes in calico 3.21.4 crd archive and above
2022-02-15 16:35:02 -08:00
Tony Fouchard 1d635e04e4
Allow to specify a source address for metallb peerings, and target only some nodes using node selectors (#8534) 2022-02-15 13:57:19 -08:00
kakkotetsu 98d5d0cdd5
add support for Dual Stack node InternalIP (#8542) 2022-02-15 00:28:02 -08:00
Mathieu Parent 31d4a38f09
terraform/gcp: Allow to change extra disk types (#8524) 2022-02-15 00:22:02 -08:00
kakkotetsu 1ebe456f2d
add support for Calico IP6_AUTODETECTION_METHOD (#8541) 2022-02-14 17:26:14 -08:00
Cristian Calin c6e5314fab
implement download mirrors support (#8474)
* [download] add mechanism to support mirrors

* [calico] support alternate download url
2022-02-14 13:19:32 -08:00
SOPHAL HONG a6a79883b7
Fix: Error when creating subnets more than AZ (#8516) 2022-02-14 13:12:30 -08:00
Takuya Murakami b02e68222f
feat(offline): Improve generate_list.sh to generate offline file list using ansible (#8537) (#8538)
Use jinja2 template and ansible to expand variables.
2022-02-13 23:19:28 -08:00
Takuya Murakami da8522af64
docs: Update offline-environment.md for containerd (#8520) (#8523)
* Add containerd/runc/nerdctl download url
* Add insecure registries configuration for containerd
2022-02-09 08:08:18 -08:00
Tom Stian Berget 84b93090a8
Change Cilium setting identity_allocation_mode to cilium_identity_allocation_mode (#8519)
* Change Cilium identity_allocation_mode to cilium_identity_allocation_mode

* Change inventory sample
2022-02-08 14:04:35 -08:00
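For reference, a minimal sketch of the renamed variable; "crd" and "kvstore" are Cilium's supported modes:

  cilium_identity_allocation_mode: kvstore  # "crd" is Cilium's default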
Byeonggon Lee 5695c892d0
Fix wrong port name in metallb.yml.j2 (#8510) 2022-02-07 09:43:45 -08:00
DenisKa 696101a910
Fixed mitogen.yml (#8508)
Fixed the problem when calling ansible-playbook contrib/mitogen/mitogen.yml
"The error was: 'dict object' has no attribute 'section'"

2022-02-07 01:39:43 -08:00
Sander Klein 54dfe73d24
Add bastion support to remove-node.yml (#8504)
Somehow bastion support for remove-node.yml was missing.

This commit adds it.
2022-02-04 23:50:50 -08:00
Krystian Młynek 87928baa31
CRI-O: fix unqualified-search registries (#8496) 2022-02-04 23:46:50 -08:00
mgiessing 6a4fd33a03
Added ppc64le support (#8505)
* Added ppc64le support

* Fixed linting errors
2022-02-04 00:14:00 -08:00
cyril-corbon 790448f48b
feat: update cert-manager to 1.7.0 (#8491)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-02-03 17:24:00 -08:00
Cristian Calin 7759494c85
[terraform][openstack] allow disabling port_security at port level (#8455)
Use openstack_networking_port_v2 and openstack_networking_floatingip_associate_v2
to attach floating ips. This gives us more flexibility on disabling port security
when binding instances directly on provider networks in private cloud scenario.
2022-02-02 08:50:22 -08:00
Ilya Margolin aed187e56c
Fix kubelet_kubelet_cgroups_cgroupfs (#8500)
If kubelet is run with systemd (as it always is when using kubespray),
it starts in systemd's /system.slice/kubelet.service cgroup.

This commit prevents the creation and usage of a second unrelated cgroup.
2022-02-02 00:50:22 -08:00
Julio H Morimoto eac799f589
Amend documentation for docker to containerd migration (#8477)
* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>

* Amend PR https://github.com/kubernetes-sigs/kubespray/pull/8471 with missing inventory configuration.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-02-02 00:46:22 -08:00
Cristian Calin 5ecb07b59a
[nerdctl] upgrade to 0.16.0 (#8484)
* [nerdctl] upgrade nerdctl to 0.16.0

* [nerdctl] add configuration file
2022-02-01 15:11:48 -08:00
Cristian Calin ff621fb7f1
[ingress-nginx] upgrade to 1.1.1 (#8490) 2022-02-01 09:50:11 -08:00
Mathieu Parent 958bca8800
terraform/gcp: Do not create unused subnetworks and Upgrade to latest google provider (#8497)
* terraform/gcp: Do not create unused subnetworks

By default terraform creates a subnetwork in each of 39 regions

* terraform/gcp: Upgrade to latest google provider

... where "one of source_tags, source_ranges, or source_service_accounts must be defined"
2022-02-01 09:14:11 -08:00
Michael Schmitz eacd55fbca
Use sysctl_file_path variable for all sysctl_file locations (#8395)
* Use sysctl_file_path variable for all sysctl_file locations

* Add sysctl_file_path variable to kubespay-defaults

* Remove previously used sysctl file locations if present

* Use explicit filename in roles/kubernetes/node/defaults/main.yml

* Defaults: use explicit value
2022-02-01 08:12:10 -08:00
Cristian Calin 0e2ab5c273
[misc] add cristicalin to approvers list (#8494) 2022-02-01 08:08:11 -08:00
Cristian Calin c47634290e
[helm] upgrade to 3.8.0 (#8489) 2022-02-01 06:34:12 -08:00
Tristan 92d612c3e0
8487: Allow override of default CoreDNS zone cache (#8488)
Using the coredns_cluster_zone_cache_block variable
2022-02-01 00:48:18 -08:00
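For reference, a hedged sketch; the variable name comes from the commit, while the Corefile cache stanza is illustrative:

  coredns_cluster_zone_cache_block: |
    cache 30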
Ilya Margolin 2bbe5732b7
Add node label to etcd metrics (#8475)
targetRef on endpoints surfaces as
__meta_kubernetes_endpoint_address_target_kind/__meta_kubernetes_endpoint_address_target_name
in prometheus and gets converted to the label `node` by
prometheus-operator
2022-01-31 06:08:23 -08:00
Samuel Liu e6e7fbc25f
fix reset containerd_storage_dir undefined (#8478)
* fix reset containerd_storage_dir

* add env to kubespray-defaults
2022-01-31 05:46:23 -08:00
Ilya Margolin 7d4d554436
Document host_resolvconf as default value for resolvconf_mode (#8493)
refs #8247
2022-01-31 03:12:24 -08:00
cyril-corbon d31db847b7
feat: update local path to v0.0.21 (#8492) 2022-01-31 01:08:24 -08:00
Mathieu Parent 3562d3378b
terraform/gcp: Allow to use preemptible VM instances (#8480) 2022-01-31 00:30:24 -08:00
Calin Cristian Andrei ababcd5481 [kube] make 1.23.3 the new default 2022-01-31 00:22:24 -08:00
Calin Cristian Andrei 7caffde0b6 [kube] add 1.23.3 hashes 2022-01-31 00:22:24 -08:00
Cristian Calin c40b43de01
[mitogent] update to 0.3.2 (#8470) 2022-01-27 08:36:59 -08:00
Julio H Morimoto b0eb5650da
Provide initial guidelines for a container engine migration (docker-2-containerd), with special emphasis on the fact that the procedure is still not officially supported. (#8471)
Follow up from https://github.com/kubernetes-sigs/kubespray/issues/8431.

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2022-01-27 01:40:10 -08:00
华忠啊 52f221f976
Adaptive Kube-ovn (#8454) 2022-01-27 01:08:10 -08:00
Cristian Calin 26a5948d2a
[reset] remove containerd storage during reset (#8469) 2022-01-26 05:10:01 -08:00
ceesios d86a3b962c
Proposing fixes for contrib/terraform/vsphere/ #8436 (#8441)
* fixes issues in vSphere Terraform contrib. #8436

* fix formatting

* add variables to the main module and document changes

* add missing newline
2022-01-25 05:24:30 -08:00
Mathieu Parent d64b341b38
Update terraform GCP to Ubuntu 20.04 (latest LTS) (#8463)
* Fix terraform Warning

Version constraints inside provider configuration blocks are deprecated

Terraform 0.13 and earlier allowed provider version constraints inside the
provider configuration block, but that is now deprecated and will be removed
in a future version of Terraform. To silence this warning, move the provider
version constraint into the required_providers block.

* Fix terraform Warning: Quoted references are deprecated

* terraform: Update GCP Ubuntu to latest LTS
2022-01-25 01:22:30 -08:00
Florian Ruynat d580014c66
Fix CI for Fedora (followup) + OpenSUSE Leap (update to 15.3) (#8407)
* Fix fedora jobs - followup

* Update OpenSUSE Leap to 15.3

* Fix cilium version in README + update minor 1.11.1
2022-01-24 23:24:30 -08:00
Calin Cristian Andrei be9a1f80c1 [kube] make 1.23.2 the default version 2022-01-24 11:59:33 -08:00
Calin Cristian Andrei 73ff3b0d3b [kubernetes] add hashes for 1.23.2, 1.22.6 and 1.21.9 2022-01-24 11:59:33 -08:00
cyril-corbon 9fce9ca42a
feat: upgrade azuredisk csi to v1.10.0 (#8432)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:41:56 -08:00
Cristian Calin f1adb734e3
[cri-tools] add hashes for 1.23.0 (#8442) 2022-01-24 00:21:56 -08:00
cyril-corbon 575e0ca457
feat: add eviction hard to kubelet config (#8421)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-24 00:13:57 -08:00
Alex 69f088bb82
add hash-values for runc v1.1.0 - first upstream runc version for multi-arch (#8447) 2022-01-23 23:51:57 -08:00
Cristian Calin ef34f5fe7d
[calico] switch default iptables backend detection to Auto (#8429) 2022-01-23 23:47:57 -08:00
Victor Morales e88aa7c96b
Add youki runtime support (#8411) 2022-01-21 14:01:07 -08:00
Johann Schley 38d129a0b6
add external hcloud cloud controller manager (#8440) 2022-01-20 12:31:09 -08:00
onock 392815d97c
[cert-manager] Fix missing RBAC rules for ClusterRole cert-manager-cainjector kubernetes-sigs#8104. (#8444) 2022-01-20 12:17:09 -08:00
Pav K 6e2e61012a
Docs - Removed incorrect info on calico_rr. (#8437) 2022-01-17 02:55:30 -08:00
rtsp e791089466
cert-manager: Fix incorrect leader election namespace lead to insufficient permission (#8433) 2022-01-17 02:37:29 -08:00
Cristian Calin 418f12f62a
[calico] drop 3.18.x and make 3.21.x the new default (#8426) 2022-01-17 02:29:29 -08:00
Necatican Yıldırım caff539ccd
Add identity_allocation_mode support for Cilium (#8430)
Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-16 09:29:28 -08:00
Kenichi Omichi c0d1bb1a5c
Remove subnet from router on tf-elastx_cleanup (#8425)
The tf-elastx_cleanup test job was failed with error message:

Port xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx cannot be deleted
directly via the port API: has device owner network:router_interface.

That means necessary to remove a subnet from the router before
deleting the port.
This adds a method to removes a subnet from the router automatically.
2022-01-15 00:50:15 -08:00
Cristian Calin ea44d64511
[contrib] terraform openstack: allow disabling port security (#8410) 2022-01-14 12:58:32 -08:00
Samuel Liu 1a69f8c3ad
parameterized snapshot controller namespaces (#8305)
* Parameterized snapshot controller namespaces

* add ns yml

* add docs

* namespace
2022-01-14 12:58:26 -08:00
rtsp ccd3180a69
cert-manager: Allow to change leader election namespace for GKE Autopilot support (#8424)
More information:

- kubernetes-sigs/kubespray#8393
- jetstack/cert-manager#4102
- jetstack/cert-manager#3717
2022-01-14 12:54:26 -08:00
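As a usage sketch for GKE Autopilot (where kube-system is locked down), assuming the variable is named after the PR title; the exact name is not verified here:

```yaml
# group_vars addons sketch; variable name inferred from the PR title
cert_manager_enabled: true
# GKE Autopilot restricts kube-system, so elect leaders elsewhere
cert_manager_leader_election_namespace: cert-manager
```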
cyril-corbon 01dcbc18ac
feat: upgrade metallb to v0.11.0 (#8420)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-14 05:22:28 -08:00
Florian Ruynat 7c67ec4976
Fix kubectl call before installing it (#8412) 2022-01-12 23:12:29 -08:00
Mathieu Parent 43d128362f
Document image_command_tool and image_command_tool_on_localhost (#8409)
Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
2022-01-11 15:35:24 -08:00
Cristian Calin 1337c9c244
[csi-snapshotter] upgrade to 5.0 (#8403) 2022-01-11 09:14:33 -08:00
cyril-corbon 86953b2ac4
fix: add tolerations / affinity to cert-manager (#8389)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-11 09:14:26 -08:00
moss2k13 135c9b29a7
contrib: add cloud-init support for terraform vms (#8394)
* contrib: add cloud-init support for terraform vms

This change enables instance customization via cloud-init,
for example: additional CA certs, custom SSH access etc.

* contrib: update docs for terraform cloud-init

* contrib: disable yamllint in cloud-init

require-starting-space rule breaks cloud-init header

* contrib: documentation formatting

* yamllint: disable comments related checks

* docs: markdown formatting
2022-01-11 05:23:16 -08:00
Tovin Seven e0d67367ed
Update installation doc with vagrant (#8406) 2022-01-11 05:19:17 -08:00
Florian Ruynat d007132655
Fix Fedora CI following ipset version in kube-proxy for k8s 1.23 (#8397) 2022-01-11 05:01:17 -08:00
Mathieu Parent cfd9873bbc
Allow to choose container manager commands (#8380)
This allows working around #8375 by using image_command_tool=crictl
when containerd_registries is used for containerd.

Also changes image_info_command_on_localhost for docker to return digests.
2022-01-11 01:13:16 -08:00
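As a usage sketch of the workaround described above (variable names are taken from the commit message; defaults may differ):

```yaml
# group_vars sketch: force crictl on nodes when containerd_registries
# is in use, while keeping docker for localhost image handling
image_command_tool: crictl
image_command_tool_on_localhost: docker
```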
Samuel Liu b2b95cc8f9
fix 0090-etchosts (#7634) 2022-01-11 01:03:16 -08:00
Kenichi Omichi 73c889eb10
Fix failures of ansible-lint (#8401)
This fixes the following types of failures:
- empty-string-compare
- literal-compare
- risky-file-permissions
- risky-shell-pipe
- var-spacing

In addition, this changes .gitlab-ci/lint.yml to block the same issue
by using the same method at Kubespray CI.
2022-01-11 00:45:16 -08:00
Victor Morales 642725efe7
Bump containerd version to 1.5.9 (#8402) 2022-01-11 00:05:16 -08:00
Cristian Calin 29aafff2ce
etcd: add 3.5.1 for kubernetes 1.23+ (#8320) 2022-01-10 22:45:15 -08:00
forselli-stratio df425ac143
Fix etcd certificates reference to support etcd_kubeadm_enabled:true (#7766)
* Fix etcd certificates reference to support etcd_kubeadm_enabled:true

* Add retries to ETCD Join Member task

* Fix etcd certificates reference when etcd_kubeadm_enabled:true

* Fix conflicts
2022-01-10 15:24:25 -08:00
Unai Arríen 57a1d18db3
Improve first_kube_control_plane variable management to avoid installation failures due to variable overlapping (#8388) 2022-01-10 01:35:19 -08:00
rtsp aa4a3d7afd
Fix container engine still installed on dedicated etcd node even if etcd_deployment_type: host (#8386) 2022-01-10 01:35:12 -08:00
Alex 06ad5525b8
replace runc 1.0.3 arm64 hash with 0 (#8391) 2022-01-10 01:31:13 -08:00
Kenichi Omichi f80fd24a55
Fix risky-file-permissions (#8370)
When running ansible-lint directly, we can see a lot of warning
message like

  risky-file-permissions File permissions unset or incorrect

This fixes the warning messages.
2022-01-09 01:51:12 -08:00
Kenichi Omichi 51bd9bee0d
Move containerd_version to defaults/main.yml (#8379)
All container image versions were defined in download/defaults/main.yml
except containerd.
The inconsistency meant the offline script (generate_list.sh) could not
output the URL of the containerd image.
This moves the definition into a valid file.
In addition, this adds host_os to generate_list.sh for downloading
krew from a valid URL.
2022-01-09 01:47:12 -08:00
Victor Morales 52266406f8
Bump cert-manager version to v1.6.1 (#8377) 2022-01-07 16:45:34 -08:00
cyril-corbon cd601c77c7
feat: upgrade metrics server to v0.5.2 (#8338)
Signed-off-by: Cyril Corbon <corboncyril@gmail.com>
2022-01-07 08:18:33 -08:00
Florian Ruynat 6abae713f7
Update helm / kube-router and coredns (#8382)
* Update kube-router to 1.4.0

* Update Helm to 3.7.2

* Up coredns to 1.8.6 when k8s is 1.23.x
2022-01-06 12:14:27 -08:00
Alex 1312f92a8d
adding 0 checksum for kata_containers_version on arm(64) (#8383) 2022-01-06 12:08:27 -08:00
Unai Arríen 92abf26d29
Ensure taint configuration for secondary control-plane nodes (#8363) 2022-01-05 23:56:28 -08:00
Mathieu Parent c11e4ba9a7
Add missing example offline nerdctl_download_url (#8373) 2022-01-05 10:23:48 -08:00
Mathieu Parent 7ae00947f5
Avoid yanked ruamel.yaml.clib version (#8372)
See https://pypi.org/project/ruamel.yaml.clib/#history

Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2022-01-05 08:06:41 -08:00
Bart Sloeserwij 59f62473c9
Update configuration of registries in cri-o (#7852)
* Update configuration of registries in cri-o

* Update docs to match new registry configuration
2022-01-05 07:36:40 -08:00
Unai Arríen 8fbd08d027
Fix DNS configuration when using resolvconf_mode='host_resolvconf' during scale (#23) (#8361) 2022-01-05 03:06:33 -08:00
Choi Yongbeom dda557ed23
Update config.toml.j2 (#8340)
* Update config.toml.j2

I think this code does not work as intended.

Example registry address: a.com:5000

The insecure registry entry must be http://a.com:5000,

but this code adds the insecure registry as a.com:5000 (without http://).

If the http:// scheme is missing, containerd accesses the registry over https even when insecure_skip_verify = true.

The solution is to edit the template.

* Update config.toml.j2

* Update containerd.yml

* Update containerd.yml

* Update containerd.yml

* Update config.toml.j2
2022-01-05 02:56:33 -08:00
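To illustrate the fix, the insecure registry endpoint needs an explicit http:// scheme; a sketch assuming the containerd_registries mapping shape of this era:

```yaml
# Sketch only; without the http:// scheme containerd dials https
# even when insecure_skip_verify = true
containerd_registries:
  "a.com:5000": "http://a.com:5000"
```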
Max Gautier cb54eb40ce
Use a variable for standardizing kubectl invocation (#8329)
* Add kubectl variable

* Replace kubectl usage by kubectl variable in roles

* Remove redundant --kubeconfig on kubectl usage

* Replace unnecessary shell usage with command
2022-01-05 02:26:32 -08:00
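A sketch of the pattern this standardizes on; the task itself is illustrative, while the kubectl variable is the one the PR introduces:

```yaml
# Illustrative task; `kubectl` bundles the binary path and --kubeconfig
# so individual tasks no longer repeat them
- name: Get cluster nodes
  command: "{{ kubectl }} get nodes"
  register: nodes_output
  changed_when: false
```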
Cristian Calin 3eab1129b9
CI: Replace CentOS 8 with AlmaLinux 8 before CentOS 8 EOL end of 2021 (#8297) 2022-01-05 02:20:33 -08:00
Choi Yongbeom 24f1402a14
nerdctl insecure registry config (#8339)
* Update prep_download.yml

nerdctl insecure registry config

* Update prep_download.yml

* Update prep_download.yml

apply review feedback from the conversation

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update prep_download.yml

* Update main.yml

* Update main.yml

* Update prep_download.yml

* Update prep_download.yml
2022-01-05 01:14:33 -08:00
Necatican Yıldırım bf00550388
Upgrade Cilium to 1.11.0 (#8354)
* Remove kvstore args from Cilium DaemonSet

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

* Bump Cilium to 1.11.0

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>

Co-authored-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
2022-01-05 00:36:32 -08:00
Kenichi Omichi 78c83a8f26
Update containerd doc (#8369)
This is a follow-up change for https://github.com/kubernetes-sigs/kubespray/pull/7911
2022-01-05 00:32:33 -08:00
Nguyễn Trung e72f8e0412
Update node about container_manager variable (#7911)
I deployed my cluster with a separate etcd cluster that does not intersect with kube_control_plane or kube_node. I wanted to run the etcd cluster in Docker while still using containerd as the container runtime for all other nodes, so I added a note about this to the doc for everyone.

Thanks!
2022-01-04 14:29:20 -08:00
Florian Ruynat 6136fa7c49 Update Kubernetes version to 1.23.1 2022-01-04 10:25:00 -08:00
Florian Ruynat 8d2b4ed4a9 Move min k8s version to 1.21 2022-01-04 10:25:00 -08:00
Florian Ruynat 9e9b177674 Update kubespray_version following release 2022-01-04 10:25:00 -08:00
Cristian Calin 4c4c83f0a1
crun update to 1.4 (#8330)
* [crun] update crun to 1.4

* [crun] drop pre-1.x versions
2022-01-04 08:30:53 -08:00
Unai Arríen 0e98814732
Configure PriorityClassName for MetalLB deployment (#8362) 2022-01-04 08:20:52 -08:00
Max Gautier 92f25bf267
Simplify usage of pre-remove role (#8334)
- Use builtin task scheduling of ansible (same task on each host)
  instead of manual looping on master

Benefits:
- One less play in remove-node.yml playbook
- Parallel node drain
- Drain parameters (timeout, grace period, retries,
  allow_ungraceful_removal) can be adjusted separately for each node
  with ansible variables (see the sketch below)
2022-01-04 07:10:53 -08:00
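For example, per-node drain tuning could live in host_vars; the variable names match Kubespray's drain options, though the values are illustrative:

```yaml
# host_vars/<node>.yml sketch; values are examples only
drain_timeout: 360s
drain_grace_period: 90
drain_retries: 3
allow_ungraceful_removal: true
```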
Romain ALBON 63a53c79d0
Fix - Search root filesystem device (#8366) 2022-01-04 06:48:52 -08:00
Florian Ruynat 2f9a8c04dc
Add nginx_image_repo to mirrored image on quay (#8364) 2022-01-03 10:03:00 -08:00
Choi Yongbeom 8c67f42689
Update offline.yml (#8358)
Since [cni-plugins] upgrade to stable 1.0.1 (#8331) pulls the flannel CNI from a dedicated project, add flannel_cni_download_url

and update the offline doc for flannel_cni_download_url.
2022-01-03 09:58:59 -08:00
Florian Ruynat 783a51e9ac
Fix README version for cni/flannel (#8359) 2022-01-03 03:42:59 -08:00
Florian Ruynat 841c61aaa1
Revert "Fix external lb error (#8299)" (#8360)
This reverts commit 4f2e4524b8.
2022-01-03 01:37:00 -08:00
Samuel Liu 157942a462
fix resolved config (#8351) 2022-01-03 00:06:59 -08:00
jbpratt e88a27790c
fix spelling error (#8342) 2022-01-02 23:55:00 -08:00
Cristian Calin ed3932b7d5
[cni-plugins] upgrade to stable 1.0.1 (#8331)
* [cni-plugins] upgrade to stable 1.0.1

* [flannel] use binary from dedicated project
2021-12-23 23:16:15 -08:00
emiran-orange 2b5c185826
calico_pool_blocksize must be cast as well in assertion when defined (#8321)
* calico_pool_blocksize must be cast as string in assertion when defined

* Cast as int rather than string
2021-12-23 00:58:37 -08:00
zemkogabor 996ecca78b
Glusterfs daemonset readiness and liveness params. #8307 (#8309) 2021-12-23 00:32:37 -08:00
zhengtianbao c3c128352f
Remove registry-proxy (#8327) 2021-12-21 23:55:35 -08:00
zhengtianbao 02a89543d6
registry: add ingress support (#8311) 2021-12-21 10:20:46 -08:00
Cristian Calin c1954ff918
Support deploying kubernetes 1.23 (#8323)
* Ensure entries for 1.23 are added for supported_versions vars

* cri-o: add support for kubernetes 1.23 but still use cri-o 1.22

* kubescheduler-config: differentiate config versions based on kube_version
2021-12-21 01:38:46 -08:00
Kenichi Omichi b49ae8c21d
Delete "kubeadm alpha certs" code (#8322)
"kubeadm alpha certs" command has been promoted to "kubeadm certs" command,
and "kubeadm alpha certs" has been deprecated since Kubernetes v1.20 as [1].
In addition, Kubespray supports Kubernetes v1.20+.
This delete the deprecated command for cleanup.

[1]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation
2021-12-20 12:53:33 -08:00
Calin Cristian Andrei 1a7b4435f3 Bump default version of kubernetes to 1.22.5 2021-12-20 08:56:56 -08:00
Calin Cristian Andrei ff5ca5f7f8 add temp location to .gitignore 2021-12-20 08:56:56 -08:00
Calin Cristian Andrei db0e458217 Kubernetes: add hashes for v1.23.1, v1.23.0, v1.22.5, v1.21.8 and v1.20.14 2021-12-20 08:56:56 -08:00
Nicolas MASSE f01f7c54aa
Add support for CRI-O user namespaces (#8268)
* add support for cri-o user namespaces

* comply with yamllint rules
2021-12-20 06:37:25 -08:00
kakkotetsu c59407f105
add support for Calico BGPPeer sourceAddress (#8306) 2021-12-20 01:51:25 -08:00
Cristian Calin fdc5d7458f
Upgrade to nerdctl 0.15.0 and some fixes (#8315)
* nerdctl: move to 0.15.0

* nerdctl: reduce verbosity when pulling images

* download: use proxy environment when using nerdctl to download containers
2021-12-20 00:33:26 -08:00
Antoine Gatineau 6aafb9b2d4
fix bad indentation (#8314) 2021-12-17 07:36:29 -08:00
Samuel Liu aa9ad1ed60
clean files for kube-ovn (#8310) 2021-12-15 23:39:19 -08:00
zhengtianbao aa9b8453a0
registry: service add clusterIP, nodePort, loadBalancer support (#8291)
* registry: service add clusterIP, nodePort, loadBalancer support

* modify camelcase name to underscore

* Add registry service type compatibility check
2021-12-15 00:18:19 -08:00
Cristian Calin 4daa824b3c
CI: fix test name: debian10-aio was a 2-instance default (#8286)
* CI: fix test name: debian10-aio was a 2-instance default

* CI: Fix running ubuntu20-aio-docker

* CI: Fix running ubuntu18-aio-docker
2021-12-13 14:50:25 -08:00
singeleaf 4f2e4524b8
Fix external lb error (#8299) 2021-12-13 14:46:27 -08:00
Xudong Zhang 8ac510e4d6
sample containerd: containerd_runtimes is removed (#8301)
(#8213) split containerd_runtimes into containerd_runc_runtime and
containerd_additional_runtimes
2021-12-13 14:42:25 -08:00
Marat Talipov 4f27c763af
containerd insecure registry support (#8298) 2021-12-13 00:41:58 -08:00
Cristian Calin 0e969c0b72
vSphere-CSI: update to 2.4.0 (#8295) 2021-12-10 11:07:23 -08:00
Steven Reitsma b396801e28
Update Cinder CSI to v1.22 (#8296) 2021-12-10 10:49:11 -08:00
Cristian Calin 682c8a59c2
containerd: change default resolvconf_mode to host_resolvconf (#8247)
* containerd: change default resolvconf_mode to host_resolvconf

* Wait for kube-apiserver to come back after pod refresh

* Handle resolv.conf gracefully

* Retain currently configured DNS entries to ensure we don't break the resolvers

* Suse uses wickedd for network management so no dhcp hooks

* Molecule: increase ansible timeout

* CI: Increase ansible timeout to 120s for Packet jobs
2021-12-09 14:09:06 -08:00
Florian Ruynat 5a25de37ef
Revert "remove no longer present etcd nodes from APIEndpoints list in kubeadm-config configmap (#8244)" (#8287)
This reverts commit dc767c14b9.
2021-12-09 08:24:16 -08:00
Kenichi Omichi bdb923df4a
Add oomichi to approvers (#8284)
To take on more responsibility in the Kubespray project, this adds
oomichi to the list of approvers.
2021-12-09 00:40:10 -08:00
zhengtianbao 4ef2cf4c28
Registry add TLS and authentication support (#8229)
* Add registry TLS support

* Add registry configmap and htpasswd auth
2021-12-07 08:32:00 -08:00
Cristian Calin 990ca38d21
Kata-Containers: add 2.3.0 (#8276)
* Kata-Containers: add checksums for 2.3.0

* Kata-Containers: version 2.3.0 requires kubernetes 1.22.0+
2021-12-07 08:18:08 -08:00
Cristian Calin c7e430573f
Calico: upgrade 3.21.x to 3.21.2 (#8275) 2021-12-07 08:18:01 -08:00
Cristian Calin a328b64464
runc: upgrade to v1.0.3 (#8274) 2021-12-07 06:10:02 -08:00
zhengtianbao a16d427536
Set etcd-events listen port to 2383 (#8232) 2021-12-07 00:28:01 -08:00
Cristian Calin c98a07825b
Use cgroupsv2 where available (fedora) (#8237)
* Containerd: use cgroupsv2 where available (fedora)

* Docker: use cgroupsv2 where available (fedora)

* cri-o: use cgroupsv2 where available (fedora)
2021-12-06 11:19:33 -08:00
Samuel Liu a98ca6fcf3
Update loadbalancers versions (#8272)
* Update loadbalancers versions

* fix haproxy_config_dir mode
2021-12-06 09:40:32 -08:00
Samuel Liu 4550f8c50f
calico_flexvol (#8273) 2021-12-06 05:00:32 -08:00
toplordsaito 9afca43807
change dns upstream condition for coredns (#8263)
upstream_dns_servers should change the Corefile config even when resolvconf_mode=docker_dns
2021-12-06 02:46:32 -08:00
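For reference, the inventory setting this commit makes effective even under docker_dns (resolver addresses are examples):

```yaml
# group_vars sketch; example upstream resolvers
upstream_dns_servers:
  - 8.8.8.8
  - 1.1.1.1
resolvconf_mode: docker_dns
```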
Alvaro Campesino 27ab364df5
Improve control plane scale flow (#13) (#7989)
* Improve control plane scale flow (#13)

* Added version 1.20.10 of K8s

* Setting first_kube_control_plane to a existing one

* Setting first_kube_control_plane to a existing one

* change first_kube_master for first_kube_control_plane

* Ansible-lint changes
2021-12-06 00:16:32 -08:00
Hanna Bledai 615216f397
Fix if bind-address is not set to 0.0.0.0 (#8262)
* if bind-address is not set to 0.0.0.0

* Update docs and left comments

* fix yamllist check: remove space
2021-12-05 23:58:32 -08:00
Kenichi Omichi 46b1b7ab34
Fix k8scsi/csi-resizer repo (#8270)
If trying to pull the k8scsi/csi-resizer image from gcr.io, we face an error
like:

 $ docker pull gcr.io/k8scsi/csi-resizer:v1.0.0
 Error response from daemon: Head https://gcr.io/v2/k8scsi/csi-resizer/
 manifests/v1.0.0: unknown: Project 'project:k8scsi' not found or deleted.
 $

We can pull the image from quay.io instead.
This fixes the issue.
2021-12-05 23:42:32 -08:00
Alvaro Campesino 30d9882851
Add nodelocaldns only if it is enabled (#7731) 2021-12-03 20:36:31 -08:00
Cristian Calin dfdebda0b6
Calico: remove duplicate values for CALICO_DISABLE_FILE_LOGGING and FELIX_DEFAULTENDPOINTTOHOSTACTION (#8269) 2021-12-03 20:32:31 -08:00
Cristian Calin 9d8a83314b
containerd: add hashes for 1.5.8 and 1.4.12 and make 1.5.8 the new default (#8239)
* containerd: add hashes for 1.5.8 and 1.4.12 and make 1.5.8 the new default

* containerd: make nerdctl mandatory for container_manager = containerd

* nerdctl: bump to version 0.14.0

* containerd: use nerdctl for image manipulation

* OpenSuSE: install basic nerdctl dependencies
2021-12-03 12:20:35 -08:00
Florian Ruynat e19ce27352
Remove ovn4nfv support (#8265) 2021-12-03 11:56:35 -08:00
Cristian Calin 4d711691d0
Fix calico crd archive checksums (#8266)
v3.20.3 and v3.21.1 were re-released with new checksums
2021-12-03 04:56:27 -08:00
Samuel Liu ee0f1e9d58
Update etcd-servers for apiserver (#8253) 2021-12-03 00:28:27 -08:00
Cristian Calin a24162f596
CI: upgrade vagrant to 2.2.19 (#8264) 2021-12-02 13:23:44 -08:00
Florian Ruynat e82443241b
Move opensuse CI to docker and fix ubuntu16 containerd version for docker (#8257) 2021-12-02 08:01:34 -08:00
Cristian Calin 9f052702e5
containerd: add support for suse distributions (#8261) 2021-12-02 07:51:33 -08:00
Florian Ruynat b38382a68f
Move cri-o default package to 1.22 (#8258) 2021-12-02 06:21:34 -08:00
zhengtianbao 785324827c
Set ingress-nginx default terminationGracePeriodSeconds to 5 min (#8252)
* set ingress-nginx default terminationGracePeriodSeconds to 5 min for connection draining

* Add ingress_nginx_termination_grace_period_seconds at sample inventory
2021-12-02 03:23:33 -08:00
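The sample-inventory knob mentioned above, sketched with the 5-minute default from the commit:

```yaml
# sample inventory sketch; 300 seconds = the 5-minute default
ingress_nginx_enabled: true
ingress_nginx_termination_grace_period_seconds: 300
```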
Cristian Calin 31c7b6747b
Calico: add dependencies for 3.21.x (#8250) 2021-12-02 01:17:33 -08:00
Alvaro Campesino dc767c14b9
remove no longer present etcd nodes from APIEndpoints list in kubeadm-config configmap (#8244) 2021-12-01 07:17:15 -08:00
Florian Ruynat 30ec03259d
Remove fedora33 - eol (#8246) 2021-11-30 15:53:17 -08:00
Robin Wallace 38c12288f1
Add option for boot volume type for k8s node (#8256) 2021-11-30 12:59:01 -08:00
Florian Ruynat 0e22a90579
Update docker to 20.10.11 with containerd 1.4.12 (#8255) 2021-11-30 11:49:01 -08:00
Samuel Liu 0cdf75d41a
add macOS .DS_Store to ignore (#8251) 2021-11-30 01:10:56 -08:00
mircyb 3c6fa6e583
offline install using containerd runtime (#8254)
Installing containerd on CentOS requires downloading its binary,

but offline.yml has no entry for that.

The default binary download URLs are:

roles/download/defaults/main.yml:runc_download_url: "https://github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
roles/download/defaults/main.yml:containerd_download_url: "https://github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"

With the default offline.yml, the download-files task fails

because the runc and containerd download URLs are not in offline.yml.

I want to fix this;

it just takes adding the two missing lines.
2021-11-30 01:06:56 -08:00
Cristian Calin ee882fa462
Add capability to use swap, requires Kube 1.22 (#8241)
* Alpha-NodeSwap: allow nodes to use swap

* CI: Add Fedora 35 with experimental swap job
2021-11-30 00:52:56 -08:00
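A hedged sketch of what enabling alpha swap support involves on Kube 1.22+; these are Kubespray's generic feature-gate and kubelet knobs, and the exact switch this PR added may differ:

```yaml
# Sketch, assuming standard Kubespray variables; NodeSwap is alpha in 1.22
kube_feature_gates:
  - "NodeSwap=true"
kubelet_fail_swap_on: false  # let kubelet start on nodes with swap enabled
```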
Cristian Calin 3431ed9857
containerd: properly pull images with containerd specific tools (#8245) 2021-11-30 00:48:56 -08:00
Florian Ruynat 279808b44e Update minor version for kata/cilium/kube-router/helm 2021-11-29 23:06:56 -08:00
Florian Ruynat 2fd529a993 Update Kubernetes version to v1.22.4 2021-11-29 23:06:56 -08:00
Florian Ruynat 1f6f79c91e Update kubernetes hashes with 1.22.4/1.21.7/1.20.13 2021-11-29 23:06:56 -08:00
Cristian Calin 52ee5d0fff
Various documentation updates (#8243)
* Docs: update CONTRIBUTING.md

* Docs: clean up outdated roadmap and point to github issues instead

* Docs: update note on kubelet_cgroup_driver

* Docs: update kata containers docs with note about cgroup driver

* Docs: note about CI specific overrides
2021-11-29 15:05:21 -08:00
Cristian Calin 2f44b40d68
OEL7: Fix CentOS7 Extras for OEL7 (#8219)
* OEL7: Fix CentOS7 Extras for OEL7

* Molecule: add logs collection for jobs
2021-11-29 13:39:21 -08:00
Cristian Calin 20157254c3
Update calico versions (#8238)
* Calico: Bump 3.20.x to 3.20.3

* Calico: Bump 3.18.x to 3.18.6

* Calico: add calico 3.21.1 hashes
2021-11-29 01:15:22 -08:00
IKRozhkov 09c17ba581
add Gather facts to remove-node.yml (#8231) 2021-11-29 01:01:22 -08:00
Florian Ruynat a5f88e14d0
Cleanup tests (#8234)
* Add Fedora 35 image, support and CI

* Cleanup tests and allow_failure for vagrant
2021-11-26 09:00:51 -08:00
Cristian Calin e78bda65fe
Defaults: replace docker with containerd as our default container_manager (#8175)
* Defaults: replace docker with containerd as our default container_manager

* CI: Use docker for download_localhost test

* Defaults: with container_manager=containerd we need etcd_deployment_type=host

* CI: Run weave jobs with docker

* CI: Vagrant don't download_force_cache

* CI: Fix upgrade tests

* should run compatible with old settings, this means docker
* we need to run with a distro that has at least modern containerd,
  this means move from debian9 to debian10 to allow `containerd_version`
  to match between 2.17 and master
2021-11-25 06:54:33 -08:00
khatrig 3ea496013f
Create reset.yml (#8227) 2021-11-24 09:44:20 -08:00
ishizuka 7e1873d927
DeprecationWarning occurs when indentfirst=None is specified in coredns-config.yml.j2 (#8224) 2021-11-24 08:56:21 -08:00
Olle Larsson fe0810aff9
Add option to set different server group policy for etcd, node, and master server (#8046) 2021-11-22 02:53:09 -08:00
zhengtianbao e35a87e3eb
Update registry template (#8198)
* Add registry replica setting

* Add registry liveness and readiness probe

* Set the security context for registry

* Add registry pvc access mode option

* registry add replica requirement check

* docs: add registry replicas setting note

* Update docs/kubernetes-apps/registry.md

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2021-11-22 02:45:09 -08:00
zhengtianbao a6fcf2e066
Enable experimental modules when rpm-ostree version >= 2021.9 (#8202)
* Enable experimental modules when rpm-ostree version >= 2021.9

* cleanup code
2021-11-22 02:29:09 -08:00
Karthikeya Viswanath 25316825b1
docs: remove basic auth reference in getting-started (#7823) 2021-11-19 14:49:23 -08:00
Cristian Calin c74e1c9db3
CI: use images from quay.io to prevent being throttled by docker hub (#8209)
* CI: use netchecker images from quay to prevent throttling

* Molecule: use hello-world image from quay.io
2021-11-19 13:23:40 -08:00
Florian Ruynat be9de6b9d9
Fix debian 9 check for apt cache update (#8215) 2021-11-19 09:02:51 -08:00
Pasquale Toscano fe8c843cc8
Fix typo in Containerd configuration (#8206) 2021-11-19 08:40:53 -08:00
Febrian Setianto f48ae18630
Use Pre-existing Floating IP for Bastion (#8214)
* use pre-existing floating IP for bastion

* document bastion_fips in readme
2021-11-19 07:58:52 -08:00
Łukasz Żułnowski 83e0b786d4
Fix wrong baseurl for centos extra repo for Oracle Linux (#8208) 2021-11-18 23:44:51 -08:00
Cristian Calin acd5185ad4
Fix fedora reset (#8205)
* Reset: Fedora uses NetworkManager

* CI: test reset on fedora
2021-11-18 16:46:51 -08:00
Mathieu Parent 0263c649f4
Allow to scrape etcd metrics using a service (#8203)
Signed-off-by: Mathieu Parent <math.parent@gmail.com>
2021-11-17 23:53:01 -08:00
Florian Ruynat 8176e9155b
Add cristicalin as an official reviewer (#8201) 2021-11-16 14:02:45 -08:00
Lubos Mercl 424163c7d3
add gce support (#8179)
Author:    lmercl <lubos.mercl@gmail.com>
Date:      Wed Nov 10 15:30:04 2021 +0000

fix markdown
2021-11-16 08:58:28 -08:00
IKRozhkov 2c87170ccf
Allow setting 'auto-assign' property to 'false' for default IP pool (Metallb addon) (#8193)
* add metallb auto-assign property for main IP range & update addons.yml for sample inventory

* add new line at the end of file roles/kubernetes-apps/metallb/defaults/main.yml

* set default value for metallb_auto_assign = true
2021-11-16 05:06:27 -08:00
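In sketch form, using the variable named in the commit body (default true):

```yaml
# addons sketch; stop MetalLB from auto-assigning from the main pool
metallb_enabled: true
metallb_auto_assign: false
```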
zhengtianbao 02322c46de
Remove helm duplicate check (#8196) 2021-11-15 12:50:48 -08:00
Cristian Calin 28b5281c45
Python: bring back python 2.7 support for ansible 2.9 in supported EL distributions (#8192) 2021-11-15 08:06:48 -08:00
EDGsheryl 4d79a55904
Remove extra parameter kube_proxy_remove (#8158)
Signed-off-by: EDGsheryl <edgsheryl@gmail.com>
2021-11-15 00:02:48 -08:00
Samuel Liu 027cbefb87
change krew uri to krew_download_url (#8190) 2021-11-14 12:08:47 -08:00
zhengtianbao a08d82d94e
calico add support for container ip forwarding setting (#8184) 2021-11-12 19:06:46 -08:00
zhengtianbao 5f1456337b
Fix krew auto completion command not found at lower version (#8185) 2021-11-12 17:04:46 -08:00
Lars Larsson 6eeb4883af
Fixes various issues in vSphere Terraform code (#8178)
* Fixes various issues in vSphere Terraform code

Provided to address various shortcomings and to fix the following
issue in upstream Kubespray:

https://github.com/kubernetes-sigs/kubespray/issues/8176

* Resolves Terraform formatting issues

* Sets default prefix to human-readable name

* Documents new default prefix in README
2021-11-12 11:40:29 -08:00
Ajarmar b5a5478a8a
Added tolerations for cinder-csi-nodeplugin DaemonSet (#8137) 2021-11-11 11:48:07 -08:00
Cristian Calin 0d0468e127
Exercise multiple ansible versions in CI (#8172)
* Ansible: separate requirements files for supported ansible versions

* Ansible: allow using ansible 2.11

* CI: Exercise Ansible 2.9 and Ansible 2.11 in a basic AIO CI job

* CI: Allow running a reset test outside of idempotency tests and running it in stage1

* CI: move ubuntu18-calico-aio job to stage2 and relay only on ubuntu20 with the variously supported ansible versions for stage1

* CI: add capability to install collections or roles from ansible-galaxy to mitigate missing behavior in older ansible versions
2021-11-10 16:11:50 -08:00
Cristian Calin b7ae4a2cfd
Kata-Containers: Fix kata-containers runtime (#8068)
* Kata-containers: fix for Ubuntu and CentOS where kata containers sometimes fail to start because of access errors to /dev/vhost-vsock and /dev/vhost-net

* Kata-containers: use similar testing strategy as gvisor

* Kata-Containers: adjust values for 2.2.0 defaults

Make CI tests actually pass

* Kata-Containers: bump to 2.2.2 to fix sandbox_cgroup_only issue
2021-11-09 10:01:48 -08:00
Cristian Calin 039205560a
nodelocaldns: allow a secondary pod for nodelocaldns for local-HA (#8100)
* nodelocaldns: allow a secondary pod for nodelocaldns for local-HA

* CI: add job to test nodelocaldns secondary
2021-11-09 09:57:47 -08:00
Cristian Calin 801268d5c1
containerd: upgrade versions 1.4.11 and 1.5.7 and make 1.4.11 the default (#8129) 2021-11-09 06:59:47 -08:00
zhengtianbao 46c536d261
Add krew auto completion (#8171) 2021-11-09 02:43:39 -08:00
Cristian Calin 4a8757161e
Docker: replace the use of containerd_version with docker_containerd_version to avoid causing conflicts when bumping containerd_version (#8130) 2021-11-08 15:56:49 -08:00
zhengtianbao 65540c5771
krew: update to v0.4.2 (#8168)
krew release URLs changed with v0.4.2: the OS type and arch now appear explicitly in the filename.

from:
  https://github.com/kubernetes-sigs/krew/releases/download/v0.4.1/krew.tar.gz
to:
  https://github.com/kubernetes-sigs/krew/releases/download/v0.4.2/krew-linux_amd64.tar.gz

define `host_os`, like `host_architecture`, to determine which OS krew
is installed on.
2021-11-08 02:54:59 -08:00
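Template-wise, the change amounts to something like the following sketch (the real default lives in the download role):

```yaml
# Sketch of the URL templated with host_os and host_architecture
krew_download_url: "https://github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ host_architecture }}.tar.gz"
```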
Max Gautier 6c1ab24981
Limit kubectl delete node to k8s nodes (#8101)
* Limit kubectl delete node to k8s nodes

This avoids the use of `kubectl delete node` when removing etcd nodes
which are not part of the cluster (separate etcd)

* Take errors into account when deleting node

There should not be any errors now that we're limiting the deletion to nodes
actually in the cluster

* Retrying on error
2021-11-08 02:22:58 -08:00
Hyojun Jeon 61c2ae5549
Add vxlanEnabled spec in FelixConfiguration (#8167) 2021-11-08 00:06:52 -08:00
zhengtianbao 04711d3b00
Replace path_join to support Ansible 2.9 (#8160) 2021-11-08 00:00:52 -08:00
Kenichi Omichi cb7c30a4f1
Fix cloud_provider check (#8164)
This fixes the preinstall check for cloud_provider option based on
inventory/sample/group_vars/all/all.yml
2021-11-07 23:48:52 -08:00
Álvaro Torres Cogollo 8922c45556
Added ArgoCD kubernetes-app (#7895)
* Added ArgoCD kubernetes-app

* Update argocd_version to latest
2021-11-07 02:22:51 -08:00
Emin AKTAS 58390c79d0
Bump crun version 1.2 to 1.3 (#8162)
Signed-off-by: Emin Aktaş <eminaktas34@gmail.com>
Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>

Co-authored-by: Yasin Taha Erol <yasintahaerol@gmail.com>
Co-authored-by: Necatican Yıldırım <necaticanyildirim@gmail.com>
2021-11-06 02:26:50 -07:00
Antoine Gatineau b7eb1cf936
cert-manager: add trusted internal ca when configured (#8135)
* cert-manager: add trusted internal ca when configured

* wrong check for inventory variable

* Update documentation
2021-11-05 09:43:52 -07:00
Pasquale Toscano 6e5b9e0ebf
Fix Kubelet and Containerd when using cgroupfs as cgroup driver (#8123) 2021-11-05 07:59:54 -07:00
Marcus Fenner c94291558d
Fix containerd install for fcos (#8107)
* Fix containerd install for fcos

* rm orphaned runc and containerd binaries
2021-11-05 07:53:53 -07:00
Cristian Calin 8d553f7e91
Mitogen: deprecate the use of mitogen and remove coverage from CI (#8147) 2021-11-05 00:57:52 -07:00
Cristian Calin a0be7f0e26
heketi: fix deployment logic that was broken by the ansible 3.4 upgrade (#8118) 2021-11-04 13:10:23 -07:00
Florian Ruynat 1c3d082b8d
fix calico crds hashes for 3.20.2 (#8157) 2021-11-04 10:38:04 -07:00
Cristian Calin 2ed211ba15
Fix-CI: python was upgraded in CI to 3.10 and pathlib is now included in python base making this dependency break the CI (#8153) 2021-11-03 12:52:32 -07:00
Florian Ruynat 1161326b54 Add unzip to dockerfile, used in CI 2021-11-02 11:53:41 -07:00
Florian Ruynat d473a6d442 Update kubespray version following 2.17.x release 2021-11-02 11:53:41 -07:00
Erkan Zileli 8d82033bff
fix(doc): update typo (#8148)
I guess `kubernetes-the-hard-way` should be `kubernetes-the-kubespray-way`, because the recently created network name is `kubernetes-the-kubespray-way`.
2021-11-02 01:16:58 -07:00
zhengtianbao 9d4cdb7b02
Ensure addon-resizer 1.8.11 is only effective on arch amd64. (#8144)
* Ensure addon-resizer 1.8.11 is only effective on arch amd64.

k8s.gcr.io/addon-resizer:1.8.11 returns the amd64 image, which is not executable on arm64.

Disable addon-resizer when the platform is not amd64.

When metrics-server upgrades to addon-resizer:2.3, revert this
commit; `image_arch` will then determine the `addon_resizer_image_tag`.

* Add metrics_server_resizer architectures check
2021-11-01 08:21:19 -07:00
Florian Ruynat b353e062c7 Update default k8s version to 1.22.3 2021-10-29 10:43:44 -07:00
Florian Ruynat d8f9b9b61f Update hashes for version v1.20.12/v1.21.6/v1.22.3 2021-10-29 10:43:44 -07:00
Sergey 0b441ade2c
nginx ingress controller should watch kind:ingress without class (#8128) 2021-10-28 11:48:59 -07:00
Krystian Młynek 6f6fad5a16
Calico: add missing verbs in ClusterRole (#8136) 2021-10-28 11:11:01 -07:00
brainfair 465ffa3c9f
Weave: add extra_args for weave-npc (#8140)
* add weave_npc_extra_args in template

* add defaults weave_npc_extra_args

* add sample for weave_npc_extra_args
2021-10-28 08:58:27 -07:00
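A sketch of the new knob; the flag shown is an example weave-npc argument, not the PR's sample value:

```yaml
# group_vars sketch; extra CLI args passed to the weave-npc container
weave_npc_extra_args: "--log-level=debug"
```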
vatech_seungjin 539c9e0d99
added hirsute in restart network (#8134)
Restarting the network on Ubuntu 21.04 fails; I checked the restart handler and found that hirsute was missing from the argument list :)
2021-10-27 15:19:10 -07:00
irizzant 649f962ac6
Metrics-server Deployment has incongruencies in resources requests/limits (#8088)
* fix(metrics-server): update defaults

* fix(metrics-server): typo error
2021-10-27 15:15:11 -07:00
Gheorghe Isak 16bdb3fe51
set check_mode to false (#8133) 2021-10-26 19:36:37 -07:00
Sébastien Masset 7c3369e1b9
Fixed default DNS min replica for single node clusters (#8112) 2021-10-26 16:03:46 -07:00
Florian Ruynat 9eacde212f
Fix quorum check when recovering broken etcd cluster (#8126) 2021-10-26 15:23:09 -07:00
Florian Ruynat 331647f4ab
Remove deprecated Ambassador ingress code (#8086) 2021-10-26 15:19:09 -07:00
Mohamed Zaian c2d4822c38
nginx-ingress: bump up version to 1.0.4 in the README (#8124)
* nginx-ingress: bump to 1.0.4

* Disable builtin ssl_session_cache solving the problem with OpenSSL consuming memory.
* Print warning only instead of error if no IngressClass permission is available.

* nginx-ingress: bump to 1.0.4 in the README
2021-10-25 03:38:23 -07:00
Cristian Calin 3c30be1320
cert-manager: update docs to reflect 1.5.x links (#8117) 2021-10-25 03:14:23 -07:00
Mohamed Zaian d8d01bf5aa
nginx-ingress: bump to 1.0.4 (#8114)
* Disable builtin ssl_session_cache solving the problem with OpenSSL consuming memory.
* Print warning only instead of error if no IngressClass permission is available.
2021-10-24 15:34:22 -07:00
Julio H Morimoto d42b7228c2
Convert numbers to string for calico's inventory check. (#8120)
Fix https://github.com/kubernetes-sigs/kubespray/issues/8119

Signed-off-by: Julio Morimoto <julio@morimoto.net.br>
2021-10-24 11:42:21 -07:00
Damian Szeluga 4db057e9c2
Allow changing metallb default pool name (#8111) 2021-10-22 09:38:39 -07:00
Cristian Calin ea8e2fc651
containerd: download containerd from upstream instead of using distro specific packages (#7970)
* Containerd: download containerd from upstream instead of using distro specific packages

split runc download to separate role
make bootstrap-os role deploy container-selinux and seccomp libraries
clean up package manager provided containerd
move variables to docker role that are no longer common with containerd

* Containerd: make molecule testing more relevant

* replace ubuntu18 with ubuntu20
* add centos8 and debian11 to molecule tests
* run kubernetes/preinstall role to ensure relevancy
  of test including dependency packages

* CI: adjust test scenarios for downloaded containerd
2021-10-20 08:47:58 -07:00
Utku Özdemir 10c30ea5b1
Add fallback to node drain using --disable-eviction flag (#8094)
* Add fallback to node drain using --disable-eviction flag

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Move drain fallback tasks to separate file

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Add delegate_facts to fix the drain fallback

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Fix ansible-lint error

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>

* Move drain fallback into block

Signed-off-by: Utku Ozdemir <uoz@protonmail.com>
2021-10-20 00:51:58 -07:00
jayonlau 84b56d23a4
Add jayonlau to reviewers (#8083) 2021-10-19 17:49:57 -07:00
Kenichi Omichi 19d07a4f2e
Fix ownership related to Calico (#8072)
kube-bench scan outputs warning related to Calico like:

* text: "Ensure that the Container Network Interface file
  permissions are set to 644 or more restrictive (Manual)"
* text: "Ensure that the Container Network Interface file
  ownership is set to root:root (Manual)"

This fixes these warnings.
2021-10-19 17:35:57 -07:00
Cristian Calin 6a5b87dda4
netchecker: update images to 1.2.2 from Mirantis (#8074)
* netchecker: update images to 1.2.2 from Mirantis, which is slightly less ancient than the l23networks images

* Netchecker: use local etcd instead of kubernetes v1beta1 crds, which are no longer supported by kube 1.22+
2021-10-19 10:17:04 -07:00
Omar Aloraini 6aac59394e
Rocky Linux support (#8095)
* Add Rocky as a known OS

* Make sure Rocky includes bootstrap-centos.yml

* Update docs with Rocky Linux

* Rocky Linux wireguard and EPEL

* Rocky Linux in the list of supported distributions
2021-10-19 08:29:04 -07:00
Florian Ruynat f147163b24
Up dashboard version to 2.4.0 - fix forgotten kubeovn version (#8085) 2021-10-15 05:40:54 -07:00
Florian Ruynat 16bf3549c1 Update kube-ovn to 1.8.1 2021-10-14 19:42:54 -07:00
Florian Ruynat b912dafd7a Update multus to 3.8.0 2021-10-14 19:42:54 -07:00
efrikin 8b3481f511
Add molecule tests for roles (#8080)
* Add molecule tests for bastion-ssh-config

* Add molecule tests for adduser

* Update .gitignore
2021-10-14 18:46:54 -07:00
Olivier Levitt 7019c2685d
Increase cpu limit to prevent throttling (#8076) 2021-10-14 11:03:36 -07:00
Mohamed Zaian d18cc38586
Replcae deprecated --delete-local-data in pre-remove/pre-upgrade tasks (#8081) 2021-10-14 02:25:19 -07:00
Cristian Calin cee481f63d
cert-manager: upgrade to 1.5.4 (#8069)
* cert-manager: update to 1.5.4

* cert-manager: remove outdated guidelines on creating an initial ClusterIssuer
2021-10-12 09:17:47 -07:00
Max Gautier e4c8c7188e
etcd: deploy container engine if needed (#7532)
If the etcd cluster is separate and the etcd_deployment_type is "host",
there is no need for a container engine on the etcd nodes

Do not rely on a 'default(true)' filter, but define a proper default in
kubespray-defaults depending on the etcd deployment method and whether internal
or external etcd is used
2021-10-12 00:31:47 -07:00
rtsp 6c004efd5f
cert_manager: Remove deprecated ClusterIssuer and its Secret (#8064) 2021-10-11 09:40:40 -07:00
Necatican Yıldırım 1a57780a75
Add kubeadm_join_phases_skip variable (#8067)
* Add kubeadm_join_phases_skip variable

* Update kubeadm_join_phases_skip comment

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>

* Add kubeadm_join_phases_skip_default variable to follow the same logic with kubeadm_init_phases_skip

Co-authored-by: Cristian Calin <6627509+cristicalin@users.noreply.github.com>
2021-10-11 09:36:41 -07:00
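Usage sketch, mirroring the kubeadm_init_phases_skip convention the PR follows; the phase listed is an example:

```yaml
# group_vars sketch; "preflight" is an example kubeadm join phase
kubeadm_join_phases_skip:
  - "preflight"
```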
Maciej Wereski ce25e4aa21
MetalLB: update to v0.10.3 (#8071)
Signed-off-by: Maciej Wereski <m.wereski@partner.samsung.com>
2021-10-11 08:54:40 -07:00
Rene Luria ef4044b62f
csi_driver / cinder: implement rescan-on-resize variable via (#8057)
cinder_csi_rescan_on_resize
2021-10-11 02:14:40 -07:00
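Sketch of the new variable (name taken from the commit):

```yaml
# group_vars sketch; rescan block devices when a Cinder volume is resized
cinder_csi_rescan_on_resize: true
```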
Florian Ruynat 9ffe5940fe
Remove TF 0.14/0.15 support - Add TF 1.x support only (#8062) 2021-10-08 09:01:06 -07:00
Florian Ruynat c8d9afce1a
Update a bunch of tools (#8061) 2021-10-08 09:00:59 -07:00
Florian Ruynat 285983a555
Update docker version to 20.10.9 - CVE fixes (#8060) 2021-10-08 08:56:58 -07:00
Cristian Calin ab4356aa69
Calico: bump default version to 3.20.2 (#8058) 2021-10-07 12:59:33 -07:00
Fredrik Liv e87d4e9ce3
Added terraform script for Hetzner cloud (#8053) 2021-10-07 10:11:46 -07:00
Maxim Pogozhiy 5fcf047191
local-volume-provisioner quay.io -> k8s.gcr.io (#8054) 2021-10-06 17:08:41 -07:00
Florian Ruynat c68fb81aa7
Clarify documentation for integration.md (#8049) 2021-10-06 16:44:41 -07:00
Rene Luria e707f78899
After upgrade, allow cilium to be back before uncordoning (#7978)
* After upgrade, allow cilium to be back before uncordoning

* add eol

* use kube_config_dir variable
resolves https://github.com/kubernetes-sigs/kubespray/pull/7978#discussion_r721685549
2021-10-05 12:56:58 -07:00
Ilya Margolin 41e0ca3f85
Move kube_feature_gates to kubelet config (#8048)
to remove deprecation warning:

> Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
2021-10-05 06:07:10 -07:00
Orhun Parmaksız c5c10067ed
Update kubespray version to 2.17.x in first cluster guide (#8043) 2021-10-04 00:09:07 -07:00
Iago Santos 43958614e3
Fix kubespray flatcar ansible_os_family and ansible_distribution (#8029)
Closes https://github.com/kubernetes-sigs/kubespray/issues/8028

Signed-off-by: Iago Santos <iago.santos.pardo@adfinis.com>
2021-10-01 09:11:23 -07:00
rtsp af04906b51
Ensure apparmor is installed (#8036)
Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had been previously removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.
2021-09-29 23:52:08 -07:00
Cristian Calin c7e17688b9
gVisor: bump release to 20210921 version (#8015)
* gVisor: bump release to 20210921 version

* gVisor: drop support for 20210518.0 version
2021-09-29 11:35:20 -07:00
Olivier Lemasle ac76840c5d
Upgrade ruamel.yaml.clib to work with Python 3.10 (#8034)
ruamel.yaml.clib did not build with the upcoming Python 3.10.

Cf. https://sourceforge.net/p/ruamel-yaml-clib/tickets/5/

ruamel.yaml.clib==0.2.4 fixes the issue. It does not work
with Python 3.7 (cf https://sourceforge.net/p/ruamel-yaml-clib/tickets/6/)
but currently Kubespray requires Python >= 3.9.
2021-09-29 07:04:49 -07:00
Peter Pan f5885d05ea
In CentOS 8.x Docker install Step: remove podman when existing (#8016) 2021-09-29 06:32:48 -07:00
Nicolas Goudry af949cd967
Fix invalid documentation links (#7692)
* Fix invalid link to Ansible documentation

* Fix invalid link to mitogen doc page

* Fix invalid link to calico doc page

* Fix all invalid links to doc pages
2021-09-28 09:58:43 -07:00
Frank Filippone eee2eb11d8
Update weave template to match source for 2.8.1 (#8013) 2021-09-28 09:16:43 -07:00
Kenichi Omichi 8d3961edbe
Add metrics_server_resizer option (#8018)
The addon-resizer container can reduce the cpu and memory resource
limits of the metrics-server container in the pod, and that caused
OOMKilled events.
In addition, the original metrics-server manifest doesn't contain
the addon-resizer container, as shown in [1].
So this adds metrics_server_resizer option to control the addon-resizer
container deployment and the default value is false to make it stable
for most environments.

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-28 00:02:42 -07:00
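Inventory sketch of the new option, which defaults to false per the commit:

```yaml
# addons sketch; keeps the addon-resizer container out of the pod
metrics_server_enabled: true
metrics_server_resizer: false
```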
Marcos Lorenzo 4c5328fd1f
Determine root filesistem device and partition before running growpart (#8024) 2021-09-27 23:58:42 -07:00
David Louks 1472528f6d
check if 'plugins' key exists in calico_cni_config object (#7717)
* check if 'plugins' key exists in calico_cni_config object

* fix whitespace linting error

* fixed when list indentation
2021-09-27 11:04:20 -07:00
Victor Morales 9416c9aa86
Enable stable and edge Docker CLI versions (#8019) 2021-09-27 10:44:19 -07:00
Kenichi Omichi da92c7e215
Add proxy for subscription-manager (#8012)
If using a proxy, it is necessary to configure it before running
the "subscription-manager status" command.
This adds that step.
2021-09-27 08:47:35 -07:00
Kenichi Omichi d27cf375af
Remove allowPrivilegeEscalation from metrics-server (#8014)
"allowPrivilegeEscalation: false" blocks deploying metrics-server
on CentOS7. In addition, the original metrics-server manifest doesn't
contain it, as shown in [1]. This removes it.

[1]: 527679e5e8/manifests/base/deployment.yaml
2021-09-27 08:43:36 -07:00
Victor Morales 432a312a35
Enable stable and edge containerd versions (#8020) 2021-09-27 08:11:35 -07:00
Cristian Calin 3a6230af6b
Kata-Containers: update versions 2.2.0 (default) and 2.1.1 (#8017)
* Kata-Containers: add 2.2.0 hashes and make default

* Kata-Containers: replace 2.1.0 with bugfix version 2.1.1

* Kata-Containers: move to q35 a more modern VM architecture as 'pc' is removed in 2.2.0
2021-09-27 08:07:35 -07:00
Florian Ruynat ecd267854b
Move ovn4nvf crd from v1beta1 to v1 (#8006) 2021-09-27 01:18:22 -07:00
Hugo Blom ac846667b7
Check if openstack application credentials are empty since they always exists (#8021) 2021-09-27 01:14:22 -07:00
Cristian Calin 33146b9481
CI: Add Calico eBPF in HA mode test (#7710)
* Sample-Inventory: add sample for calico_bpf_enabled

* Calico-Doc: note about CONFIG_NET_SCHED for eBPF support

* CI: Add Calico eBPF in HA mode test
2021-09-24 09:57:23 -07:00
rtsp 4bace2491d
Ensure apparmor is installed (#8011)
Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had been previously removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.
2021-09-24 07:55:23 -07:00
Kenichi Omichi 469b3ec525
Add definition check of disable_service_firewall (#7995)
When not specifying disable_service_firewall, the task is failed.
This adds the definition check.
2021-09-24 02:31:23 -07:00
Maxim Pogozhiy 22017b7ff0
kube-router 1.3.0 -> 1.3.1 (#8007) 2021-09-23 13:42:55 -07:00
Florian Ruynat 88c11b5946
Revert "etcd: enable v2 api only if needed (#8001)" (#8008)
This reverts commit c0e1211abe.
2021-09-23 10:43:14 -07:00
Kenichi Omichi 843252c968
Use kube_config_dir for kubeconfig (#7996)
The path of the kubeconfig should be configurable, and its default value
is /etc/kubernetes/admin.conf. Most references to the file were configurable,
but some were not. This makes them all configurable.
2021-09-23 10:19:13 -07:00
Eric Lake ddea79f0f0
Issue 8004: Fix typha prometheus (#8005)
The typha prometheus settings were in the `volumeMounts` section of the
spec and not in the `env` section. This was causing the deployment to
fail because it was expecting a volumeMount.

```
failed: [controller-001.a2.da.dev.logdna.net] (item=calico-typha.yml) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "dest": "/etc/kubernetes/calico-typha.yml", "diff": [], "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"_original_basename": "calico-typha.yml.j2", "attributes": null, "backup": false, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "content": null, "delimiter": null, "dest": "/etc/kubernetes/calico-typha.yml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "unsafe_writes": null, "validate": null}}, "item": {"file": "calico-typha.yml", "name": "calico", "type": "typha"}, "md5sum": "53c00ac7f562cf9ecbbfd27899ea066d", "mode": "0644", "owner": "root", "size": 5378, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "state": "file", "uid": 0}, "msg": "error running kubectl (/opt/bin/kubectl --namespace=kube-system apply --force --filename=/etc/kubernetes/calico-typha.yml) command (rc=1), out='service/calico-typha unchanged\n', err='error: error validating \"/etc/kubernetes/calico-typha.yml\": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false\n'"}
```
2021-09-23 08:37:22 -07:00
Max Gautier c0e1211abe
etcd: enable v2 api only if needed (#8001)
* etcd: enable v2 api only if needed

Only enable v2 API if we have a consumer (flannel)
This reduces the exposed surface of etcd.

* Fix bad group name
2021-09-22 12:36:32 -07:00
Florian Ruynat c8d7f000c9
Remove k8s hooks for versions prior to 1.20 (#7998) 2021-09-22 10:32:01 -07:00
Léopold Jacquot 598f178054
Fix cilium operator metrics activation (#8000) 2021-09-22 10:00:02 -07:00
Florian Ruynat 6f8b24f367 Allow failure in cert manager job 2021-09-22 09:50:01 -07:00
Florian Ruynat 5d1b34bdcd Move min k8s version to 1.20 2021-09-22 09:50:01 -07:00
Florian Ruynat 8efde799e1 Update kubernetes version to 1.22.2 2021-09-22 09:50:01 -07:00
Florian Ruynat 96b61a5f53 Update KUBE_VERSION in gitlab-ci following release 2021-09-22 09:50:01 -07:00
Cristian Calin a517a8db01
Drop check for kubelet_shutdown_grace_period (#7993)
and kubelet_shutdown_grace_period_critical_pods as ansible cannot do
sane time interval calculations
2021-09-21 18:34:00 -07:00
Wang Zhen 2211504790
Fix k8s-certs-renew cp path (#7992)
Signed-off-by: Wang Zhen <lazybetrayer@gmail.com>
2021-09-21 00:36:22 -07:00
Cristian Calin fb8662ec19
Calico: update versions 3.20.1, 3.19.3 (#7984)
* make Calico 3.20.1 the default version
* drop Calico 3.17.x support
2021-09-20 17:40:23 -07:00
Cristian Calin 6f7911264f
Calico: make calico_min_version check relevant (#7939)
* Calico: make calico_min_version check relevant

* Calico: only check currently installed version against the oldest supported version by the previous release
2021-09-20 07:58:09 -07:00
Cristian Calin ae44aff330
Calico: increase calico node probe timeouts and allow tunning (#7981) 2021-09-17 16:08:07 -07:00
918 changed files with 30496 additions and 35064 deletions

.gitignore

@@ -3,7 +3,10 @@
**/vagrant_ansible_inventory
*.iml
temp
contrib/offline/offline-files
contrib/offline/offline-files.tar.gz
.idea
.vscode
.tox
.cache
*.bak
@@ -11,16 +14,19 @@ temp
*.tfstate.backup
.terraform/
contrib/terraform/aws/credentials.tfvars
.terraform.lock.hcl
/ssh-bastion.conf
**/*.sw[pon]
*~
vagrant/
plugins/mitogen
deploy.sh
# Ansible inventory
inventory/*
!inventory/local
!inventory/sample
!inventory/c12s-sample
inventory/*/artifacts/
# Byte-compiled / optimized / DLL files
@@ -99,3 +105,13 @@ target/
# virtualenv
venv/
ENV/
# molecule
roles/**/molecule/**/__pycache__/
# macOS
.DS_Store
# Temp location used by our scripts
scripts/tmp/
tmp.md

.gitlab-ci.yml

@@ -8,7 +8,7 @@ stages:
- deploy-special
variables:
KUBESPRAY_VERSION: v2.16.0
KUBESPRAY_VERSION: v2.20.0
FAILFASTCI_NAMESPACE: 'kargo-ci'
GITLAB_REPOSITORY: 'kargo-ci/kubernetes-sigs-kubespray'
ANSIBLE_FORCE_COLOR: "true"
@@ -16,6 +16,7 @@ variables:
TEST_ID: "$CI_PIPELINE_ID-$CI_BUILD_ID"
CI_TEST_VARS: "./tests/files/${CI_JOB_NAME}.yml"
CI_TEST_REGISTRY_MIRROR: "./tests/common/_docker_hub_registry_mirror.yml"
CI_TEST_SETTING: "./tests/common/_kubespray_test_settings.yml"
GS_ACCESS_KEY_ID: $GS_KEY
GS_SECRET_ACCESS_KEY: $GS_SECRET
CONTAINER_ENGINE: docker
@@ -26,19 +27,20 @@ variables:
ANSIBLE_INVENTORY: ./inventory/sample/${CI_JOB_NAME}-${BUILD_NUMBER}.ini
IDEMPOT_CHECK: "false"
RESET_CHECK: "false"
REMOVE_NODE_CHECK: "false"
UPGRADE_TEST: "false"
MITOGEN_ENABLE: "false"
ANSIBLE_LOG_LEVEL: "-vv"
RECOVER_CONTROL_PLANE_TEST: "false"
RECOVER_CONTROL_PLANE_TEST_GROUPS: "etcd[2:],kube_control_plane[1:]"
TERRAFORM_14_VERSION: 0.14.11
TERRAFORM_15_VERSION: 0.15.5
TERRAFORM_VERSION: 1.0.8
ANSIBLE_MAJOR_VERSION: "2.11"
before_script:
- ./tests/scripts/rebase.sh
- update-alternatives --install /usr/bin/python python /usr/bin/python3 1
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements-${ANSIBLE_MAJOR_VERSION}.txt
- mkdir -p /.ssh
.job: &job
@@ -79,3 +81,4 @@ include:
- .gitlab-ci/terraform.yml
- .gitlab-ci/packet.yml
- .gitlab-ci/vagrant.yml
- .gitlab-ci/molecule.yml

.gitlab-ci/lint.yml

@@ -14,7 +14,7 @@ vagrant-validate:
stage: unit-tests
tags: [light]
variables:
VAGRANT_VERSION: 2.2.15
VAGRANT_VERSION: 2.2.19
script:
- ./tests/scripts/vagrant-validate.sh
except: ['triggers', 'master']
@@ -23,9 +23,8 @@ ansible-lint:
extends: .job
stage: unit-tests
tags: [light]
# lint every yml/yaml file that looks like it contains Ansible plays
script: |-
grep -Rl '^- hosts: \|^ hosts: ' --include \*.yml --include \*.yaml . | xargs -P 4 -n 25 ansible-lint -v
script:
- ansible-lint -v
except: ['triggers', 'master']
syntax-check:
@@ -53,7 +52,7 @@ tox-inventory-builder:
- ./tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
script:
- pip3 install tox
@@ -69,6 +68,20 @@ markdownlint:
script:
- markdownlint $(find . -name '*.md' | grep -vF './.git') --ignore docs/_sidebar.md --ignore contrib/dind/README.md
check-readme-versions:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_readme_versions.sh
check-typo:
stage: unit-tests
tags: [light]
image: python:3
script:
- tests/scripts/check_typo.sh
ci-matrix:
stage: unit-tests
tags: [light]

.gitlab-ci/molecule.yml (new file)

@@ -0,0 +1,86 @@
---
.molecule:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
after_script:
- chronic ./tests/scripts/molecule_logs.sh
artifacts:
when: always
paths:
- molecule_logs/
# CI template for periodic CI jobs
# Enabled when PERIODIC_CI_ENABLED var is set
.molecule_periodic:
only:
variables:
- $PERIODIC_CI_ENABLED
allow_failure: true
extends: .molecule
molecule_full:
extends: .molecule_periodic
molecule_no_container_engines:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -e container-engine
when: on_success
molecule_docker:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-dockerd
when: on_success
molecule_containerd:
extends: .molecule
script:
- ./tests/scripts/molecule_run.sh -i container-engine/containerd
when: on_success
molecule_cri-o:
extends: .molecule
stage: deploy-part2
script:
- ./tests/scripts/molecule_run.sh -i container-engine/cri-o
when: on_success
# Stage 3 container engines don't get as much attention so allow them to fail
molecule_kata:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/kata-containers
when: on_success
molecule_gvisor:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/gvisor
when: on_success
molecule_youki:
extends: .molecule
stage: deploy-part3
allow_failure: true
script:
- ./tests/scripts/molecule_run.sh -i container-engine/youki
when: on_success

.gitlab-ci/packet.yml

@@ -2,6 +2,7 @@
.packet:
extends: .testcases
variables:
ANSIBLE_TIMEOUT: "120"
CI_PLATFORM: packet
SSH_USER: kubespray
tags:
@@ -22,27 +23,60 @@
allow_failure: true
extends: .packet
packet_ubuntu18-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
# Future AIO job
# The ubuntu20-calico-aio jobs are meant as early stages to prevent running the full CI if something is horribly broken
packet_ubuntu20-calico-aio:
stage: deploy-part1
extends: .packet_pr
when: on_success
variables:
RESET_CHECK: "true"
packet_ubuntu20-calico-aio-ansible-2_11:
stage: deploy-part1
extends: .packet_periodic
when: on_success
variables:
ANSIBLE_MAJOR_VERSION: "2.11"
RESET_CHECK: "true"
# ### PR JOBS PART2
packet_centos7-flannel-containerd-addons-ha:
packet_ubuntu18-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu20-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu20-calico-aio-hardening:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu18-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-aio-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_ubuntu22-calico-aio:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_centos7-flannel-addons-ha:
extends: .packet_pr
stage: deploy-part2
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_centos8-crio:
packet_almalinux8-crio:
extends: .packet_pr
stage: deploy-part2
when: on_success
@ -51,10 +85,13 @@ packet_ubuntu18-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_ubuntu16-canal-kubeadm-ha:
packet_fedora35-crio:
extends: .packet_pr
stage: deploy-part2
when: manual
packet_ubuntu16-canal-ha:
stage: deploy-part2
extends: .packet_periodic
when: on_success
@ -69,33 +106,31 @@ packet_ubuntu16-flannel-ha:
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_debian10-cilium-svc-proxy:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_debian10-containerd:
packet_debian10-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian10-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
variables:
MITOGEN_ENABLE: "true"
packet_debian11-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_debian11-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet_pr
@ -106,17 +141,32 @@ packet_centos7-calico-ha-once-localhost:
services:
- docker:19.03.9-dind
packet_centos8-kube-ovn:
packet_almalinux8-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_centos8-calico:
packet_almalinux8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_fedora34-weave:
packet_rockylinux8-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_rockylinux9-calico:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_almalinux8-docker:
stage: deploy-part2
extends: .packet_pr
when: on_success
packet_fedora36-docker-weave:
stage: deploy-part2
extends: .packet_pr
when: on_success
@ -126,14 +176,14 @@ packet_opensuse-canal:
extends: .packet_periodic
when: on_success
packet_ubuntu18-ovn4nfv:
packet_opensuse-docker-cilium:
stage: deploy-part2
extends: .packet_periodic
when: on_success
extends: .packet_pr
when: manual
# ### MANUAL JOBS
packet_ubuntu16-weave-sep:
packet_ubuntu16-docker-weave-sep:
stage: deploy-part2
extends: .packet_pr
when: manual
@ -143,12 +193,18 @@ packet_ubuntu18-cilium-sep:
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-containerd-ha:
packet_ubuntu18-flannel-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_ubuntu18-flannel-containerd-ha-once:
packet_ubuntu18-flannel-ha-once:
stage: deploy-part2
extends: .packet_pr
when: manual
# Calico HA eBPF
packet_almalinux8-calico-ha-ebpf:
stage: deploy-part2
extends: .packet_pr
when: manual
@ -163,37 +219,44 @@ packet_centos7-calico-ha:
extends: .packet_pr
when: manual
packet_centos7-kube-router:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_centos7-multus-calico:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_oracle7-canal-ha:
packet_centos7-canal-ha:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora33-calico:
packet_fedora36-docker-calico:
stage: deploy-part2
extends: .packet_periodic
when: on_success
variables:
RESET_CHECK: "true"
packet_fedora35-calico-selinux:
stage: deploy-part2
extends: .packet_periodic
when: on_success
packet_fedora34-calico-selinux:
packet_fedora35-calico-swap-selinux:
stage: deploy-part2
extends: .packet_periodic
when: on_success
extends: .packet_pr
when: manual
packet_amazon-linux-2-aio:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora34-kube-ovn-containerd:
packet_almalinux8-calico-nodelocaldns-secondary:
stage: deploy-part2
extends: .packet_pr
when: manual
packet_fedora36-kube-ovn:
stage: deploy-part2
extends: .packet_periodic
when: on_success
@ -207,31 +270,46 @@ packet_centos7-weave-upgrade-ha:
when: on_success
variables:
UPGRADE_TEST: basic
MITOGEN_ENABLE: "false"
packet_ubuntu20-calico-etcd-kubeadm-upgrade-ha:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: basic
# Calico HA Wireguard
packet_ubuntu20-calico-ha-wireguard:
stage: deploy-part2
extends: .packet_pr
when: manual
variables:
MITOGEN_ENABLE: "true"
packet_debian9-calico-upgrade:
packet_debian11-calico-upgrade:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_debian9-calico-upgrade-once:
packet_almalinux8-calico-remove-node:
stage: deploy-part3
extends: .packet_pr
when: on_success
variables:
REMOVE_NODE_CHECK: "true"
REMOVE_NODE_NAME: "instance-3"
packet_ubuntu20-calico-etcd-kubeadm:
stage: deploy-part3
extends: .packet_pr
when: on_success
packet_debian11-calico-upgrade-once:
stage: deploy-part3
extends: .packet_periodic
when: on_success
variables:
UPGRADE_TEST: graceful
MITOGEN_ENABLE: "false"
packet_ubuntu18-calico-ha-recover:
stage: deploy-part3
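These job variables are plain environment switches consumed by the test harness, so a scenario can be approximated locally. A hedged sketch, assuming `tests/scripts/testcases_run.sh` reads the same variables the CI jobs set above:

```ShellSession
# rough local equivalent of packet_debian11-calico-upgrade
UPGRADE_TEST=graceful ./tests/scripts/testcases_run.sh

# rough local equivalent of packet_almalinux8-calico-remove-node
REMOVE_NODE_CHECK=true REMOVE_NODE_NAME=instance-3 ./tests/scripts/testcases_run.sh
```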

View file

@ -11,6 +11,6 @@ shellcheck:
- cp shellcheck-"${SHELLCHECK_VERSION}"/shellcheck /usr/bin/
- shellcheck --version
script:
# Run shellcheck for all *.sh except contrib/
- find . -name '*.sh' -not -path './contrib/*' -not -path './.git/*' | xargs shellcheck --severity error
# Run shellcheck for all *.sh
- find . -name '*.sh' -not -path './.git/*' | xargs shellcheck --severity error
except: ['triggers', 'master']

View file

@ -53,92 +53,51 @@
# Cleanup regardless of exit code
- chronic ./tests/scripts/testcases_cleanup.sh
tf-0.15.x-validate-openstack:
tf-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.15.x-validate-packet:
tf-validate-metal:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: packet
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: metal
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.15.x-validate-aws:
tf-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.15.x-validate-exoscale:
tf-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: exoscale
tf-0.15.x-validate-vsphere:
tf-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.15.x-validate-upcloud:
tf-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_15_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-openstack:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-packet:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: packet
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-aws:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: aws
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-exoscale:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: exoscale
tf-0.14.x-validate-vsphere:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
PROVIDER: vsphere
CLUSTER: $CI_COMMIT_REF_NAME
tf-0.14.x-validate-upcloud:
extends: .terraform_validate
variables:
TF_VERSION: $TERRAFORM_14_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: upcloud
CLUSTER: $CI_COMMIT_REF_NAME
# tf-packet-ubuntu16-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_14_VERSION
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -152,7 +111,7 @@ tf-0.14.x-validate-upcloud:
# tf-packet-ubuntu18-default:
# extends: .terraform_apply
# variables:
# TF_VERSION: $TERRAFORM_14_VERSION
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: packet
# CLUSTER: $CI_COMMIT_REF_NAME
# TF_VAR_number_of_k8s_masters: "1"
@ -187,10 +146,6 @@ tf-0.14.x-validate-upcloud:
OS_INTERFACE: public
OS_IDENTITY_API_VERSION: "3"
TF_VAR_router_id: "ab95917c-41fb-4881-b507-3a6dfe9403df"
# Since ELASTX is in Stockholm, Mitogen helps with latency
MITOGEN_ENABLE: "false"
# Mitogen doesn't support interpreter discovery yet
ANSIBLE_PYTHON_INTERPRETER: "/usr/bin/python3"
tf-elastx_cleanup:
stage: unit-tests
@ -210,7 +165,7 @@ tf-elastx_ubuntu18-calico:
allow_failure: true
variables:
<<: *elastx_variables
TF_VERSION: $TERRAFORM_15_VERSION
TF_VERSION: $TERRAFORM_VERSION
PROVIDER: openstack
CLUSTER: $CI_COMMIT_REF_NAME
ANSIBLE_TIMEOUT: "60"
@ -256,7 +211,7 @@ tf-elastx_ubuntu18-calico:
# environment: ovh
# variables:
# <<: *ovh_variables
# TF_VERSION: $TERRAFORM_14_VERSION
# TF_VERSION: $TERRAFORM_VERSION
# PROVIDER: openstack
# CLUSTER: $CI_COMMIT_REF_NAME
# ANSIBLE_TIMEOUT: "60"
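The `.terraform_validate` template itself is not shown in this hunk, but the variables suggest a per-provider validate run. A plausible local equivalent, assuming the provider layouts under `contrib/terraform/` and a Terraform binary matching `$TERRAFORM_VERSION`:

```ShellSession
cd contrib/terraform/openstack   # or aws, exoscale, vsphere, upcloud, metal
terraform init -backend=false    # download providers without configuring state
terraform validate
```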

View file

@ -1,22 +1,5 @@
---
molecule_tests:
tags: [c3.small.x86]
only: [/^pr-.*$/]
except: ['triggers']
image: quay.io/kubespray/vagrant:$KUBESPRAY_VERSION
services: []
stage: deploy-part1
before_script:
- tests/scripts/rebase.sh
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/molecule_run.sh
.vagrant:
extends: .testcases
variables:
@ -32,13 +15,14 @@ molecule_tests:
before_script:
- apt-get update && apt-get install -y python3-pip
- update-alternatives --install /usr/bin/python python /usr/bin/python3 10
- python -m pip uninstall -y ansible
- python -m pip uninstall -y ansible ansible-base ansible-core
- python -m pip install -r tests/requirements.txt
- ./tests/scripts/vagrant_clean.sh
script:
- ./tests/scripts/testcases_run.sh
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh
allow_failure: true
vagrant_ubuntu18-calico-dual-stack:
stage: deploy-part2
@ -59,3 +43,25 @@ vagrant_ubuntu20-flannel:
stage: deploy-part2
extends: .vagrant
when: on_success
allow_failure: false
vagrant_ubuntu16-kube-router-sep:
stage: deploy-part2
extends: .vagrant
when: manual
# Service proxy test fails connectivity testing
vagrant_ubuntu16-kube-router-svc-proxy:
stage: deploy-part2
extends: .vagrant
when: manual
vagrant_fedora35-kube-router:
stage: deploy-part2
extends: .vagrant
when: on_success
vagrant_centos7-kube-router:
stage: deploy-part2
extends: .vagrant
when: manual

View file

@ -1,2 +1,3 @@
---
MD013: false
MD029: false

.pre-commit-config.yaml Normal file
View file

@ -0,0 +1,48 @@
---
repos:
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.27.1
hooks:
- id: yamllint
args: [--strict]
- repo: https://github.com/markdownlint/markdownlint
rev: v0.11.0
hooks:
- id: markdownlint
args: [ -r, "~MD013,~MD029" ]
exclude: "^.git"
- repo: local
hooks:
- id: ansible-lint
name: ansible-lint
entry: ansible-lint -v
language: python
pass_filenames: false
additional_dependencies:
- .[community]
- id: ansible-syntax-check
name: ansible-syntax-check
entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
language: python
files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"
- id: tox-inventory-builder
name: tox-inventory-builder
entry: bash -c "cd contrib/inventory_builder && tox"
language: python
pass_filenames: false
- id: check-readme-versions
name: check-readme-versions
entry: tests/scripts/check_readme_versions.sh
language: script
pass_filenames: false
- id: ci-matrix
name: ci-matrix
entry: tests/scripts/md-table/test.sh
language: script
pass_filenames: false
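Individual hooks from this configuration can also be run on demand with the stock pre-commit CLI (hook ids taken from the file above):

```ShellSession
pre-commit run yamllint --all-files
pre-commit run ansible-syntax-check --all-files
```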

View file

@ -6,11 +6,22 @@
It is recommended to use filters to manage the GitHub email notifications; see [examples for setting filters to Kubernetes Github notifications](https://github.com/kubernetes/community/blob/master/communication/best-practices.md#examples-for-setting-filters-to-kubernetes-github-notifications)
To install development dependencies you can use `pip install -r tests/requirements.txt`
To install development dependencies, set up a Python virtual environment with the necessary packages:
```ShellSession
virtualenv venv
source venv/bin/activate
pip install -r tests/requirements.txt
```
#### Linting
Kubespray uses `yamllint` and `ansible-lint`. To run them locally use `yamllint .` and `ansible-lint`
Kubespray uses a [pre-commit](https://pre-commit.com) hook configuration to run several linters. Please install this tool and use it to run validation tests before submitting a PR.
```ShellSession
pre-commit install
pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
```
#### Molecule
@ -27,5 +38,9 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
5. Submit a pull request.
4. Install [pre-commit](https://pre-commit.com) and set it up in your development repo.
5. Address any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request.
8. Work with the reviewers on their suggestions.
9. Make sure to rebase onto the HEAD of your target branch and squash unnecessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before the final merge of your contribution.

View file

@ -1,14 +1,18 @@
# Use immutable image tags rather than mutable tags (like ubuntu:18.04)
FROM ubuntu:bionic-20200807
# Use immutable image tags rather than mutable tags (like ubuntu:20.04)
FROM ubuntu:focal-20220531
ARG ARCH=amd64
ARG TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt update -y \
&& apt install -y \
libssl-dev python3-dev sshpass apt-transport-https jq moreutils \
ca-certificates curl gnupg2 software-properties-common python3-pip rsync git \
ca-certificates curl gnupg2 software-properties-common python3-pip unzip rsync git \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
"deb [arch=$ARCH] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" \
&& apt update -y && apt-get install --no-install-recommends -y docker-ce \
@ -28,6 +32,6 @@ RUN /usr/bin/python3 -m pip install --no-cache-dir pip -U \
&& update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN KUBE_VERSION=$(sed -n 's/^kube_version: //p' roles/kubespray-defaults/defaults/main.yaml) \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/amd64/kubectl \
&& curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBE_VERSION/bin/linux/$ARCH/kubectl \
&& chmod a+x kubectl \
&& mv kubectl /usr/local/bin/kubectl
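A short usage sketch for the new `ARCH` build argument, which now selects both the Docker apt repository architecture and the kubectl binary (the image tag here is illustrative, not from the repo):

```ShellSession
docker build --build-arg ARCH=arm64 -t kubespray-builder:local .
```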

View file

@ -1,5 +1,7 @@
mitogen:
ansible-playbook -c local mitogen.yml -vv
@echo Mitogen support is deprecated.
@echo Please run the following command manually:
@echo ansible-playbook -c local mitogen.yml -vv
clean:
rm -rf dist/
rm *.retry

View file

@ -4,15 +4,23 @@ aliases:
- chadswen
- mirwan
- miouge1
- woopstar
- luckysb
- floryut
- oomichi
- cristicalin
- liupeng0518
- yankay
kubespray-reviewers:
- holmsten
- bozzo
- eppo
- oomichi
- jayonlau
- cristicalin
- liupeng0518
- yankay
kubespray-emeritus_approvers:
- riverzhang
- atoms
- ant31
- woopstar

View file

@ -19,10 +19,10 @@ To deploy the cluster you can use :
#### Usage
```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following steps:
```ShellSession
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
@ -57,10 +57,11 @@ A simple way to ensure you get all the correct version of Ansible is to use the
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/) to get the inventory and ssh key into the container, like this:
```ShellSession
docker pull quay.io/kubespray/kubespray:v2.16.0
git checkout v2.20.0
docker pull quay.io/kubespray/kubespray:v2.20.0
docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.16.0 bash
quay.io/kubespray/kubespray:v2.20.0 bash
# Inside the container you may now run the kubespray playbooks:
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
```
@ -75,10 +76,11 @@ python -V && pip -V
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
then run the following step:
```ShellSession
sudo pip install -r requirements.txt
vagrant up
```
@ -110,49 +112,67 @@ vagrant up
- [Adding/replacing a node](docs/nodes.md)
- [Upgrades basics](docs/upgrades.md)
- [Air-Gap installation](docs/offline-environment.md)
- [NTP](docs/ntp.md)
- [Hardening](docs/hardening.md)
- [Mirror](docs/mirror.md)
- [Roadmap](docs/roadmap.md)
## Supported Linux Distributions
- **Flatcar Container Linux by Kinvolk**
- **Debian** Bullseye, Buster, Jessie, Stretch
- **Ubuntu** 16.04, 18.04, 20.04
- **CentOS/RHEL** 7, [8](docs/centos8.md)
- **Fedora** 33, 34
- **Ubuntu** 16.04, 18.04, 20.04, 22.04
- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
- **Fedora** 35, 36
- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
- **openSUSE** Leap 15.x/Tumbleweed
- **Oracle Linux** 7, [8](docs/centos8.md)
- **Alma Linux** [8](docs/centos8.md)
- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
- **Alma Linux** [8, 9](docs/centos.md#centos-8)
- **Rocky Linux** [8, 9](docs/centos.md#centos-8)
- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
- **UOS Linux** (experimental: see [uos linux notes](docs/uoslinux.md))
- **openEuler** (experimental: see [openEuler notes](docs/openeuler.md))
Note: Upstart/SysV init based OS types are not supported.
## Supported Components
- Core
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.21.5
- [etcd](https://github.com/coreos/etcd) v3.4.13
- [kubernetes](https://github.com/kubernetes/kubernetes) v1.25.5
- [etcd](https://github.com/etcd-io/etcd) v3.5.6
- [docker](https://www.docker.com/) v20.10 (see note)
- [containerd](https://containerd.io/) v1.4.9
- [cri-o](http://cri-o.io/) v1.21 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- [containerd](https://containerd.io/) v1.6.14
- [cri-o](http://cri-o.io/) v1.24 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
- Network Plugin
- [cni-plugins](https://github.com/containernetworking/plugins) v0.9.1
- [calico](https://github.com/projectcalico/calico) v3.19.2
- [cni-plugins](https://github.com/containernetworking/plugins) v1.1.1
- [calico](https://github.com/projectcalico/calico) v3.24.5
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.9.10
- [flanneld](https://github.com/flannel-io/flannel) v0.14.0
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.7.2
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.3.0
- [multus](https://github.com/intel/multus-cni) v3.7.2
- [ovn4nfv](https://github.com/opnfv/ovn4nfv-k8s-plugin) v1.1.0
- [cilium](https://github.com/cilium/cilium) v1.12.1
- [flannel](https://github.com/flannel-io/flannel) v0.19.2
- [kube-ovn](https://github.com/alauda/kube-ovn) v1.10.7
- [kube-router](https://github.com/cloudnativelabs/kube-router) v1.5.1
- [multus](https://github.com/intel/multus-cni) v3.8
- [weave](https://github.com/weaveworks/weave) v2.8.1
- [kube-vip](https://github.com/kube-vip/kube-vip) v0.5.5
- Application
- [ambassador](https://github.com/datawire/ambassador): v1.5
- [cert-manager](https://github.com/jetstack/cert-manager) v1.10.1
- [coredns](https://github.com/coredns/coredns) v1.9.3
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.5.1
- [krew](https://github.com/kubernetes-sigs/krew) v0.4.3
- [argocd](https://argoproj.github.io/) v2.4.16
- [helm](https://helm.sh/) v3.9.4
- [metallb](https://metallb.universe.tf/) v0.12.1
- [registry](https://github.com/distribution/distribution) v2.8.1
- Storage Plugin
- [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.0-k8s1.11
- [rbd-provisioner](https://github.com/kubernetes-incubator/external-storage) v2.1.1-k8s1.11
- [cert-manager](https://github.com/jetstack/cert-manager) v1.0.4
- [coredns](https://github.com/coredns/coredns) v1.8.0
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v1.0.0
- [aws-ebs-csi-plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) v0.5.0
- [azure-csi-plugin](https://github.com/kubernetes-sigs/azuredisk-csi-driver) v1.10.0
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.22
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
## Container Runtime Notes
@ -161,8 +181,8 @@ Note: Upstart/SysV init based OS types are not supported.
## Requirements
- **Minimum required version of Kubernetes is v1.19**
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands, Ansible 2.10.x is experimentally supported for now**
- **Minimum required version of Kubernetes is v1.23**
- **Ansible v2.11+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
@ -195,8 +215,6 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [ovn4nfv](docs/ovn4nfv.md): [ovn4nfv-k8s-plugins](https://github.com/opnfv/ovn4nfv-k8s-plugin) is the network controller, OVS agent and CNI server to offer basic SFC and OVN overlay networking.
- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
(Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
@ -217,8 +235,6 @@ See also [Network checker](docs/netcheck.md).
## Ingress Plugins
- [ambassador](docs/ambassador.md): the Ambassador Ingress Controller and API gateway.
- [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
@ -234,6 +250,7 @@ See also [Network checker](docs/netcheck.md).
- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)
- [Kubean](https://github.com/kubean-io/kubean)
## CI Tests

View file

@ -2,17 +2,18 @@
The Kubespray Project is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
1. An issue proposing a new release, with a changelog since the last release, is created. Please see [a good sample issue](https://github.com/kubernetes-sigs/kubespray/issues/8325)
2. At least one of the [approvers](OWNERS_ALIASES) must approve this release
3. The `kube_version_min_required` variable is set to `n-1`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
6. An approver creates a release branch in the form `release-X.Y`
7. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) docker images are built and tagged
8. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
9. The release issue is closed
10. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
11. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
4. Remove hashes for [EOL versions](https://github.com/kubernetes/website/blob/main/content/en/releases/patch-releases.md) of kubernetes from `*_checksums` variables.
5. Create the release note with [Kubernetes Release Notes Generator](https://github.com/kubernetes/release/blob/master/cmd/release-notes/README.md). See the following `Release note creation` section for the details.
6. An approver creates [new release in GitHub](https://github.com/kubernetes-sigs/kubespray/releases/new) using a version and tag name like `vX.Y.Z` and attaching the release notes
7. An approver creates a release branch in the form `release-X.Y`
8. The corresponding version of [quay.io/kubespray/kubespray:vX.Y.Z](https://quay.io/repository/kubespray/kubespray) and [quay.io/kubespray/vagrant:vX.Y.Z](https://quay.io/repository/kubespray/vagrant) container images are built and tagged. See the following `Container image creation` section for the details.
9. The `KUBESPRAY_VERSION` variable is updated in `.gitlab-ci.yml`
10. The release issue is closed
11. An announcement email is sent to `dev@kubernetes.io` with the subject `[ANNOUNCE] Kubespray $VERSION is released`
12. The topic of the #kubespray channel is updated with `vX.Y.Z is released! | ...`
## Major/minor releases and milestones
@ -46,3 +47,37 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.
## Release note creation
You can create a release note with:
```shell
export GITHUB_TOKEN=<your-github-token>
export ORG=kubernetes-sigs
export REPO=kubespray
release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --dependencies=false --output=/tmp/kubespray-release-note --required-author=""
```
If the release note file (/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label (`kind/feature`, etc.).
It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note.
## Container image creation
The container image `quay.io/kubespray/kubespray:vX.Y.Z` can be created from the Dockerfile in the kubespray root directory:
```shell
cd kubespray/
nerdctl build -t quay.io/kubespray/kubespray:vX.Y.Z .
nerdctl push quay.io/kubespray/kubespray:vX.Y.Z
```
The container image `quay.io/kubespray/vagrant:vX.Y.Z` can be created with build.sh in test-infra/vagrant-docker/:
```shell
cd kubespray/test-infra/vagrant-docker/
./build vX.Y.Z
```
Please note that the above operations require permission to push container images to quay.io/kubespray/.
If you don't have permission, please ask for it on the #kubespray-dev channel.

View file

@ -9,5 +9,7 @@
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
atoms
mattymo
floryut
oomichi
cristicalin

Vagrantfile vendored
View file

@ -26,9 +26,12 @@ SUPPORTED_OS = {
"centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
"centos8" => {box: "centos/8", user: "vagrant"},
"centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
"fedora33" => {box: "fedora/33-cloud-base", user: "vagrant"},
"fedora34" => {box: "fedora/34-cloud-base", user: "vagrant"},
"opensuse" => {box: "bento/opensuse-leap-15.2", user: "vagrant"},
"almalinux8" => {box: "almalinux/8", user: "vagrant"},
"almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
"rockylinux8" => {box: "generic/rocky8", user: "vagrant"},
"fedora35" => {box: "fedora/35-cloud-base", user: "vagrant"},
"fedora36" => {box: "fedora/36-cloud-base", user: "vagrant"},
"opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
"opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
"oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
"oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
@ -53,9 +56,9 @@ $subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking ||= false
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= $num_instances
# The first two nodes are kube masters
@ -68,9 +71,12 @@ $kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= false
$local_path_provisioner_enabled ||= "False"
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""
$playbook ||= "cluster.yml"
@ -167,7 +173,7 @@ Vagrant.configure("2") do |config|
# always make /dev/sd{a/b/c} so that CI can ensure that
# virtualbox and libvirt will have the same devices to use for OSDs
(1..$kube_node_instances_with_disks_number).each do |d|
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
end
end
end
@ -238,9 +244,11 @@ Vagrant.configure("2") do |config|
}
# Only execute the Ansible provisioner once, when all the machines are up and ready.
# And limit the action to gathering facts; the full playbook is going to be run by testcases_run.sh
if i == $num_instances
node.vm.provision "ansible" do |ansible|
ansible.playbook = $playbook
ansible.verbose = $ansible_verbosity
$ansible_inventory_path = File.join( $inventory, "hosts.ini")
if File.exist?($ansible_inventory_path)
ansible.inventory_path = $ansible_inventory_path
@ -250,7 +258,9 @@ Vagrant.configure("2") do |config|
ansible.host_key_checking = false
ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
ansible.host_vars = host_vars
#ansible.tags = ['download']
if $ansible_tags != ""
ansible.tags = [$ansible_tags]
end
ansible.groups = {
"etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
"kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],

View file

@ -1,9 +1,8 @@
[ssh_connection]
pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
ansible_ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
# https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
force_valid_group_names = ignore
@ -11,11 +10,11 @@ host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 7200
fact_caching_timeout = 86400
stdout_callback = default
display_skipped_hosts = no
library = ./library
callback_whitelist = profile_tasks
callbacks_enabled = profile_tasks,ara_default
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg

View file

@ -3,32 +3,20 @@
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.11.0
minimal_ansible_version: 2.11.0
maximal_ansible_version: 2.14.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }} exclusive"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"

View file

@ -32,10 +32,10 @@
roles:
- { role: kubespray-defaults }
- { role: kubernetes/preinstall, tags: preinstall }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine|default(true) }
- { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
- { role: download, tags: download, when: "not skip_downloads" }
- hosts: etcd
- hosts: etcd:kube_control_plane
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"
@ -46,7 +46,7 @@
vars:
etcd_cluster_setup: true
etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
when: not etcd_kubeadm_enabled| default(false)
when: etcd_deployment_type != "kubeadm"
- hosts: k8s_cluster
gather_facts: False
@ -59,7 +59,10 @@
vars:
etcd_cluster_setup: false
etcd_events_cluster_setup: false
when: not etcd_kubeadm_enabled| default(false)
when:
- etcd_deployment_type != "kubeadm"
- kube_network_plugin in ["calico", "flannel", "canal", "cilium"] or cilium_deploy_additionally | default(false) | bool
- kube_network_plugin != "calico" or calico_datastore == "etcd"
- hosts: k8s_cluster
gather_facts: False
@ -118,7 +121,8 @@
- { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
- { role: kubernetes-apps, tags: apps }
- hosts: k8s_cluster
- name: Apply resolv.conf changes now that cluster DNS is up
hosts: k8s_cluster
gather_facts: False
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
environment: "{{ proxy_disable_env }}"

View file

@ -47,6 +47,10 @@ If you need to delete all resources from a resource group, simply call:
**WARNING** this really deletes everything in your resource group, including anything you created there later!
## Installing Ansible and the dependencies
Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
## Generating an inventory for kubespray
After you have applied the templates, you can generate an inventory with this call:
@ -59,6 +63,5 @@ It will create the file ./inventory which can then be used with kubespray, e.g.:
```shell
cd kubespray-root-dir
sudo pip3 install -r requirements.txt
ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
```

View file

@ -17,7 +17,7 @@ pass_or_fail() {
test_distro() {
local distro=${1:?};shift
local extra="${*:-}"
local prefix="$distro[${extra}]}"
local prefix="${distro[${extra}]}"
ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
pass_or_fail "$prefix: dind-nodes" || return 1
(cd ../..
@ -71,15 +71,15 @@ for spec in ${SPECS}; do
echo "Loading file=${spec} ..."
. ${spec} || continue
: ${DISTROS:?} || continue
echo "DISTROS=${DISTROS[@]}"
echo "DISTROS:" "${DISTROS[@]}"
echo "EXTRAS->"
printf " %s\n" "${EXTRAS[@]}"
let n=1
for distro in ${DISTROS[@]}; do
for distro in "${DISTROS[@]}"; do
for extra in "${EXTRAS[@]:-NULL}"; do
# Magic value to let this for run once:
[[ ${extra} == NULL ]] && unset extra
docker rm -f ${NODES[@]}
docker rm -f "${NODES[@]}"
printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
{
info "${distro}[${extra}] START: file_out=${file_out}"

View file

@ -83,11 +83,15 @@ class KubesprayInventory(object):
self.config_file = config_file
self.yaml_config = {}
loadPreviousConfig = False
printHostnames = False
# See whether there are any commands to process
if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
if changed_hosts[0] == "add":
loadPreviousConfig = True
changed_hosts = changed_hosts[1:]
elif changed_hosts[0] == "print_hostnames":
loadPreviousConfig = True
printHostnames = True
else:
self.parse_command(changed_hosts[0], changed_hosts[1:])
sys.exit(0)
@ -105,6 +109,10 @@ class KubesprayInventory(object):
print(e)
sys.exit(1)
if printHostnames:
self.print_hostnames()
sys.exit(0)
self.ensure_required_groups(ROLES)
if changed_hosts:

View file

@ -13,6 +13,7 @@
# under the License.
import inventory
from io import StringIO
import unittest
from unittest import mock
@ -26,6 +27,28 @@ if path not in sys.path:
import inventory # noqa
class TestInventoryPrintHostnames(unittest.TestCase):
@mock.patch('ruamel.yaml.YAML.load')
def test_print_hostnames(self, load_mock):
mock_io = mock.mock_open(read_data='')
load_mock.return_value = OrderedDict({'all': {'hosts': {
'node1': {'ansible_host': '10.90.0.2',
'ip': '10.90.0.2',
'access_ip': '10.90.0.2'},
'node2': {'ansible_host': '10.90.0.3',
'ip': '10.90.0.3',
'access_ip': '10.90.0.3'}}}})
with mock.patch('builtins.open', mock_io):
with self.assertRaises(SystemExit) as cm:
with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
inventory.KubesprayInventory(
changed_hosts=["print_hostnames"],
config_file="file")
self.assertEqual("node1 node2\n", stdout.getvalue())
self.assertEqual(cm.exception.code, 0)
class TestInventory(unittest.TestCase):
@mock.patch('inventory.sys')
def setUp(self, sys_mock):

View file

@ -28,7 +28,7 @@
sysctl:
name: net.ipv4.ip_forward
value: 1
sysctl_file: /etc/sysctl.d/ipv4-ip_forward.conf
sysctl_file: "{{ sysctl_file_path }}"
state: present
reload: yes
@ -37,7 +37,7 @@
name: "{{ item }}"
state: present
value: 0
sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
sysctl_file: "{{ sysctl_file_path }}"
reload: yes
with_items:
- net.bridge.bridge-nf-call-arptables

View file

@ -5,8 +5,8 @@
- hosts: localhost
strategy: linear
vars:
mitogen_version: 0.3.0rc1
mitogen_url: https://github.com/dw/mitogen/archive/v{{ mitogen_version }}.tar.gz
mitogen_version: 0.3.2
mitogen_url: https://github.com/mitogen-hq/mitogen/archive/refs/tags/v{{ mitogen_version }}.tar.gz
ansible_connection: local
tasks:
- name: Create mitogen plugin dir
@ -38,7 +38,12 @@
- name: add strategy to ansible.cfg
ini_file:
path: ansible.cfg
section: defaults
option: strategy
value: mitogen_linear
mode: 0644
section: "{{ item.section | d('defaults') }}"
option: "{{ item.option }}"
value: "{{ item.value }}"
with_items:
- option: strategy
value: mitogen_linear
- option: strategy_plugins
value: plugins/mitogen/ansible_mitogen/plugins/strategy
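After this loop runs, ansible.cfg should carry both keys; a sketch of the expected result (exact ordering may differ):

```ShellSession
$ grep -E '^strategy' ansible.cfg
strategy = mitogen_linear
strategy_plugins = plugins/mitogen/ansible_mitogen/plugins/strategy
```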

View file

@ -14,12 +14,16 @@ This role performs basic installation and setup of Gluster, but it does not conf
Available variables are listed below, along with default values (see `defaults/main.yml`):
glusterfs_default_release: ""
```yaml
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```yaml
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
@ -29,9 +33,11 @@ None.
## Example Playbook
```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).

View file

@ -3,6 +3,7 @@
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
mode: 0644
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}

View file

@ -2,6 +2,13 @@ all:
vars:
heketi_admin_key: "11elfeinhundertundelf"
heketi_user_key: "!!einseinseins"
glusterfs_daemonset:
readiness_probe:
timeout_seconds: 3
initial_delay_seconds: 3
liveness_probe:
timeout_seconds: 3
initial_delay_seconds: 10
children:
k8s_cluster:
vars:

View file

@ -5,7 +5,7 @@
changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
when: "clusterrolebinding_state.stdout | length > 0"
when: "clusterrolebinding_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- name: Get clusterrolebindings again
@ -31,7 +31,7 @@
mode: 0644
- name: "Deploy Heketi config secret"
when: "secret_state.stdout | length > 0"
when: "secret_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- name: Get the heketi-config-secret secret again
@ -41,5 +41,5 @@
- name: Make sure the heketi-config-secret secret exists now
assert:
that: "secret_state.stdout != \"\""
that: "secret_state.stdout | length > 0"
msg: "Heketi config secret is not present."

View file

@ -73,8 +73,8 @@
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",
@ -84,8 +84,8 @@
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 10,
"timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",

View file

@ -9,7 +9,8 @@ This script has two features:
(2) Deploy local container registry and register the container images to the registry.
Step(1) should be done on an online site as preparation, then we bring the obtained images
to the target offline environment.
to the target offline environment. If images are from a private registry,
you need to set the `PRIVATE_REGISTRY` environment variable.
Then we will run step(2) for registering the images to the local registry.
Step(1) can be operated with:
@ -28,16 +29,37 @@ manage-offline-container-images.sh register
This script generates the list of downloaded files and the list of container images by `roles/download/defaults/main.yml` file.
Run this script will generates three files, all downloaded files url in files.list, all container images in images.list, all component version in generate.sh.
Running this script will execute the `generate_list.yml` playbook in the kubespray root directory and generate four files:
all downloaded file URLs in files.list, all container images in images.list, and jinja2 templates in *.template.
```shell
bash generate_list.sh
./generate_list.sh
tree temp
temp
├── files.list
├── generate.sh
└── images.list
0 directories, 3 files
├── files.list.template
├── images.list
└── images.list.template
0 directories, 5 files
```
In some cases you may want to update some component version, you can edit `generate.sh` file, then run `bash generate.sh | grep 'https' > files.list` to update file.list or run `bash generate.sh | grep -v 'https'> images.list` to update images.list.
In some cases you may want to update a component version. You can declare version variables in the Ansible inventory file or group_vars,
then run `./generate_list.sh -i [inventory_file]` to update files.list and images.list.
## manage-offline-files.sh
This script will download all files according to `temp/files.list` and run an nginx container to provide offline file downloads.
Step(1) generate `files.list`
```shell
./generate_list.sh
```
Step(2) download files and run nginx container
```shell
./manage-offline-files.sh
```
When the nginx container is running, it can be accessed at <http://127.0.0.1:8080/>.
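To download the files without starting the web server, the script honors a skip switch (variable name from `manage-offline-files.sh`, shown further below):

```ShellSession
NO_HTTP_SERVER=1 ./manage-offline-files.sh   # fetch files and build the archive only
```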

contrib/offline/generate_list.sh Normal file → Executable file
View file

@ -5,53 +5,29 @@ CURRENT_DIR=$(cd $(dirname $0); pwd)
TEMP_DIR="${CURRENT_DIR}/temp"
REPO_ROOT_DIR="${CURRENT_DIR%/contrib/offline}"
: ${IMAGE_ARCH:="amd64"}
: ${ANSIBLE_SYSTEM:="linux"}
: ${ANSIBLE_ARCHITECTURE:="x86_64"}
: ${DOWNLOAD_YML:="roles/download/defaults/main.yml"}
: ${KUBE_VERSION_YAML:="roles/kubespray-defaults/defaults/main.yaml"}
mkdir -p ${TEMP_DIR}
# ARCH used in convert {%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%} to {{arch}}
if [ "${IMAGE_ARCH}" != "amd64" ]; then ARCH="${IMAGE_ARCH}"; fi
cat > ${TEMP_DIR}/generate.sh << EOF
arch=${ARCH}
image_arch=${IMAGE_ARCH}
ansible_system=${ANSIBLE_SYSTEM}
ansible_architecture=${ANSIBLE_ARCHITECTURE}
EOF
# generate all component version by $DOWNLOAD_YML
grep 'kube_version:' ${REPO_ROOT_DIR}/${KUBE_VERSION_YAML} \
| sed 's/: /=/g' >> ${TEMP_DIR}/generate.sh
grep '_version:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
sed -i 's/kube_major_version=.*/kube_major_version=${kube_version%.*}/g' ${TEMP_DIR}/generate.sh
sed -i 's/crictl_version=.*/crictl_version=${kube_version%.*}.0/g' ${TEMP_DIR}/generate.sh
# generate all download files url
# generate all download files url template
grep 'download_url:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed 's/: /=/g;s/ //g;s/{{/${/g;s/}}/}/g;s/|lower//g;s/^.*_url=/echo /g' >> ${TEMP_DIR}/generate.sh
| sed 's/^.*_url: //g;s/\"//g' > ${TEMP_DIR}/files.list.template
# generate all images list
grep -E '_repo:|_tag:' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed "s#{%- if image_arch != 'amd64' -%}-{{ image_arch }}{%- endif -%}#{{arch}}#g" \
| sed 's/: /=/g;s/{{/${/g;s/}}/}/g' | tr -d ' ' >> ${TEMP_DIR}/generate.sh
# generate all images list template
sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' | sed 's/{{/${/g;s/}}/}/g' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/^/echo /g' >> ${TEMP_DIR}/generate.sh
| sed -n "s/repo: //p;s/tag: //p" | tr -d ' ' \
| sed 'N;s#\n# #g' | tr ' ' ':' | sed 's/\"//g' > ${TEMP_DIR}/images.list.template
# special handling for https://github.com/kubernetes-sigs/kubespray/pull/7570
sed -i 's#^coredns_image_repo=.*#coredns_image_repo=${kube_image_repo}$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n /coredns/coredns;else echo -n /coredns; fi)#' ${TEMP_DIR}/generate.sh
sed -i 's#^coredns_image_tag=.*#coredns_image_tag=$(if printf "%s\\n%s\\n" v1.21 ${kube_version%.*} | sort --check=quiet --version-sort; then echo -n ${coredns_version};else echo -n ${coredns_version/v/}; fi)#' ${TEMP_DIR}/generate.sh
# add kube-* images to images list
# add kube-* images to images list template
# Those container images are downloaded by kubeadm, then roles/download/defaults/main.yml
# doesn't contain those images. That is reason why here needs to put those images into the
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
echo "${KUBE_IMAGES}" | tr ' ' '\n' | xargs -L1 -I {} \
echo 'echo ${kube_image_repo}/{}:${kube_version}' >> ${TEMP_DIR}/generate.sh
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
done
# print files.list and images.list
bash ${TEMP_DIR}/generate.sh | grep 'https' | sort > ${TEMP_DIR}/files.list
bash ${TEMP_DIR}/generate.sh | grep -v 'https' | sort > ${TEMP_DIR}/images.list
# run ansible to expand templates
/bin/cp ${CURRENT_DIR}/generate_list.yml ${REPO_ROOT_DIR}
(cd ${REPO_ROOT_DIR} && ansible-playbook $* generate_list.yml && /bin/rm generate_list.yml) || exit 1

View file

@ -0,0 +1,19 @@
---
- hosts: localhost
become: no
roles:
# Just load default variables from roles.
- role: kubespray-defaults
when: false
- role: download
when: false
tasks:
# Generate files.list and images.list files from templates.
- template:
src: ./contrib/offline/temp/{{ item }}.list.template
dest: ./contrib/offline/temp/{{ item }}.list
with_items:
- files
- images

View file

@ -15,7 +15,7 @@ function create_container_image_tar() {
IMAGES=$(kubectl describe pods --all-namespaces | grep " Image:" | awk '{print $2}' | sort | uniq)
# NOTE: etcd and pause cannot be seen as pods.
# The pause image is used for --pod-infra-container-image option of kubelet.
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|k8s.gcr.io/pause:" | sed s@\"@@g)
EXT_IMAGES=$(kubectl cluster-info dump | egrep "quay.io/coreos/etcd:|registry.k8s.io/pause:" | sed s@\"@@g)
IMAGES="${IMAGES} ${EXT_IMAGES}"
rm -f ${IMAGE_TAR_FILE}
@ -46,15 +46,16 @@ function create_container_image_tar() {
# NOTE: Here removes the following repo parts from each image
# so that these parts will be replaced with Kubespray.
# - kube_image_repo: "k8s.gcr.io"
# - kube_image_repo: "registry.k8s.io"
# - gcr_image_repo: "gcr.io"
# - docker_image_repo: "docker.io"
# - quay_image_repo: "quay.io"
FIRST_PART=$(echo ${image} | awk -F"/" '{print $1}')
if [ "${FIRST_PART}" = "k8s.gcr.io" ] ||
if [ "${FIRST_PART}" = "registry.k8s.io" ] ||
[ "${FIRST_PART}" = "gcr.io" ] ||
[ "${FIRST_PART}" = "docker.io" ] ||
[ "${FIRST_PART}" = "quay.io" ]; then
[ "${FIRST_PART}" = "quay.io" ] ||
[ "${FIRST_PART}" = "${PRIVATE_REGISTRY}" ]; then
image=$(echo ${image} | sed s@"${FIRST_PART}/"@@)
fi
echo "${FILE_NAME} ${image}" >> ${IMAGE_LIST}
@ -152,7 +153,8 @@ else
echo "(2) Deploy local container registry and register the container images to the registry."
echo ""
echo "Step(1) should be done online site as a preparation, then we bring"
echo "the gotten images to the target offline environment."
echo "the gotten images to the target offline environment. if images are from"
echo "a private registry, you need to set PRIVATE_REGISTRY environment variable."
echo "Then we will run step(2) for registering the images to local registry."
echo ""
echo "${IMAGE_TAR_FILE} is created to contain your container images."

View file

@ -0,0 +1,44 @@
#!/bin/bash
CURRENT_DIR=$( dirname "$(readlink -f "$0")" )
OFFLINE_FILES_DIR_NAME="offline-files"
OFFLINE_FILES_DIR="${CURRENT_DIR}/${OFFLINE_FILES_DIR_NAME}"
OFFLINE_FILES_ARCHIVE="${CURRENT_DIR}/offline-files.tar.gz"
FILES_LIST=${FILES_LIST:-"${CURRENT_DIR}/temp/files.list"}
NGINX_PORT=8080
# download files
if [ ! -f "${FILES_LIST}" ]; then
echo "${FILES_LIST} should exist, run ./generate_list.sh first."
exit 1
fi
rm -rf "${OFFLINE_FILES_DIR}"
rm "${OFFLINE_FILES_ARCHIVE}"
mkdir "${OFFLINE_FILES_DIR}"
wget -x -P "${OFFLINE_FILES_DIR}" -i "${FILES_LIST}"
tar -czvf "${OFFLINE_FILES_ARCHIVE}" "${OFFLINE_FILES_DIR_NAME}"
[ -n "$NO_HTTP_SERVER" ] && echo "skip to run nginx" && exit 0
# run nginx container server
if command -v nerdctl 1>/dev/null 2>&1; then
runtime="nerdctl"
elif command -v podman 1>/dev/null 2>&1; then
runtime="podman"
elif command -v docker 1>/dev/null 2>&1; then
runtime="docker"
else
echo "No supported container runtime found"
exit 1
fi
sudo "${runtime}" container inspect nginx >/dev/null 2>&1
if [ $? -ne 0 ]; then
sudo "${runtime}" run \
--restart=always -d -p ${NGINX_PORT}:80 \
--volume "${OFFLINE_FILES_DIR}:/usr/share/nginx/html/download" \
--volume "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf \
--name nginx nginx:alpine
fi

View file

@ -0,0 +1,39 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
include /etc/nginx/default.d/*.conf;
location / {
root /usr/share/nginx/html/download;
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}

View file

@ -20,4 +20,4 @@
"'ufw.service' in services"
when:
- disable_service_firewall
- disable_service_firewall is defined and disable_service_firewall

View file

@ -36,8 +36,7 @@ terraform apply -var-file=credentials.tfvars
```
- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh using bastion host use generated ssh-bastion.conf.
Ansible automatically detects bastion and changes ssh_args
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts with ssh through the bastion host, use the generated `ssh-bastion.conf`. Ansible automatically detects the bastion and changes `ssh_args`
```commandline
ssh -F ./ssh-bastion.conf user@$ip

View file

@ -20,20 +20,20 @@ module "aws-vpc" {
aws_cluster_name = var.aws_cluster_name
aws_vpc_cidr_block = var.aws_vpc_cidr_block
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_avail_zones = data.aws_availability_zones.available.names
aws_cidr_subnets_private = var.aws_cidr_subnets_private
aws_cidr_subnets_public = var.aws_cidr_subnets_public
default_tags = var.default_tags
}
module "aws-elb" {
source = "./modules/elb"
module "aws-nlb" {
source = "./modules/nlb"
aws_cluster_name = var.aws_cluster_name
aws_vpc_id = module.aws-vpc.aws_vpc_id
aws_avail_zones = slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names))
aws_avail_zones = data.aws_availability_zones.available.names
aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
aws_elb_api_port = var.aws_elb_api_port
aws_nlb_api_port = var.aws_nlb_api_port
k8s_secure_api_port = var.k8s_secure_api_port
default_tags = var.default_tags
}
@ -54,7 +54,6 @@ resource "aws_instance" "bastion-server" {
instance_type = var.aws_bastion_size
count = var.aws_bastion_num
associate_public_ip_address = true
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_public, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@ -79,8 +78,7 @@ resource "aws_instance" "k8s-master" {
count = var.aws_kube_master_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@ -98,10 +96,10 @@ resource "aws_instance" "k8s-master" {
}))
}
resource "aws_elb_attachment" "attach_master_nodes" {
count = var.aws_kube_master_num
elb = module.aws-elb.aws_elb_api_id
instance = element(aws_instance.k8s-master.*.id, count.index)
resource "aws_lb_target_group_attachment" "tg-attach_master_nodes" {
count = var.aws_kube_master_num
target_group_arn = module.aws-nlb.aws_nlb_api_tg_arn
target_id = element(aws_instance.k8s-master.*.private_ip, count.index)
}
resource "aws_instance" "k8s-etcd" {
@ -110,8 +108,7 @@ resource "aws_instance" "k8s-etcd" {
count = var.aws_etcd_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@ -134,8 +131,7 @@ resource "aws_instance" "k8s-worker" {
count = var.aws_kube_worker_num
availability_zone = element(slice(data.aws_availability_zones.available.names, 0, length(var.aws_cidr_subnets_public) <= length(data.aws_availability_zones.available.names) ? length(var.aws_cidr_subnets_public) : length(data.aws_availability_zones.available.names)), count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)
vpc_security_group_ids = module.aws-vpc.aws_security_group
@ -168,7 +164,7 @@ data "template_file" "inventory" {
list_node = join("\n", aws_instance.k8s-worker.*.private_dns)
connection_strings_etcd = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
list_etcd = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
elb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-elb.aws_elb_api_fqdn}\""
nlb_api_fqdn = "apiserver_loadbalancer_domain_name=\"${module.aws-nlb.aws_nlb_api_fqdn}\""
}
}


@ -1,57 +0,0 @@
resource "aws_security_group" "aws-elb" {
name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
vpc_id = var.aws_vpc_id
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-securitygroup-elb"
}))
}
resource "aws_security_group_rule" "aws-allow-api-access" {
type = "ingress"
from_port = var.aws_elb_api_port
to_port = var.k8s_secure_api_port
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
resource "aws_security_group_rule" "aws-allow-api-egress" {
type = "egress"
from_port = 0
to_port = 65535
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.aws-elb.id
}
# Create a new AWS ELB for K8S API
resource "aws_elb" "aws-elb-api" {
name = "kubernetes-elb-${var.aws_cluster_name}"
subnets = var.aws_subnet_ids_public
security_groups = [aws_security_group.aws-elb.id]
listener {
instance_port = var.k8s_secure_api_port
instance_protocol = "tcp"
lb_port = var.aws_elb_api_port
lb_protocol = "tcp"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTPS:${var.k8s_secure_api_port}/healthz"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-elb-api"
}))
}


@ -1,7 +0,0 @@
output "aws_elb_api_id" {
value = aws_elb.aws-elb-api.id
}
output "aws_elb_api_fqdn" {
value = aws_elb.aws-elb-api.dns_name
}


@ -0,0 +1,41 @@
# Create a new AWS NLB for K8S API
resource "aws_lb" "aws-nlb-api" {
name = "kubernetes-nlb-${var.aws_cluster_name}"
load_balancer_type = "network"
subnets = length(var.aws_subnet_ids_public) <= length(var.aws_avail_zones) ? var.aws_subnet_ids_public : slice(var.aws_subnet_ids_public, 0, length(var.aws_avail_zones))
idle_timeout = 400
enable_cross_zone_load_balancing = true
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-nlb-api"
}))
}
# Create a new AWS NLB Instance Target Group
resource "aws_lb_target_group" "aws-nlb-api-tg" {
name = "kubernetes-nlb-tg-${var.aws_cluster_name}"
port = var.k8s_secure_api_port
protocol = "TCP"
target_type = "ip"
vpc_id = var.aws_vpc_id
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
interval = 30
protocol = "HTTPS"
path = "/healthz"
}
}
# Create a new AWS NLB Listener listen to target group
resource "aws_lb_listener" "aws-nlb-api-listener" {
load_balancer_arn = aws_lb.aws-nlb-api.arn
port = var.aws_nlb_api_port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.aws-nlb-api-tg.arn
}
}


@ -0,0 +1,11 @@
output "aws_nlb_api_id" {
value = aws_lb.aws-nlb-api.id
}
output "aws_nlb_api_fqdn" {
value = aws_lb.aws-nlb-api.dns_name
}
output "aws_nlb_api_tg_arn" {
value = aws_lb_target_group.aws-nlb-api-tg.arn
}


@ -6,8 +6,8 @@ variable "aws_vpc_id" {
description = "AWS VPC ID"
}
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {


@ -25,13 +25,14 @@ resource "aws_internet_gateway" "cluster-vpc-internetgw" {
resource "aws_subnet" "cluster-vpc-subnets-public" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_public)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_public, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}))
}
@ -43,12 +44,14 @@ resource "aws_nat_gateway" "cluster-nat-gateway" {
resource "aws_subnet" "cluster-vpc-subnets-private" {
vpc_id = aws_vpc.cluster-vpc.id
count = length(var.aws_avail_zones)
availability_zone = element(var.aws_avail_zones, count.index)
count = length(var.aws_cidr_subnets_private)
availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
cidr_block = element(var.aws_cidr_subnets_private, count.index)
tags = merge(var.default_tags, tomap({
Name = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
"kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}))
}


@ -14,8 +14,8 @@ output "etcd" {
value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
}
output "aws_elb_api_fqdn" {
value = "${module.aws-elb.aws_elb_api_fqdn}:${var.aws_elb_api_port}"
output "aws_nlb_api_fqdn" {
value = "${module.aws-nlb.aws_nlb_api_fqdn}:${var.aws_nlb_api_port}"
}
output "inventory" {


@ -33,9 +33,9 @@ aws_kube_worker_size = "t2.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
#Settings AWS NLB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443


@ -24,4 +24,4 @@ kube_control_plane
calico_rr
[k8s_cluster:vars]
${elb_api_fqdn}
${nlb_api_fqdn}
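
Once rendered, the new `nlb_api_fqdn` entry from the `template_file` inventory data source above expands to a single inventory variable; the DNS name below is purely illustrative:

```ini
[k8s_cluster:vars]
apiserver_loadbalancer_domain_name="kubernetes-nlb-mycluster-0123456789abcdef.elb.eu-west-1.amazonaws.com"
```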


@ -32,7 +32,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = {


@ -25,7 +25,7 @@ aws_kube_worker_size = "t3.medium"
aws_kube_worker_disk_size = 50
#Settings AWS ELB
aws_elb_api_port = 6443
aws_nlb_api_port = 6443
k8s_secure_api_port = 6443
default_tags = { }


@ -104,11 +104,11 @@ variable "aws_kube_worker_size" {
}
/*
* AWS ELB Settings
* AWS NLB Settings
*
*/
variable "aws_elb_api_port" {
description = "Port for AWS ELB"
variable "aws_nlb_api_port" {
description = "Port for AWS NLB"
}
variable "k8s_secure_api_port" {


@ -31,9 +31,7 @@ The setup looks like following
## Requirements
* Terraform 0.13.0 or newer
*0.12 also works if you modify the provider block to include version and remove all `versions.tf` files*
* Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include version and remove all `versions.tf` files)
## Quickstart


@ -3,8 +3,8 @@ provider "exoscale" {}
module "kubernetes" {
source = "./modules/kubernetes-cluster"
prefix = var.prefix
prefix = var.prefix
zone = var.zone
machines = var.machines
ssh_public_keys = var.ssh_public_keys


@ -74,14 +74,23 @@ ansible-playbook -i contrib/terraform/gcs/inventory.ini cluster.yml -b -v
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to ingress on ports 80 and 443
### Optional
* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*
* `master_sa_email`: Service account email to use for the master nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account email to use for the master nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_sa_email`: Service account email to use for the control plane nodes *(Defaults to `""`, auto generate one)*
* `master_sa_scopes`: Service account scopes to use for the control plane nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `master_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the control plane nodes *(Defaults to `false`)*
* `master_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the control plane nodes *(Defaults to `"pd-ssd"`)*
* `worker_sa_email`: Service account email to use for the worker nodes *(Defaults to `""`, auto generate one)*
* `worker_sa_scopes`: Service account scopes to use for the worker nodes *(Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)*
* `worker_preemptible`: Enable [preemptible](https://cloud.google.com/compute/docs/instances/preemptible)
for the worker nodes *(Defaults to `false`)*
* `worker_additional_disk_type`: [Disk type](https://cloud.google.com/compute/docs/disks/#disk-types)
for extra disks added on the worker nodes *(Defaults to `"pd-ssd"`)*
An example variables file can be found in `tfvars.json`.
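
For instance, the new optional variables could be set in a tfvars JSON file as follows (all values here are illustrative, not defaults):

```json
{
  "master_preemptible": true,
  "master_additional_disk_type": "pd-standard",
  "worker_preemptible": false,
  "worker_additional_disk_type": "pd-ssd",
  "ingress_whitelist": ["0.0.0.0/0"]
}
```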


@ -1,8 +1,16 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
provider "google" {
credentials = file(var.keyfile_location)
region = var.region
project = var.gcp_project_id
version = "~> 3.48"
}
module "kubernetes" {
@ -13,12 +21,17 @@ module "kubernetes" {
machines = var.machines
ssh_pub_key = var.ssh_pub_key
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
master_sa_email = var.master_sa_email
master_sa_scopes = var.master_sa_scopes
master_preemptible = var.master_preemptible
master_additional_disk_type = var.master_additional_disk_type
worker_sa_email = var.worker_sa_email
worker_sa_scopes = var.worker_sa_scopes
worker_preemptible = var.worker_preemptible
worker_additional_disk_type = var.worker_additional_disk_type
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
}


@ -5,6 +5,8 @@
resource "google_compute_network" "main" {
name = "${var.prefix}-network"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "main" {
@ -20,6 +22,8 @@ resource "google_compute_firewall" "deny_all" {
priority = 1000
source_ranges = ["0.0.0.0/0"]
deny {
protocol = "all"
}
@ -39,6 +43,8 @@ resource "google_compute_firewall" "allow_internal" {
}
resource "google_compute_firewall" "ssh" {
count = length(var.ssh_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-ssh-firewall"
network = google_compute_network.main.name
@ -53,6 +59,8 @@ resource "google_compute_firewall" "ssh" {
}
resource "google_compute_firewall" "api_server" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-api-server-firewall"
network = google_compute_network.main.name
@ -67,6 +75,8 @@ resource "google_compute_firewall" "api_server" {
}
resource "google_compute_firewall" "nodeport" {
count = length(var.nodeport_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-nodeport-firewall"
network = google_compute_network.main.name
@ -81,11 +91,15 @@ resource "google_compute_firewall" "nodeport" {
}
resource "google_compute_firewall" "ingress_http" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-http-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["80"]
@ -93,11 +107,15 @@ resource "google_compute_firewall" "ingress_http" {
}
resource "google_compute_firewall" "ingress_https" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-https-ingress-firewall"
network = google_compute_network.main.name
priority = 100
source_ranges = var.ingress_whitelist
allow {
protocol = "tcp"
ports = ["443"]
@ -173,7 +191,7 @@ resource "google_compute_disk" "master" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.master_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@ -229,19 +247,28 @@ resource "google_compute_instance" "master" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.master_preemptible
automatic_restart = !var.master_preemptible
}
}
resource "google_compute_forwarding_rule" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-forward-rule"
port_range = "6443"
target = google_compute_target_pool.master_lb.id
target = google_compute_target_pool.master_lb[count.index].id
}
resource "google_compute_target_pool" "master_lb" {
count = length(var.api_server_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-master-lb-pool"
instances = local.master_target_list
}
@ -258,7 +285,7 @@ resource "google_compute_disk" "worker" {
}
name = "${var.prefix}-${each.key}"
type = "pd-ssd"
type = var.worker_additional_disk_type
zone = each.value.machine.zone
size = each.value.disk_size
@ -326,35 +353,48 @@ resource "google_compute_instance" "worker" {
# Since we use google_compute_attached_disk we need to ignore this
lifecycle {
ignore_changes = ["attached_disk"]
ignore_changes = [attached_disk]
}
scheduling {
preemptible = var.worker_preemptible
automatic_restart = !var.worker_preemptible
}
}
resource "google_compute_address" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-address"
address_type = "EXTERNAL"
region = var.region
}
resource "google_compute_forwarding_rule" "worker_http_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-http-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "80"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_forwarding_rule" "worker_https_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-https-lb-forward-rule"
ip_address = google_compute_address.worker_lb.address
ip_address = google_compute_address.worker_lb[count.index].address
port_range = "443"
target = google_compute_target_pool.worker_lb.id
target = google_compute_target_pool.worker_lb[count.index].id
}
resource "google_compute_target_pool" "worker_lb" {
count = length(var.ingress_whitelist) > 0 ? 1 : 0
name = "${var.prefix}-worker-lb-pool"
instances = local.worker_target_list
}


@ -19,9 +19,9 @@ output "worker_ip_addresses" {
}
output "ingress_controller_lb_ip_address" {
value = google_compute_address.worker_lb.address
value = length(var.ingress_whitelist) > 0 ? google_compute_address.worker_lb.0.address : ""
}
output "control_plane_lb_ip_address" {
value = google_compute_forwarding_rule.master_lb.ip_address
value = length(var.api_server_whitelist) > 0 ? google_compute_forwarding_rule.master_lb.0.ip_address : ""
}


@ -27,6 +27,14 @@ variable "master_sa_scopes" {
type = list(string)
}
variable "master_preemptible" {
type = bool
}
variable "master_additional_disk_type" {
type = string
}
variable "worker_sa_email" {
type = string
}
@ -35,6 +43,14 @@ variable "worker_sa_scopes" {
type = list(string)
}
variable "worker_preemptible" {
type = bool
}
variable "worker_additional_disk_type" {
type = string
}
variable "ssh_pub_key" {}
variable "ssh_whitelist" {
@ -49,6 +65,11 @@ variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}
variable "private_network_cidr" {
default = "10.0.10.0/24"
}


@ -16,6 +16,9 @@
"nodeport_whitelist": [
"1.2.3.4/32"
],
"ingress_whitelist": [
"0.0.0.0/0"
],
"machines": {
"master-0": {
@ -24,7 +27,7 @@
"zone": "us-central1-a",
"additional_disks": {},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@ -38,7 +41,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
},
@ -52,7 +55,7 @@
}
},
"boot_disk": {
"image_name": "ubuntu-os-cloud/ubuntu-1804-bionic-v20201116",
"image_name": "ubuntu-os-cloud/ubuntu-2004-focal-v20220118",
"size": 50
}
}


@ -44,6 +44,16 @@ variable "master_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "master_preemptible" {
type = bool
default = false
}
variable "master_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable "worker_sa_email" {
type = string
default = ""
@ -54,6 +64,16 @@ variable "worker_sa_scopes" {
default = ["https://www.googleapis.com/auth/cloud-platform"]
}
variable "worker_preemptible" {
type = bool
default = false
}
variable "worker_additional_disk_type" {
type = string
default = "pd-ssd"
}
variable ssh_pub_key {
description = "Path to public SSH key file which is injected into the VMs."
type = string
@ -70,3 +90,8 @@ variable api_server_whitelist {
variable nodeport_whitelist {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
default = ["0.0.0.0/0"]
}


@ -0,0 +1,108 @@
# Kubernetes on Hetzner with Terraform
Provision a Kubernetes cluster on [Hetzner](https://www.hetzner.com/cloud) using Terraform and Kubespray
## Overview
The setup looks like the following:
```text
Kubernetes cluster
+--------------------------+
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Master/etcd | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
| ^ |
| | |
| v |
| +--------------+ |
| | +--------------+ |
| --> | | | |
| | | Worker | |
| | | node(s) | |
| +-+ | |
| +--------------+ |
+--------------------------+
```
The nodes use a private network for node-to-node communication and a public interface for all external communication.
## Requirements
* Terraform 0.14.0 or newer
## Quickstart
NOTE: Assumes you are at the root of the kubespray repo.
For authentication with Hetzner Cloud you can use environment variables:
```bash
export HCLOUD_TOKEN=api-token
```
Copy the cluster configuration file.
```bash
CLUSTER=my-hetzner-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/hetzner/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```
Edit `default.tfvars` to match your requirements.
Run Terraform to create the infrastructure.
```bash
terraform init ../../contrib/terraform/hetzner
terraform apply --var-file default.tfvars ../../contrib/terraform/hetzner/
```
You should now have an inventory file named `inventory.ini` that you can use with kubespray to set up a cluster.
It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by:
```bash
ansible -i inventory.ini -m ping all
```
You can set up Kubernetes with kubespray using the generated inventory:
```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```
## Cloud controller
For better integration with the cloud you can install the [hcloud cloud controller](https://github.com/hetznercloud/hcloud-cloud-controller-manager) and [CSI driver](https://github.com/hetznercloud/csi-driver).
Please read the instructions in both repos on how to install them.
## Teardown
You can tear down your infrastructure using the following Terraform command:
```bash
terraform destroy --var-file default.tfvars ../../contrib/terraform/hetzner
```
## Variables
* `prefix`: Prefix to add to all resources; if set to `""`, no prefix is added
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `network_zone`: The network zone where the cluster is running
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
* `node_type`: The role of this node *(master|worker)*
* `size`: Size of the VM
* `image`: The image to use for the VM
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to ssh to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on port 30000-32767 (kubernetes nodeports)
* `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to kubernetes workers on ports 80 and 443


@ -0,0 +1,44 @@
prefix = "default"
zone = "hel1"
network_zone = "eu-central"
inventory_file = "inventory.ini"
ssh_public_keys = [
# Put your public SSH key here
"ssh-rsa I-did-not-read-the-docs",
"ssh-rsa I-did-not-read-the-docs 2",
]
machines = {
"master-0" : {
"node_type" : "master",
"size" : "cx21",
"image" : "ubuntu-20.04",
},
"worker-0" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
},
"worker-1" : {
"node_type" : "worker",
"size" : "cx21",
"image" : "ubuntu-20.04",
}
}
nodeport_whitelist = [
"0.0.0.0/0"
]
ingress_whitelist = [
"0.0.0.0/0"
]
ssh_whitelist = [
"0.0.0.0/0"
]
api_server_whitelist = [
"0.0.0.0/0"
]


@ -0,0 +1,52 @@
provider "hcloud" {}
module "kubernetes" {
source = "./modules/kubernetes-cluster"
prefix = var.prefix
zone = var.zone
machines = var.machines
ssh_public_keys = var.ssh_public_keys
network_zone = var.network_zone
ssh_whitelist = var.ssh_whitelist
api_server_whitelist = var.api_server_whitelist
nodeport_whitelist = var.nodeport_whitelist
ingress_whitelist = var.ingress_whitelist
}
#
# Generate ansible inventory
#
data "template_file" "inventory" {
template = file("${path.module}/templates/inventory.tpl")
vars = {
connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
keys(module.kubernetes.master_ip_addresses),
values(module.kubernetes.master_ip_addresses).*.public_ip,
values(module.kubernetes.master_ip_addresses).*.private_ip,
range(1, length(module.kubernetes.master_ip_addresses) + 1)))
connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
keys(module.kubernetes.worker_ip_addresses),
values(module.kubernetes.worker_ip_addresses).*.public_ip,
values(module.kubernetes.worker_ip_addresses).*.private_ip))
list_master = join("\n", keys(module.kubernetes.master_ip_addresses))
list_worker = join("\n", keys(module.kubernetes.worker_ip_addresses))
network_id = module.kubernetes.network_id
}
}
resource "null_resource" "inventories" {
provisioner "local-exec" {
command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
}
triggers = {
template = data.template_file.inventory.rendered
}
}


@ -0,0 +1,122 @@
resource "hcloud_network" "kubernetes" {
name = "${var.prefix}-network"
ip_range = var.private_network_cidr
}
resource "hcloud_network_subnet" "kubernetes" {
type = "cloud"
network_id = hcloud_network.kubernetes.id
network_zone = var.network_zone
ip_range = var.private_subnet_cidr
}
resource "hcloud_server" "master" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "master"
}
name = "${var.prefix}-${each.key}"
image = each.value.image
server_type = each.value.size
location = var.zone
user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
ssh_public_keys = var.ssh_public_keys
}
)
firewall_ids = [hcloud_firewall.master.id]
}
resource "hcloud_server_network" "master" {
for_each = hcloud_server.master
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
resource "hcloud_server" "worker" {
for_each = {
for name, machine in var.machines :
name => machine
if machine.node_type == "worker"
}
name = "${var.prefix}-${each.key}"
image = each.value.image
server_type = each.value.size
location = var.zone
user_data = templatefile(
"${path.module}/templates/cloud-init.tmpl",
{
ssh_public_keys = var.ssh_public_keys
}
)
firewall_ids = [hcloud_firewall.worker.id]
}
resource "hcloud_server_network" "worker" {
for_each = hcloud_server.worker
server_id = each.value.id
subnet_id = hcloud_network_subnet.kubernetes.id
}
resource "hcloud_firewall" "master" {
name = "${var.prefix}-master-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "6443"
source_ips = var.api_server_whitelist
}
}
resource "hcloud_firewall" "worker" {
name = "${var.prefix}-worker-firewall"
rule {
direction = "in"
protocol = "tcp"
port = "22"
source_ips = var.ssh_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "80"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "443"
source_ips = var.ingress_whitelist
}
rule {
direction = "in"
protocol = "tcp"
port = "30000-32767"
source_ips = var.nodeport_whitelist
}
}


@ -0,0 +1,27 @@
output "master_ip_addresses" {
value = {
for key, instance in hcloud_server.master :
instance.name => {
"private_ip" = hcloud_server_network.master[key].ip
"public_ip" = hcloud_server.master[key].ipv4_address
}
}
}
output "worker_ip_addresses" {
value = {
for key, instance in hcloud_server.worker :
instance.name => {
"private_ip" = hcloud_server_network.worker[key].ip
"public_ip" = hcloud_server.worker[key].ipv4_address
}
}
}
output "cluster_private_network_cidr" {
value = var.private_subnet_cidr
}
output "network_id" {
value = hcloud_network.kubernetes.id
}


@ -0,0 +1,17 @@
#cloud-config
users:
- default
- name: ubuntu
shell: /bin/bash
sudo: "ALL=(ALL) NOPASSWD:ALL"
ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
- ${ssh_public_key}
%{ endfor ~}
ssh_authorized_keys:
%{ for ssh_public_key in ssh_public_keys ~}
- ${ssh_public_key}
%{ endfor ~}


@ -0,0 +1,44 @@
variable "zone" {
type = string
}
variable "prefix" {}
variable "machines" {
type = map(object({
node_type = string
size = string
image = string
}))
}
variable "ssh_public_keys" {
type = list(string)
}
variable "ssh_whitelist" {
type = list(string)
}
variable "api_server_whitelist" {
type = list(string)
}
variable "nodeport_whitelist" {
type = list(string)
}
variable "ingress_whitelist" {
type = list(string)
}
variable "private_network_cidr" {
default = "10.0.0.0/16"
}
variable "private_subnet_cidr" {
default = "10.0.10.0/24"
}
variable "network_zone" {
default = "eu-central"
}

View file

@ -0,0 +1,9 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "1.31.1"
}
}
required_version = ">= 0.14"
}


@ -0,0 +1,7 @@
output "master_ips" {
value = module.kubernetes.master_ip_addresses
}
output "worker_ips" {
value = module.kubernetes.worker_ip_addresses
}


@ -0,0 +1,19 @@
[all]
${connection_strings_master}
${connection_strings_worker}
[kube_control_plane]
${list_master}
[etcd]
${list_master}
[kube_node]
${list_worker}
[k8s_cluster:children]
kube_control_plane
kube_node
[k8s_cluster:vars]
network_id=${network_id}
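
Rendered with one master and one worker, the template above yields an inventory along these lines (names, addresses, and the network id are illustrative):

```ini
[all]
default-master-0 ansible_user=ubuntu ansible_host=203.0.113.10 ip=10.0.10.2 etcd_member_name=etcd1
default-worker-0 ansible_user=ubuntu ansible_host=203.0.113.11 ip=10.0.10.3

[kube_control_plane]
default-master-0

[etcd]
default-master-0

[kube_node]
default-worker-0

[k8s_cluster:children]
kube_control_plane
kube_node

[k8s_cluster:vars]
network_id=1234567
```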


@ -0,0 +1,50 @@
variable "zone" {
description = "The zone where to run the cluster"
}
variable "network_zone" {
description = "The network zone where the cluster is running"
default = "eu-central"
}
variable "prefix" {
description = "Prefix for resource names"
default = "default"
}
variable "machines" {
description = "Cluster machines"
type = map(object({
node_type = string
size = string
image = string
}))
}
variable "ssh_public_keys" {
description = "Public SSH key which are injected into the VMs."
type = list(string)
}
variable "ssh_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for ssh"
type = list(string)
}
variable "api_server_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for kubernetes api server"
type = list(string)
}
variable "nodeport_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for kubernetes nodeports"
type = list(string)
}
variable "ingress_whitelist" {
description = "List of IP ranges (CIDR) to whitelist for HTTP"
type = list(string)
}
variable "inventory_file" {
description = "Where to store the generated inventory file"
}


@ -0,0 +1,15 @@
terraform {
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "1.31.1"
}
null = {
source = "hashicorp/null"
}
template = {
source = "hashicorp/template"
}
}
required_version = ">= 0.14"
}


@ -35,7 +35,7 @@ now six total etcd replicas.
## Requirements
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- Install dependencies: `sudo pip install -r requirements.txt`
- [Install Ansible dependencies](/docs/ansible.md#installing-ansible)
- Account with Equinix Metal
- An SSH key pair
@ -60,9 +60,9 @@ Terraform will be used to provision all of the Equinix Metal resources with base
Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):
```ShellSession
cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
cp -LRp contrib/terraform/metal/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/packet/hosts
ln -s ../../contrib/terraform/metal/hosts
```
This will be the base for subsequent Terraform commands.
@ -101,7 +101,7 @@ This helps when identifying which hosts are associated with each cluster.
While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:
- cluster_name = the name of the inventory directory created above as $CLUSTER
- packet_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
- metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
#### Enable localhost access
@ -119,7 +119,7 @@ Once the Kubespray playbooks are run, a Kubernetes configuration file will be wr
In the cluster's inventory folder, the following files might be created (either by Terraform
or manually); to prevent you from pushing them accidentally, they are in a
`.gitignore` file in the `terraform/packet` directory :
`.gitignore` file in the `terraform/metal` directory:
- `.terraform`
- `.tfvars`
@ -135,7 +135,7 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/packet
terraform init ../../contrib/terraform/metal
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
@ -146,7 +146,7 @@ You can apply the Terraform configuration to your cluster with the following com
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/metal
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts ../../cluster.yml
```
@ -156,7 +156,7 @@ ansible-playbook -i hosts ../../cluster.yml
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/metal
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:


@ -1,16 +1,15 @@
# Configure the Equinix Metal Provider
provider "packet" {
version = "~> 2.0"
provider "metal" {
}
resource "packet_ssh_key" "k8s" {
resource "metal_ssh_key" "k8s" {
count = var.public_key_path != "" ? 1 : 0
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
}
resource "packet_device" "k8s_master" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@ -18,12 +17,12 @@ resource "packet_device" "k8s_master" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
}
resource "packet_device" "k8s_master_no_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_master_no_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_masters_no_etcd
hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@ -31,12 +30,12 @@ resource "packet_device" "k8s_master_no_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
}
resource "packet_device" "k8s_etcd" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_etcd" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_etcd
hostname = "${var.cluster_name}-etcd-${count.index + 1}"
@ -44,12 +43,12 @@ resource "packet_device" "k8s_etcd" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "etcd"]
}
resource "packet_device" "k8s_node" {
depends_on = [packet_ssh_key.k8s]
resource "metal_device" "k8s_node" {
depends_on = [metal_ssh_key.k8s]
count = var.number_of_k8s_nodes
hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
@ -57,7 +56,7 @@ resource "packet_device" "k8s_node" {
facilities = [var.facility]
operating_system = var.operating_system
billing_cycle = var.billing_cycle
project_id = var.packet_project_id
project_id = var.metal_project_id
tags = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}


@ -0,0 +1,16 @@
output "k8s_masters" {
value = metal_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = metal_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = metal_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = metal_device.k8s_node.*.access_public_ipv4
}


@ -2,7 +2,7 @@
cluster_name = "mycluster"
# Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
packet_project_id = "Example-API-Token"
metal_project_id = "Example-API-Token"
# The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
# leave this value blank if the public key is already setup in the Equinix Metal project


@ -2,12 +2,12 @@ variable "cluster_name" {
default = "kubespray"
}
variable "packet_project_id" {
variable "metal_project_id" {
description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
}
variable "operating_system" {
default = "ubuntu_16_04"
default = "ubuntu_20_04"
}
variable "public_key_path" {
@ -24,23 +24,23 @@ variable "facility" {
}
variable "plan_k8s_masters" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_k8s_masters_no_etcd" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_etcd" {
default = "c2.medium.x86"
default = "c3.small.x86"
}
variable "plan_k8s_nodes" {
default = "c2.medium.x86"
default = "c3.medium.x86"
}
variable "number_of_k8s_masters" {
default = 0
default = 1
}
variable "number_of_k8s_masters_no_etcd" {
@ -52,6 +52,6 @@ variable "number_of_etcd" {
}
variable "number_of_k8s_nodes" {
default = 0
default = 1
}


@ -2,8 +2,8 @@
terraform {
required_version = ">= 0.12"
required_providers {
packet = {
source = "terraform-providers/packet"
metal = {
source = "equinix/metal"
}
}
}


@ -17,9 +17,10 @@ most modern installs of OpenStack that support the basic services.
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Open Telekom Cloud](https://cloud.telekom.de/) : requires to set the variable `wait_for_floatingip = "true"` in your cluster.tfvars
- [Open Telekom Cloud](https://cloud.telekom.de/)
- [OVH](https://www.ovh.com/)
- [Rackspace](https://www.rackspace.com/)
- [Safespring](https://www.safespring.com)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)
@ -247,10 +248,12 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example the first compute resource will be named `example-kubernetes-1`. |
|`az_list` | List of Availability Zones available in your OpenStack cluster. |
|`network_name` | The name to be given to the internal network that will be generated |
|`use_existing_network`| Use an existing network with the name of `network_name`. `false` by default |
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers`| An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`k8s_master_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to master nodes instead of creating new random floating IPs. |
|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to the bastion node instead of creating new random floating IPs. |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`,`flavor_k8s_node`,`flavor_etcd`, `flavor_bastion`,`flavor_gfs_node` | Flavor depends on your openstack installation, you can get available flavor IDs through `openstack flavor list` |
|`image`,`image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into glance. |
@ -267,26 +270,33 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods, empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR allowed to initiate an SSH connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`bastion_allowed_ports` | List of ports to open on bastion node, `[]` by default |
|`k8s_allowed_remote_ips` | List of CIDR allowed to initiate an SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`wait_for_floatingip` | Let Terraform poll the instance until the floating IP has been associated, `false` by default. |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for control_plane, 'Default' by default |
|`node_volume_type` | Volume type of the root volume for nodes, 'Default' by default |
|`gfs_root_volume_size_in_gb` | Size of the root volume for gluster, 0 to use ephemeral storage |
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
|`use_server_group` | Create and use openstack nova servergroups, default: false |
|`master_server_group_policy` | Enable and use openstack nova servergroups for masters with the given policy, default: "" (disabled) |
|`node_server_group_policy` | Enable and use openstack nova servergroups for nodes with the given policy, default: "" (disabled) |
|`etcd_server_group_policy` | Enable and use openstack nova servergroups for etcd with the given policy, default: "" (disabled) |
|`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1. |
|`port_security_enabled` | Allows disabling port security by setting this to `false`. `true` by default |
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_nodes` | Map containing worker node definition, see explanation below |
|`k8s_masters` | Map containing master node definition, see explanation for k8s_nodes and `sample-inventory/cluster.tfvars` |
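
As an illustration (the values are assumptions, not defaults), the new `bastion_allowed_ports` and server group policy variables might be set in `cluster.tfvars` like so:

```hcl
bastion_allowed_ports = [
  { "protocol" = "tcp", "port_range_min" = 8080, "port_range_max" = 8081, "remote_ip_prefix" = "0.0.0.0/0" },
]
master_server_group_policy = "anti-affinity"
node_server_group_policy   = "soft-anti-affinity"
```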
##### k8s_nodes
Allows a custom definition of worker nodes giving the operator full control over individual node flavor and
availability zone placement. To enable the use of this mode set the `number_of_k8s_nodes` and
`number_of_k8s_nodes_no_floating_ip` variables to 0. Then define your desired worker node configuration
using the `k8s_nodes` variable.
using the `k8s_nodes` variable. The `az`, `flavor` and `floating_ip` parameters are mandatory.
The optional parameter `extra_groups` (a comma-delimited string) can be used to define extra inventory group memberships for specific nodes.
For example:
@ -306,6 +316,7 @@ k8s_nodes = {
"az" = "sto3"
"flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
"floating_ip" = true
"extra_groups" = "calico_rr"
}
}
```
@ -407,18 +418,39 @@ plugins. This is accomplished as follows:
```ShellSession
cd inventory/$CLUSTER
terraform init ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" init
```
This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
### Customizing with cloud-init
You can apply cloud-init based customization for the openstack instances before provisioning your cluster.
One common template is used for all instances. Adjust the file shown below:
`contrib/terraform/openstack/modules/compute/templates/cloudinit.yaml`
For example, to enable openstack novnc access and ansible_user=root SSH access:
```yaml
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
ssh_pwauth: yes
chpasswd:
list: |
root:secret
expire: False
## in some cases direct root ssh access via ssh key is required
disable_root: false
```
### Provisioning cluster
You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):
```ShellSession
terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" apply -var-file=cluster.tfvars
```
If you chose to create a bastion host, this script will create
@ -433,7 +465,7 @@ pick it up automatically.
You can destroy your new cluster with the following command issued from the cluster's inventory directory:
```ShellSession
terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack
terraform -chdir="../../contrib/terraform/openstack" destroy -var-file=cluster.tfvars
```
If you've started the Ansible run, it may also be a good idea to do some manual cleanup:


@ -1,14 +1,15 @@
module "network" {
source = "./modules/network"
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
router_id = var.router_id
external_net = var.external_net
network_name = var.network_name
subnet_cidr = var.subnet_cidr
cluster_name = var.cluster_name
dns_nameservers = var.dns_nameservers
network_dns_domain = var.network_dns_domain
use_neutron = var.use_neutron
port_security_enabled = var.port_security_enabled
router_id = var.router_id
}
module "ips" {
@ -23,7 +24,9 @@ module "ips" {
network_name = var.network_name
router_id = module.network.router_id
k8s_nodes = var.k8s_nodes
k8s_masters = var.k8s_masters
k8s_master_fips = var.k8s_master_fips
bastion_fips = var.bastion_fips
router_internal_port_id = module.network.router_internal_port_id
}
@ -42,6 +45,7 @@ module "compute" {
number_of_bastions = var.number_of_bastions
number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
k8s_masters = var.k8s_masters
k8s_nodes = var.k8s_nodes
bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
etcd_root_volume_size_in_gb = var.etcd_root_volume_size_in_gb
@ -50,6 +54,7 @@ module "compute" {
gfs_root_volume_size_in_gb = var.gfs_root_volume_size_in_gb
gfs_volume_size_in_gb = var.gfs_volume_size_in_gb
master_volume_type = var.master_volume_type
node_volume_type = var.node_volume_type
public_key_path = var.public_key_path
image = var.image
image_uuid = var.image_uuid
@ -67,6 +72,7 @@ module "compute" {
flavor_bastion = var.flavor_bastion
k8s_master_fips = module.ips.k8s_master_fips
k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
k8s_masters_fips = module.ips.k8s_masters_fips
k8s_node_fips = module.ips.k8s_node_fips
k8s_nodes_fips = module.ips.k8s_nodes_fips
bastion_fips = module.ips.bastion_fips
@ -78,14 +84,24 @@ module "compute" {
supplementary_node_groups = var.supplementary_node_groups
master_allowed_ports = var.master_allowed_ports
worker_allowed_ports = var.worker_allowed_ports
wait_for_floatingip = var.wait_for_floatingip
bastion_allowed_ports = var.bastion_allowed_ports
use_access_ip = var.use_access_ip
use_server_groups = var.use_server_groups
master_server_group_policy = var.master_server_group_policy
node_server_group_policy = var.node_server_group_policy
etcd_server_group_policy = var.etcd_server_group_policy
extra_sec_groups = var.extra_sec_groups
extra_sec_groups_name = var.extra_sec_groups_name
group_vars_path = var.group_vars_path
port_security_enabled = var.port_security_enabled
force_null_port_security = var.force_null_port_security
network_router_id = module.network.router_id
network_id = module.network.network_id
use_existing_network = var.use_existing_network
private_subnet_id = module.network.subnet_id
network_id = module.network.router_id
depends_on = [
module.network.subnet_id
]
}
output "private_subnet_id" {
@ -101,7 +117,7 @@ output "router_id" {
}
output "k8s_master_fips" {
value = concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips)
value = var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd > 0 ? concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips) : [for key, value in module.ips.k8s_masters_fips : value.address]
}
output "k8s_node_fips" {


@ -15,6 +15,18 @@ data "openstack_images_image_v2" "image_master" {
name = var.image_master == "" ? var.image : var.image_master
}
data "cloudinit_config" "cloudinit" {
part {
content_type = "text/cloud-config"
content = file("${path.module}/templates/cloudinit.yaml")
}
}
data "openstack_networking_network_v2" "k8s_network" {
count = var.use_existing_network ? 1 : 0
name = var.network_name
}
resource "openstack_compute_keypair_v2" "k8s" {
name = "kubernetes-${var.cluster_name}"
public_key = chomp(file(var.public_key_path))
@ -73,6 +85,17 @@ resource "openstack_networking_secgroup_rule_v2" "bastion" {
security_group_id = openstack_networking_secgroup_v2.bastion[0].id
}
resource "openstack_networking_secgroup_rule_v2" "k8s_bastion_ports" {
count = length(var.bastion_allowed_ports)
direction = "ingress"
ethertype = "IPv4"
protocol = lookup(var.bastion_allowed_ports[count.index], "protocol", "tcp")
port_range_min = lookup(var.bastion_allowed_ports[count.index], "port_range_min")
port_range_max = lookup(var.bastion_allowed_ports[count.index], "port_range_max")
remote_ip_prefix = lookup(var.bastion_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
security_group_id = openstack_networking_secgroup_v2.bastion[0].id
}
resource "openstack_networking_secgroup_v2" "k8s" {
name = "${var.cluster_name}-k8s"
description = "${var.cluster_name} - Kubernetes"
@ -130,36 +153,45 @@ resource "openstack_networking_secgroup_rule_v2" "worker" {
}
resource "openstack_compute_servergroup_v2" "k8s_master" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
count = var.master_server_group_policy != "" ? 1 : 0
name = "k8s-master-srvgrp"
policies = ["anti-affinity"]
policies = [var.master_server_group_policy]
}
resource "openstack_compute_servergroup_v2" "k8s_node" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
count = var.node_server_group_policy != "" ? 1 : 0
name = "k8s-node-srvgrp"
policies = ["anti-affinity"]
policies = [var.node_server_group_policy]
}
resource "openstack_compute_servergroup_v2" "k8s_etcd" {
count = "%{if var.use_server_groups}1%{else}0%{endif}"
count = var.etcd_server_group_policy != "" ? 1 : 0
name = "k8s-etcd-srvgrp"
policies = ["anti-affinity"]
policies = [var.etcd_server_group_policy]
}
locals {
# master groups
master_sec_groups = compact([
openstack_networking_secgroup_v2.k8s_master.name,
openstack_networking_secgroup_v2.k8s.name,
var.extra_sec_groups ?openstack_networking_secgroup_v2.k8s_master_extra[0].name : "",
openstack_networking_secgroup_v2.k8s_master.id,
openstack_networking_secgroup_v2.k8s.id,
var.extra_sec_groups ? openstack_networking_secgroup_v2.k8s_master_extra[0].id : "",
])
# worker groups
worker_sec_groups = compact([
openstack_networking_secgroup_v2.k8s.name,
openstack_networking_secgroup_v2.worker.name,
var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].name : "",
openstack_networking_secgroup_v2.k8s.id,
openstack_networking_secgroup_v2.worker.id,
var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].id : "",
])
# bastion groups
bastion_sec_groups = compact(concat([
openstack_networking_secgroup_v2.k8s.id,
openstack_networking_secgroup_v2.bastion[0].id,
]))
# etcd groups
etcd_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
# glusterfs groups
gfs_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
# Image uuid
image_to_use_node = var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.vm_image[0].id
@ -169,12 +201,30 @@ locals {
image_to_use_master = var.image_master_uuid != "" ? var.image_master_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.image_master[0].id
}
resource "openstack_networking_port_v2" "bastion_port" {
count = var.number_of_bastions
name = "${var.cluster_name}-bastion-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.bastion_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "bastion" {
name = "${var.cluster_name}-bastion-${count.index + 1}"
count = var.number_of_bastions
image_id = var.bastion_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_bastion
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.bastion_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@ -189,25 +239,38 @@ resource "openstack_compute_instance_v2" "bastion" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name,
element(openstack_networking_secgroup_v2.bastion.*.name, count.index),
]
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "bastion"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${var.bastion_fips[0]}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${var.bastion_fips[0]}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_master_port" {
count = var.number_of_k8s_masters
name = "${var.cluster_name}-k8s-master-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master" {
name = "${var.cluster_name}-k8s-master-${count.index + 1}"
count = var.number_of_k8s_masters
@@ -215,6 +278,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
@@ -231,13 +295,11 @@ resource "openstack_compute_instance_v2" "k8s_master" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
@@ -246,15 +308,93 @@ resource "openstack_compute_instance_v2" "k8s_master" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_masters_port" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
name = "${var.cluster_name}-k8s-${each.key}"
availability_zone = each.value.az
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
content {
uuid = local.image_to_use_master
source_type = "image"
volume_size = var.master_root_volume_size_in_gb
volume_type = var.master_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
}
}
network {
port = openstack_networking_port_v2.k8s_masters_port[each.key].id
}
dynamic "scheduler_hints" {
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
}
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "%{if each.value.etcd == true}etcd,%{endif}kube_control_plane,${var.supplementary_master_groups},k8s_cluster%{if each.value.floating_ip == false},no_floating%{endif}"
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_masters_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
}
}
resource "openstack_networking_port_v2" "k8s_master_no_etcd_port" {
count = var.number_of_k8s_masters_no_etcd
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
count = var.number_of_k8s_masters_no_etcd
@@ -262,6 +402,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
@@ -278,13 +419,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
@@ -293,15 +432,32 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "etcd_port" {
count = var.number_of_etcd
name = "${var.cluster_name}-etcd-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.etcd_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "etcd" {
name = "${var.cluster_name}-etcd-${count.index + 1}"
count = var.number_of_etcd
@@ -309,6 +465,7 @@ resource "openstack_compute_instance_v2" "etcd" {
image_id = var.etcd_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_etcd
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.etcd_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -323,13 +480,11 @@ resource "openstack_compute_instance_v2" "etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.etcd_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
for_each = var.etcd_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_etcd[0].id
}
@@ -338,11 +493,28 @@ resource "openstack_compute_instance_v2" "etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_port" {
count = var.number_of_k8s_masters_no_floating_ip
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
name = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip
@@ -365,13 +537,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
@@ -380,11 +550,28 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_no_etcd_port" {
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.master_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
name = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
count = var.number_of_k8s_masters_no_floating_ip_no_etcd
@@ -392,6 +579,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
image_id = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
flavor_id = var.flavor_k8s_master
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
@@ -407,13 +595,11 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_no_etcd_port.*.id, count.index)
}
security_groups = local.master_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_master[0].id
}
@@ -422,11 +608,28 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_node_port" {
count = var.number_of_k8s_nodes
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_node" {
name = "${var.cluster_name}-k8s-node-${count.index + 1}"
count = var.number_of_k8s_nodes
@@ -434,6 +637,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -441,6 +645,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
uuid = local.image_to_use_node
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
volume_type = var.node_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
@@ -448,13 +653,12 @@ resource "openstack_compute_instance_v2" "k8s_node" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
@@ -463,15 +667,32 @@ resource "openstack_compute_instance_v2" "k8s_node" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,${var.supplementary_node_groups}"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ > ${var.group_vars_path}/no_floating.yml"
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
}
}
resource "openstack_networking_port_v2" "k8s_node_no_floating_ip_port" {
count = var.number_of_k8s_nodes_no_floating_ip
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
name = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
count = var.number_of_k8s_nodes_no_floating_ip
@@ -479,6 +700,7 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = var.flavor_k8s_node
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -486,6 +708,7 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
uuid = local.image_to_use_node
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
volume_type = var.node_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
@@ -493,13 +716,11 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.k8s_node_no_floating_ip_port.*.id, count.index)
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
@@ -508,11 +729,28 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,no_floating,${var.supplementary_node_groups}"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_networking_port_v2" "k8s_nodes_port" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.worker_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
name = "${var.cluster_name}-k8s-node-${each.key}"
@@ -520,6 +758,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
image_id = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
flavor_id = each.value.flavor
key_pair = openstack_compute_keypair_v2.k8s.name
user_data = data.cloudinit_config.cloudinit.rendered
dynamic "block_device" {
for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
@@ -527,6 +766,7 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
uuid = local.image_to_use_node
source_type = "image"
volume_size = var.node_root_volume_size_in_gb
volume_type = var.node_volume_type
boot_index = 0
destination_type = "volume"
delete_on_termination = true
@@ -534,13 +774,11 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
}
network {
name = var.network_name
port = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
security_groups = local.worker_sec_groups
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
@@ -548,16 +786,33 @@ resource "openstack_compute_instance_v2" "k8s_nodes" {
metadata = {
ssh_user = var.ssh_user
kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}"
depends_on = var.network_id
kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups},${try(each.value.extra_groups, "")}"
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
provisioner "local-exec" {
command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.root}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
command = "%{if each.value.floating_ip}sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
}
}
resource "openstack_networking_port_v2" "glusterfs_node_no_floating_ip_port" {
count = var.number_of_gfs_nodes_no_floating_ip
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
admin_state_up = "true"
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
security_group_ids = var.port_security_enabled ? local.gfs_sec_groups : null
no_security_groups = var.port_security_enabled ? null : false
fixed_ip {
subnet_id = var.private_subnet_id
}
depends_on = [
var.network_router_id
]
}
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
name = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
count = var.number_of_gfs_nodes_no_floating_ip
@@ -579,13 +834,11 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
}
network {
name = var.network_name
port = element(openstack_networking_port_v2.glusterfs_node_no_floating_ip_port.*.id, count.index)
}
security_groups = [openstack_networking_secgroup_v2.k8s.name]
dynamic "scheduler_hints" {
for_each = var.use_server_groups ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
content {
group = openstack_compute_servergroup_v2.k8s_node[0].id
}
@@ -594,44 +847,46 @@ resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
metadata = {
ssh_user = var.ssh_user_gfs
kubespray_groups = "gfs-cluster,network-storage,no_floating"
depends_on = var.network_id
depends_on = var.network_router_id
use_access_ip = var.use_access_ip
}
}
resource "openstack_compute_floatingip_associate_v2" "bastion" {
resource "openstack_networking_floatingip_associate_v2" "bastion" {
count = var.number_of_bastions
floating_ip = var.bastion_fips[count.index]
instance_id = element(openstack_compute_instance_v2.bastion.*.id, count.index)
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
resource "openstack_networking_floatingip_associate_v2" "k8s_master" {
count = var.number_of_k8s_masters
instance_id = element(openstack_compute_instance_v2.k8s_master.*.id, count.index)
floating_ip = var.k8s_master_fips[count.index]
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_master_no_etcd" {
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
instance_id = element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)
floating_ip = var.k8s_master_no_etcd_fips[count.index]
resource "openstack_networking_floatingip_associate_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
floating_ip = var.k8s_masters_fips[each.key].address
port_id = openstack_networking_port_v2.k8s_masters_port[each.key].id
}
resource "openstack_compute_floatingip_associate_v2" "k8s_node" {
resource "openstack_networking_floatingip_associate_v2" "k8s_master_no_etcd" {
count = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
floating_ip = var.k8s_master_no_etcd_fips[count.index]
port_id = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}
resource "openstack_networking_floatingip_associate_v2" "k8s_node" {
count = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0
floating_ip = var.k8s_node_fips[count.index]
instance_id = element(openstack_compute_instance_v2.k8s_node[*].id, count.index)
wait_until_associated = var.wait_for_floatingip
port_id = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
}
resource "openstack_compute_floatingip_associate_v2" "k8s_nodes" {
resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
for_each = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
floating_ip = var.k8s_nodes_fips[each.key].address
instance_id = openstack_compute_instance_v2.k8s_nodes[each.key].id
wait_until_associated = var.wait_for_floatingip
port_id = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}
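For orientation, the resources above all follow the same migration: floating IPs are now bound to Neutron ports rather than to compute instances. Below is a minimal sketch of that pattern using the same module variables; the `example` resource names are hypothetical and not part of this diff.

```hcl
# Create a port on the cluster network (module variables as in this diff).
resource "openstack_networking_port_v2" "example" {
  name       = "example-port"
  network_id = var.network_id
  fixed_ip {
    subnet_id = var.private_subnet_id
  }
}

# Attach the instance to the port instead of a named network.
resource "openstack_compute_instance_v2" "example" {
  name      = "example-vm"
  image_id  = var.image_uuid
  flavor_id = var.flavor_bastion
  key_pair  = openstack_compute_keypair_v2.k8s.name
  network {
    port = openstack_networking_port_v2.example.id
  }
}

# Bind the floating IP to the port, not the instance;
# wait_until_associated has no counterpart in the port-based resource.
resource "openstack_networking_floatingip_associate_v2" "example" {
  floating_ip = var.bastion_fips[0]
  port_id     = openstack_networking_port_v2.example.id
}
```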
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {


@@ -0,0 +1,17 @@
# yamllint disable rule:comments
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
#ssh_pwauth: yes
#chpasswd:
# list: |
# root:secret
# expire: False
## in some cases direct root ssh access via ssh key is required
#disable_root: false
## in some cases additional CA certs are required
#ca-certs:
# trusted: |
# -----BEGIN CERTIFICATE-----
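The instances in this diff consume this file through `data.cloudinit_config.cloudinit.rendered` as their `user_data`. A minimal sketch of how such a data source can be wired up follows; the template filename is an assumption, not shown in this diff.

```hcl
# Hedged sketch: render the cloud-config above for use as instance user_data.
data "cloudinit_config" "cloudinit" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    # Hypothetical path; the actual file name is not shown in this diff.
    content = file("${path.module}/templates/cloudinit.yaml")
  }
}
```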


@@ -40,6 +40,8 @@ variable "gfs_volume_size_in_gb" {}
variable "master_volume_type" {}
variable "node_volume_type" {}
variable "public_key_path" {}
variable "image" {}
@@ -66,6 +68,14 @@ variable "network_id" {
default = ""
}
variable "use_existing_network" {
type = bool
}
variable "network_router_id" {
default = ""
}
variable "k8s_master_fips" {
type = list
}
@@ -78,6 +88,10 @@ variable "k8s_node_fips" {
type = list
}
variable "k8s_masters_fips" {
type = map
}
variable "k8s_nodes_fips" {
type = map
}
@@ -102,9 +116,9 @@ variable "k8s_allowed_egress_ips" {
type = list
}
variable "k8s_nodes" {}
variable "k8s_masters" {}
variable "wait_for_floatingip" {}
variable "k8s_nodes" {}
variable "supplementary_master_groups" {
default = ""
@@ -122,10 +136,22 @@ variable "worker_allowed_ports" {
type = list
}
variable "bastion_allowed_ports" {
type = list
}
variable "use_access_ip" {}
variable "use_server_groups" {
type = bool
variable "master_server_group_policy" {
type = string
}
variable "node_server_group_policy" {
type = string
}
variable "etcd_server_group_policy" {
type = string
}
variable "extra_sec_groups" {
@@ -155,3 +181,15 @@ variable "image_master_uuid" {
variable "group_vars_path" {
type = string
}
variable "port_security_enabled" {
type = bool
}
variable "force_null_port_security" {
type = bool
}
variable "private_subnet_id" {
type = string
}


@@ -14,6 +14,12 @@ resource "openstack_networking_floatingip_v2" "k8s_master" {
depends_on = [null_resource.dummy_dependency]
}
resource "openstack_networking_floatingip_v2" "k8s_masters" {
for_each = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}
# If user specifies pre-existing IPs to use in k8s_master_fips, do not create new ones.
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
count = length(var.k8s_master_fips) > 0 ? 0 : var.number_of_k8s_masters_no_etcd
@@ -28,7 +34,7 @@ resource "openstack_networking_floatingip_v2" "k8s_node" {
}
resource "openstack_networking_floatingip_v2" "bastion" {
count = var.number_of_bastions
count = length(var.bastion_fips) > 0 ? 0 : var.number_of_bastions
pool = var.floatingip_pool
depends_on = [null_resource.dummy_dependency]
}


@@ -3,6 +3,10 @@ output "k8s_master_fips" {
value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master[*].address
}
output "k8s_masters_fips" {
value = openstack_networking_floatingip_v2.k8s_masters
}
# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
output "k8s_master_no_etcd_fips" {
value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address
@@ -17,5 +21,5 @@ output "k8s_nodes_fips" {
}
output "bastion_fips" {
value = openstack_networking_floatingip_v2.bastion[*].address
value = length(var.bastion_fips) > 0 ? var.bastion_fips : openstack_networking_floatingip_v2.bastion[*].address
}


@@ -16,8 +16,12 @@ variable "router_id" {
default = ""
}
variable "k8s_masters" {}
variable "k8s_nodes" {}
variable "k8s_master_fips" {}
variable "bastion_fips" {}
variable "router_internal_port_id" {}


@@ -11,10 +11,11 @@ data "openstack_networking_router_v2" "k8s" {
}
resource "openstack_networking_network_v2" "k8s" {
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
name = var.network_name
count = var.use_neutron
dns_domain = var.network_dns_domain != null ? var.network_dns_domain : null
admin_state_up = "true"
port_security_enabled = var.port_security_enabled
}
resource "openstack_networking_subnet_v2" "k8s" {


@@ -2,6 +2,10 @@ output "router_id" {
value = "%{if var.use_neutron == 1} ${var.router_id == null ? element(concat(openstack_networking_router_v2.k8s.*.id, [""]), 0) : var.router_id} %{else} %{endif}"
}
output "network_id" {
value = element(concat(openstack_networking_network_v2.k8s.*.id, [""]),0)
}
output "router_internal_port_id" {
value = element(concat(openstack_networking_router_interface_v2.k8s.*.id, [""]), 0)
}


@@ -10,6 +10,10 @@ variable "dns_nameservers" {
type = list
}
variable "port_security_enabled" {
type = bool
}
variable "subnet_cidr" {}
variable "use_neutron" {}


@@ -32,6 +32,28 @@ number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "<UUID>"
k8s_masters = {
# "master-1" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = true
# "etcd" = true
# },
# "master-2" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = false
# "etcd" = true
# },
# "master-3" = {
# "az" = "nova"
# "flavor" = "<UUID>"
# "floating_ip" = true
# "etcd" = true
# },
}
# nodes
number_of_k8s_nodes = 2
@@ -52,6 +74,9 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
# Use an existing network with the name of network_name. Set to false to create a network with the name of network_name.
# use_existing_network = true
external_net = "<UUID>"
subnet_cidr = "<cidr>"
@@ -59,3 +84,6 @@ subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]
# Force port security to be null. Some cloud providers do not allow setting port security.
# force_null_port_security = false
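Uncommented, the two opt-in settings documented above would look like this (illustrative values, not defaults):

```hcl
use_existing_network     = true
force_null_port_security = true
```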


@@ -78,6 +78,10 @@ variable "master_volume_type" {
default = "Default"
}
variable "node_volume_type" {
default = "Default"
}
variable "public_key_path" {
description = "The path of the ssh pub key"
default = "~/.ssh/id_rsa.pub"
@@ -133,6 +137,12 @@ variable "network_name" {
default = "internal"
}
variable "use_existing_network" {
description = "Use an existing network"
type = bool
default = "false"
}
variable "network_dns_domain" {
description = "dns_domain for the internal network"
type = string
@@ -144,6 +154,18 @@ variable "use_neutron" {
default = 1
}
variable "port_security_enabled" {
description = "Enable port security on the internal network"
type = bool
default = "true"
}
variable "force_null_port_security" {
description = "Force port security to be null. Some providers does not allow setting port security"
type = bool
default = "false"
}
variable "subnet_cidr" {
description = "Subnet CIDR block."
type = string
@@ -162,6 +184,12 @@ variable "k8s_master_fips" {
default = []
}
variable "bastion_fips" {
description = "specific pre-existing floating IPs to use for bastion node"
type = list(string)
default = []
}
variable "floatingip_pool" {
description = "name of the floating ip pool to use"
default = "external"
@@ -229,12 +257,29 @@ variable "worker_allowed_ports" {
]
}
variable "bastion_allowed_ports" {
type = list(any)
default = []
}
variable "use_access_ip" {
default = 1
}
variable "use_server_groups" {
default = false
variable "master_server_group_policy" {
description = "desired server group policy, e.g. anti-affinity"
default = ""
}
variable "node_server_group_policy" {
description = "desired server group policy, e.g. anti-affinity"
default = ""
}
variable "etcd_server_group_policy" {
description = "desired server group policy, e.g. anti-affinity"
default = ""
}
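These three string variables replace the old `use_server_groups` boolean: a non-empty value enables the corresponding `scheduler_hints` block. An illustrative assignment, with `anti-affinity` taken from the descriptions above:

```hcl
master_server_group_policy = "anti-affinity"
node_server_group_policy   = "anti-affinity"
etcd_server_group_policy   = ""
```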
variable "router_id" {
@@ -247,6 +292,10 @@ variable "router_internal_port_id" {
default = null
}
variable "k8s_masters" {
default = {}
}
variable "k8s_nodes" {
default = {}
}


@@ -1,16 +0,0 @@
output "k8s_masters" {
value = packet_device.k8s_master.*.access_public_ipv4
}
output "k8s_masters_no_etc" {
value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
}
output "k8s_etcds" {
value = packet_device.k8s_etcd.*.access_public_ipv4
}
output "k8s_nodes" {
value = packet_device.k8s_node.*.access_public_ipv4
}


@@ -114,10 +114,10 @@ def iterhosts(resources):
def iterips(resources):
'''yield ip tuples of (instance_id, ip)'''
'''yield ip tuples of (port_id, ip)'''
for module_name, key, resource in resources:
resource_type, name = key.split('.', 1)
if resource_type == 'openstack_compute_floatingip_associate_v2':
if resource_type == 'openstack_networking_floatingip_associate_v2':
yield openstack_floating_ips(resource)
@@ -195,8 +195,8 @@ def parse_bool(string_form):
raise ValueError('could not convert %r to a bool' % string_form)
@parses('packet_device')
def packet_device(resource, tfvars=None):
@parses('metal_device')
def metal_device(resource, tfvars=None):
raw_attrs = resource['primary']['attributes']
name = raw_attrs['hostname']
groups = []
@@ -212,15 +212,15 @@ def packet_device(resource, tfvars=None):
'project_id': raw_attrs['project_id'],
'state': raw_attrs['state'],
# ansible
'ansible_ssh_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # Use root by default in packet
'ansible_host': raw_attrs['network.0.address'],
'ansible_ssh_user': 'root', # Use root by default in metal
# generic
'ipv4_address': raw_attrs['network.0.address'],
'public_ipv4': raw_attrs['network.0.address'],
'ipv6_address': raw_attrs['network.1.address'],
'public_ipv6': raw_attrs['network.1.address'],
'private_ipv4': raw_attrs['network.2.address'],
'provider': 'packet',
'provider': 'metal',
}
if raw_attrs['operating_system'] == 'flatcar_stable':
@@ -228,10 +228,10 @@ def packet_device(resource, tfvars=None):
attrs.update({'ansible_ssh_user': 'core'})
# add groups based on attrs
groups.append('packet_operating_system=' + attrs['operating_system'])
groups.append('packet_locked=%s' % attrs['locked'])
groups.append('packet_state=' + attrs['state'])
groups.append('packet_plan=' + attrs['plan'])
groups.append('metal_operating_system=' + attrs['operating_system'])
groups.append('metal_locked=%s' % attrs['locked'])
groups.append('metal_state=' + attrs['state'])
groups.append('metal_plan=' + attrs['plan'])
# groups specific to kubespray
groups = groups + attrs['tags']
@@ -243,13 +243,13 @@ def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
attrs = {
'ip': raw_attrs['floating_ip'],
'instance_id': raw_attrs['instance_id'],
'port_id': raw_attrs['port_id'],
}
return attrs
def openstack_floating_ips(resource):
raw_attrs = resource['primary']['attributes']
return raw_attrs['instance_id'], raw_attrs['floating_ip']
return raw_attrs['port_id'], raw_attrs['floating_ip']
@parses('openstack_compute_instance_v2')
@calculate_mantl_vars
@@ -282,6 +282,7 @@ def openstack_host(resource, module_name):
# generic
'public_ipv4': raw_attrs['access_ip_v4'],
'private_ipv4': raw_attrs['access_ip_v4'],
'port_id' : raw_attrs['network.0.port'],
'provider': 'openstack',
}
@@ -291,16 +292,16 @@ def openstack_host(resource, module_name):
try:
if 'metadata.prefer_ipv6' in raw_attrs and raw_attrs['metadata.prefer_ipv6'] == "1":
attrs.update({
'ansible_ssh_host': re.sub("[\[\]]", "", raw_attrs['access_ip_v6']),
'ansible_host': re.sub("[\[\]]", "", raw_attrs['access_ip_v6']),
'publicly_routable': True,
})
else:
attrs.update({
'ansible_ssh_host': raw_attrs['access_ip_v4'],
'ansible_host': raw_attrs['access_ip_v4'],
'publicly_routable': True,
})
except (KeyError, ValueError):
attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})
attrs.update({'ansible_host': '', 'publicly_routable': False})
# Handling of floating IPs has changed: https://github.com/terraform-providers/terraform-provider-openstack/blob/master/CHANGELOG.md#010-june-21-2017
@@ -339,16 +340,16 @@ def openstack_host(resource, module_name):
def iter_host_ips(hosts, ips):
'''Update hosts that have an entry in the floating IP list'''
for host in hosts:
host_id = host[1]['id']
port_id = host[1]['port_id']
if host_id in ips:
ip = ips[host_id]
if port_id in ips:
ip = ips[port_id]
host[1].update({
'access_ip_v4': ip,
'access_ip': ip,
'public_ipv4': ip,
'ansible_ssh_host': ip,
'ansible_host': ip,
})
if 'use_access_ip' in host[1]['metadata'] and host[1]['metadata']['use_access_ip'] == "0":
@@ -388,7 +389,7 @@ def query_list(hosts):
def query_hostfile(hosts):
out = ['## begin hosts generated by terraform.py ##']
out.extend(
'{}\t{}'.format(attrs['ansible_ssh_host'].ljust(16), name)
'{}\t{}'.format(attrs['ansible_host'].ljust(16), name)
for name, attrs, _ in hosts
)


@@ -104,9 +104,36 @@ terraform destroy --var-file cluster-settings.tfvars \
* `zone`: The zone in which to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
* `node_type`: The role of this node *(master|worker)*
* `plan`: Preconfigured cpu/mem plan to use (disables `cpu` and `mem` attributes below)
* `cpu`: number of cpu cores
* `mem`: memory size in MB
* `disk_size`: The size of the storage in GB
* `additional_disks`: Additional disks to attach to the node.
* `size`: The size of the additional disk in GB
* `tier`: The tier of disk to use (`maxiops` is currently the only choice)
* `firewall_enabled`: Enable firewall rules
* `firewall_default_deny_in`: Set the firewall to deny inbound traffic by default. Automatically adds UpCloud DNS server and NTP port allowlisting.
* `firewall_default_deny_out`: Set the firewall to deny outbound traffic by default.
* `master_allowed_remote_ips`: List of IP ranges that should be allowed to access the API of the masters
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `k8s_allowed_remote_ips`: List of IP ranges that should be allowed SSH access to all nodes
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `master_allowed_ports`: List of port ranges that should be allowed to access the masters (see the sketch after this list)
* `protocol`: Protocol *(tcp|udp|icmp)*
* `port_range_min`: Start of port range to allow
* `port_range_max`: End of port range to allow
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `worker_allowed_ports`: List of port ranges that should be allowed to access the workers
* `protocol`: Protocol *(tcp|udp|icmp)*
* `port_range_min`: Start of port range to allow
* `port_range_max`: End of port range to allow
* `start_address`: Start of address range to allow
* `end_address`: End of address range to allow
* `loadbalancer_enabled`: Enable managed load balancer
* `loadbalancer_plan`: Plan to use for load balancer *(development|production-small)*
* `loadbalancers`: Ports to load balance and which machines to forward to. Key of this object will be used as the name of the load balancer frontends/backends
* `port`: Port to load balance.
* `backend_servers`: List of servers that traffic to the port should be forwarded to.
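A hypothetical `master_allowed_ports` entry in the format described above, opening the Kubernetes API port (6443) to one address range; all values are examples only:

```hcl
master_allowed_ports = [
  {
    "protocol" : "tcp"
    "port_range_min" : 6443
    "port_range_max" : 6443
    "start_address" : "10.0.0.0"
    "end_address" : "10.0.0.255"
  }
]
```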


@@ -20,6 +20,8 @@ ssh_public_keys = [
machines = {
"master-0" : {
"node_type" : "master",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -30,6 +32,8 @@ machines = {
},
"worker-0" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -49,6 +53,8 @@ machines = {
},
"worker-1" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -68,6 +74,8 @@ machines = {
},
"worker-2" : {
"node_type" : "worker",
# plan to use instead of custom cpu/mem
"plan" : null,
#number of cpu cores
"cpu" : "2",
#memory size in MB
@@ -86,3 +94,37 @@ machines = {
}
}
}
firewall_enabled = false
firewall_default_deny_in = false
firewall_default_deny_out = false
master_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
k8s_allowed_remote_ips = [
{
"start_address" : "0.0.0.0"
"end_address" : "255.255.255.255"
}
]
master_allowed_ports = []
worker_allowed_ports = []
loadbalancer_enabled = false
loadbalancer_plan = "development"
loadbalancers = {
# "http" : {
# "port" : 80,
# "backend_servers" : [
# "worker-0",
# "worker-1",
# "worker-2"
# ]
# }
}

Some files were not shown because too many files have changed in this diff.