by setting a default runtime spec with a patch for RLIMIT_NOFILE.
- Introduces containerd_base_runtime_spec_rlimit_nofile.
- Generates base_runtime_spec on the fly, so that it matches the containerd
version installed on the node.
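As an illustration, the new variable can be set in the inventory; the value below is only an example, not necessarily the project default:
```
# group_vars/all/containerd.yml (illustrative value)
containerd_base_runtime_spec_rlimit_nofile: 65535
```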
For the following configuration
```
containerd_insecure_registries:
docker.io:
- dockerhubcache.example.com
```
the rendered /etc/containerd/config.toml contains
```
[plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".tls]
insecure_skip_verify = true
```
but it needs to be
```
[plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhubcache.example.com".tls]
insecure_skip_verify = true
```
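For context, a sketch of the corrected section together with the mirror entry it belongs to (the endpoint URL is the example host from above):
```
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://dockerhubcache.example.com"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhubcache.example.com".tls]
  insecure_skip_verify = true
```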
* Update main.yaml
* remove version in dpkg_selection name
* make lint happy
* Fix typo
* add comment / remove useless condition
* remove dpkg hold in reset tasks
* feat: make kubernetes owner parametrized
* docs: update hardening guide with configuration for CIS 1.1.19
* fix: set etcd data directory permissions to be compliant with CIS 1.1.12 (see the sketch after this list)
* extra admission controls no longer have a version in their file names:
eventratelimit.v1beta2.yaml.j2 -> eventratelimit.yaml.j2 (a rendered example
follows this list)
* cri_socket variable includes the unix:// prefix to be conformant with
upstream (illustrated below)
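A hedged sketch of the CIS 1.1.12 permission change, assuming an Ansible task that tightens the mode on the etcd data directory (the task name, variable, and default path are illustrative, not necessarily what the role uses):
```
- name: Ensure etcd data directory has restrictive permissions (CIS 1.1.12)
  ansible.builtin.file:
    path: "{{ etcd_data_dir | default('/var/lib/etcd') }}"
    state: directory
    mode: "0700"
```
The renamed extra admission control templates render standard admission configurations; for EventRateLimit, a minimal document looks roughly like this (the qps/burst values are placeholders, not the shipped defaults):
```
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Server
    qps: 5000
    burst: 20000
```
With the unix:// prefix, the cri_socket value for containerd would look like the following (assuming the default containerd socket path):
```
cri_socket: unix:///var/run/containerd/containerd.sock
```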
When running molecule jobs, we saw the following warning message:
[DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names
to new standard, use callbacks_enabled instead. This feature will be removed
from ansible-core in version 2.15. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
callbacks_enabled has been available since Ansible 2.11, and Kubespray uses
Ansible 2.12 on the master branch, so callbacks_enabled can be used safely to
avoid the warning message.
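The corresponding ansible.cfg change is a simple rename of the option; profile_tasks is only an example callback name, not necessarily what Kubespray enables:
```
[defaults]
# callback_whitelist = profile_tasks   <- deprecated, removed in ansible-core 2.15
callbacks_enabled = profile_tasks
```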
* Force containerd service unmasking
Force systemd to unmask and start the service when adding the containerd service
* Eliminate restart and move unmasking step
Switch to start instead of restart
Move unmasking to restart handler
* Add unmasking to similar container runtimes
* Add missing service names
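A hedged sketch of the unmask-and-start step described above, using the systemd module (task and service names are illustrative):
```
- name: Containerd | unmask and start the containerd service
  ansible.builtin.systemd:
    name: containerd
    masked: false
    enabled: true
    state: started
    daemon_reload: true
```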
When running cluster.yml against new machines where containerd is already
installed but a Kubernetes cluster has not been installed before, the task
"remove-node | List nodes" fails like:
"changed": false,
"cmd": [
"/usr/local/bin/kubectl", "--kubeconfig",
"/etc/kubernetes/admin.conf", "get", "nodes", "-o",
"go-template={{ range .items }}{{ .metadata.name }}
{{ "\n" }}{{ end }}"
],
..
"stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory",
That was due to a missing check for whether an existing Kubernetes cluster is
present before running the "kubectl drain" command.
This adds the check to avoid the issue.
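A minimal sketch of such a check, assuming a stat on the admin kubeconfig that guards the node-listing and drain tasks (task names and the kube_config_dir/bin_dir defaults are illustrative):
```
- name: remove-node | Check if a cluster kubeconfig already exists
  ansible.builtin.stat:
    path: "{{ kube_config_dir | default('/etc/kubernetes') }}/admin.conf"
  register: admin_conf_stat

- name: remove-node | List nodes
  command: >-
    {{ bin_dir | default('/usr/local/bin') }}/kubectl
    --kubeconfig {{ kube_config_dir | default('/etc/kubernetes') }}/admin.conf
    get nodes
  changed_when: false
  when: admin_conf_stat.stat.exists
```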