[calico] don't enable ipip encapsulation by default and use vxlan in CI (#8434)

* [calico] make vxlan encapsulation the default

* don't enable ipip encapsulation by default
* set calico_network_backend by default to vxlan
* update sample inventory and documentation

* [CI] pin default calico parameters for upgrade tests to ensure proper upgrade

* [CI] improve netchecker connectivity testing

* [CI] show logs for tests

* [calico] tweak task name

* [CI] Don't run the provisioner from vagrant since we run it in testcases_run.sh

* [CI] move kube-router tests to vagrant to avoid network connectivity issues during netchecker check

* service proxy mode still fails connectivity tests, so keeping it in manual mode

* [kube-router] account for containerd use-case
Cristian Calin 2022-03-18 03:05:39 +02:00, committed by GitHub
parent a86d9bd8e8
commit dd2d95ecdf
26 changed files with 229 additions and 82 deletions


@@ -100,16 +100,6 @@ packet_ubuntu16-flannel-ha:
   extends: .packet_pr
   when: manual

-packet_ubuntu16-kube-router-sep:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
-packet_ubuntu16-kube-router-svc-proxy:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
 packet_debian10-cilium-svc-proxy:
   stage: deploy-part2
   extends: .packet_periodic

@@ -165,11 +155,6 @@ packet_fedora34-docker-weave:
   extends: .packet_pr
   when: on_success

-packet_fedora35-kube-router:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: on_success
-
 packet_opensuse-canal:
   stage: deploy-part2
   extends: .packet_periodic

@@ -218,11 +203,6 @@ packet_centos7-calico-ha:
   extends: .packet_pr
   when: manual

-packet_centos7-kube-router:
-  stage: deploy-part2
-  extends: .packet_pr
-  when: manual
-
 packet_centos7-multus-calico:
   stage: deploy-part2
   extends: .packet_pr


@@ -66,3 +66,24 @@ vagrant_ubuntu20-flannel:
   stage: deploy-part2
   extends: .vagrant
   when: on_success
+
+vagrant_ubuntu16-kube-router-sep:
+  stage: deploy-part2
+  extends: .vagrant
+  when: manual
+
+# Service proxy test fails connectivity testing
+vagrant_ubuntu16-kube-router-svc-proxy:
+  stage: deploy-part2
+  extends: .vagrant
+  when: manual
+
+vagrant_fedora35-kube-router:
+  stage: deploy-part2
+  extends: .vagrant
+  when: on_success
+
+vagrant_centos7-kube-router:
+  stage: deploy-part2
+  extends: .vagrant
+  when: manual

Vagrantfile

@@ -240,6 +240,7 @@ Vagrant.configure("2") do |config|
       }
       # Only execute the Ansible provisioner once, when all the machines are up and ready.
+      # And limit the action to gathering facts, the full playbook is going to be run by testcases_run.sh
       if i == $num_instances
         node.vm.provision "ansible" do |ansible|
           ansible.playbook = $playbook

@@ -252,7 +253,7 @@ Vagrant.configure("2") do |config|
           ansible.host_key_checking = false
           ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
           ansible.host_vars = host_vars
-          #ansible.tags = ['download']
+          ansible.tags = ['facts']
           ansible.groups = {
             "etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
             "kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],


@@ -210,23 +210,42 @@ calico_node_readinessprobe_timeout: 10
 ## Config encapsulation for cross server traffic

-Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is supported in some environments where IP in IP is not (for example, Azure).
+Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is the more mature implementation and enabled by default; please check your environment if you need *IP in IP* encapsulation.

 *IP in IP* and *VXLAN* are mutually exclusive modes.

-Configure IP in IP mode. Possible values are `Always`, `CrossSubnet`, `Never`.
+### IP in IP mode
+
+To configure IP in IP mode you need to use the bird network backend.

 ```yml
-calico_ipip_mode: 'Always'
+calico_ipip_mode: 'Always'  # Possible values are `Always`, `CrossSubnet`, `Never`
 calico_vxlan_mode: 'Never'
+calico_network_backend: 'bird'
 ```

-Configure VXLAN mode. Possible values are `Always`, `CrossSubnet`, `Never`.
+### VXLAN mode (default)
+
+To configure VXLAN mode you can use the default settings; the example below is provided for your reference.
+
+```yml
+calico_ipip_mode: 'Never'
+calico_vxlan_mode: 'Always'  # Possible values are `Always`, `CrossSubnet`, `Never`
+calico_network_backend: 'vxlan'
+```

-If you use VXLAN mode, BGP networking is not required. You can disable BGP to reduce the moving parts in your cluster by `calico_network_backend: vxlan`
+In VXLAN mode BGP networking is not required.
+We disable BGP to reduce the moving parts in your cluster with `calico_network_backend: vxlan`.
+
+### BGP mode
+
+To enable BGP no-encapsulation mode:
+
+```yml
+calico_ipip_mode: 'Never'
+calico_vxlan_mode: 'Never'
+calico_network_backend: 'bird'
+```

 ## Configuring interface MTU


@@ -61,12 +61,12 @@ gcloud compute networks subnets create kubernetes \
 #### Firewall Rules

 Create a firewall rule that allows internal communication across all protocols.
-It is important to note that the ipip protocol has to be allowed in order for
+It is important to note that the vxlan protocol has to be allowed in order for
 the calico (see later) networking plugin to work.

 ```ShellSession
 gcloud compute firewall-rules create kubernetes-the-kubespray-way-allow-internal \
-  --allow tcp,udp,icmp,ipip \
+  --allow tcp,udp,icmp,vxlan \
   --network kubernetes-the-kubespray-way \
   --source-ranges 10.240.0.0/24
 ```


@@ -21,7 +21,9 @@ Some variables of note include:
 * *containerd_version* - Specify version of containerd to use when setting `container_manager` to `containerd`
 * *docker_containerd_version* - Specify which version of containerd to use when setting `container_manager` to `docker`
 * *etcd_version* - Specify version of ETCD to use
-* *ipip* - Enables Calico ipip encapsulation by default
+* *calico_ipip_mode* - Configures Calico ipip encapsulation - valid values are 'Never', 'Always' and 'CrossSubnet' (default 'Never')
+* *calico_vxlan_mode* - Configures Calico vxlan encapsulation - valid values are 'Never', 'Always' and 'CrossSubnet' (default 'Always')
+* *calico_network_backend* - Configures Calico network backend - valid values are 'none', 'bird' and 'vxlan' (default 'vxlan')
 * *kube_network_plugin* - Sets k8s network plugin (default Calico)
 * *kube_proxy_mode* - Changes k8s proxy mode to iptables mode
 * *kube_version* - Specify a given Kubernetes version


@@ -75,15 +75,15 @@
 # typha_max_connections_lower_limit: 300

 # Set calico network backend: "bird", "vxlan" or "none"
-# bird enables BGP routing, required for ipip mode.
-# calico_network_backend: bird
+# bird enables BGP routing, required for ipip and no-encapsulation modes
+# calico_network_backend: vxlan

 # IP in IP and VXLAN are mutually exclusive modes.
 # set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
-# calico_ipip_mode: 'Always'
+# calico_ipip_mode: 'Never'

 # set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
-# calico_vxlan_mode: 'Never'
+# calico_vxlan_mode: 'Always'

 # set VXLAN port and VNI
 # calico_vxlan_vni: 4096


@@ -36,6 +36,24 @@
     - kube_network_plugin is defined
     - not ignore_assert_errors

+- name: Stop if legacy encapsulation variables are detected (ipip)
+  assert:
+    that:
+      - ipip is not defined
+    msg: "'ipip' configuration variable is deprecated, please configure your inventory with 'calico_ipip_mode' set to 'Always' or 'CrossSubnet' according to your specific needs"
+  when:
+    - kube_network_plugin == 'calico'
+    - not ignore_assert_errors
+
+- name: Stop if legacy encapsulation variables are detected (ipip_mode)
+  assert:
+    that:
+      - ipip_mode is not defined
+    msg: "'ipip_mode' configuration variable is deprecated, please configure your inventory with 'calico_ipip_mode' set to 'Always' or 'CrossSubnet' according to your specific needs"
+  when:
+    - kube_network_plugin == 'calico'
+    - not ignore_assert_errors
+
 - name: Stop if incompatible network plugin and cloudprovider
   assert:
     that:
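For inventories that still set the legacy variable, the migration these deprecation checks ask for looks like the following group_vars fragment (a sketch; pick the mode that matches your environment):

```yml
# Before (deprecated, now rejected by the preinstall checks):
#   ipip: true
# After, spelled out explicitly:
calico_ipip_mode: 'Always'      # or 'CrossSubnet' for cross-subnet-only encapsulation
calico_vxlan_mode: 'Never'      # IP in IP and VXLAN are mutually exclusive
calico_network_backend: 'bird'  # ipip requires the bird (BGP) backend
```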


@@ -6,16 +6,17 @@ nat_outgoing: true
 calico_pool_name: "default-pool"
 calico_ipv4pool_ipip: "Off"

-# Use IP-over-IP encapsulation across hosts
-ipip: true
-ipip_mode: "{{ 'Always' if ipip else 'Never' }}"  # change to "CrossSubnet" if you only want ipip encapsulation on traffic going across subnets
-calico_ipip_mode: "{{ ipip_mode }}"
-calico_vxlan_mode: 'Never'
+# Change encapsulation mode, by default we enable vxlan which is the most mature and well tested mode
+calico_ipip_mode: Never  # valid values are 'Always', 'Never' and 'CrossSubnet'
+calico_vxlan_mode: Always  # valid values are 'Always', 'Never' and 'CrossSubnet'

 calico_ipip_mode_ipv6: Never
 calico_vxlan_mode_ipv6: Never
 calico_pool_blocksize_ipv6: 116

+# Calico network backend can be 'bird', 'vxlan' and 'none'
+calico_network_backend: vxlan
+
 calico_cert_dir: /etc/calico/certs

 # Global as_num (/calico/bgp/v1/global/as_num)


@@ -11,8 +11,6 @@
     that:
       - "calico_network_backend in ['bird', 'vxlan', 'none']"
     msg: "calico network backend is not 'bird', 'vxlan' or 'none'"
-  when:
-    - calico_network_backend is defined

 - name: "Check ipip and vxlan mode defined correctly"
   assert:


@@ -194,7 +194,7 @@
     - inventory_hostname == groups['kube_control_plane'][0]
     - 'calico_conf.stdout == "0"'

-- name: Calico | Configure calico ipv6 network pool (version >= v3.3.0)
+- name: Calico | Configure calico ipv6 network pool
   command:
     cmd: "{{ bin_dir }}/calicoctl.sh apply -f -"
     stdin: >


@@ -15,12 +15,12 @@ data:
   # essential.
   typha_service_name: "calico-typha"
 {% endif %}
-{% if calico_network_backend is defined %}
-  cluster_type: "kubespray"
-  calico_backend: "{{ calico_network_backend }}"
-{% else %}
+{% if calico_network_backend == 'bird' %}
   cluster_type: "kubespray,bgp"
   calico_backend: "bird"
+{% else %}
+  cluster_type: "kubespray"
+  calico_backend: "{{ calico_network_backend }}"
 {% endif %}
 {% if inventory_hostname in groups['k8s_cluster'] and peer_with_router|default(false) %}
   as: "{{ local_as|default(global_as_num) }}"
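With the new default `calico_network_backend: vxlan`, the template takes the `else` branch, so the rendered ConfigMap data would contain roughly (sketch):

```yml
cluster_type: "kubespray"
calico_backend: "vxlan"
```

With `calico_network_backend: 'bird'` it renders `cluster_type: "kubespray,bgp"` and `calico_backend: "bird"` instead.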


@@ -176,7 +176,7 @@ spec:
             - name: WAIT_FOR_DATASTORE
               value: "true"
 {% endif %}
-{% if calico_network_backend is defined and calico_network_backend == 'vxlan' %}
+{% if calico_network_backend == 'vxlan' %}
             - name: FELIX_VXLANVNI
               value: "{{ calico_vxlan_vni }}"
             - name: FELIX_VXLANPORT

@@ -319,7 +319,7 @@ spec:
             command:
               - /bin/calico-node
               - -felix-live
-{% if calico_network_backend|default("bird") == "bird" %}
+{% if calico_network_backend == "bird" %}
               - -bird-live
 {% endif %}
           periodSeconds: 10

@@ -330,7 +330,7 @@ spec:
           exec:
             command:
               - /bin/calico-node
-{% if calico_network_backend|default("bird") == "bird" %}
+{% if calico_network_backend == "bird" %}
               - -bird-ready
 {% endif %}
               - -felix-ready


@@ -62,6 +62,14 @@ spec:
         - --metrics-path={{ kube_router_metrics_path }}
         - --metrics-port={{ kube_router_metrics_port }}
 {% endif %}
+{% if kube_router_enable_dsr %}
+{% if container_manager == "docker" %}
+        - --runtime-endpoint=unix:///var/run/docker.sock
+{% endif %}
+{% if container_manager == "containerd" %}
+        - --runtime-endpoint=unix:///run/containerd/containerd.sock
+{% endif %}
+{% endif %}
 {% for arg in kube_router_extra_args %}
         - "{{ arg }}"
 {% endfor %}

@@ -86,9 +94,16 @@ spec:
           privileged: true
         volumeMounts:
 {% if kube_router_enable_dsr %}
+{% if container_manager == "docker" %}
         - name: docker-socket
           mountPath: /var/run/docker.sock
           readOnly: true
+{% endif %}
+{% if container_manager == "containerd" %}
+        - name: containerd-socket
+          mountPath: /run/containerd/containerd.sock
+          readOnly: true
+{% endif %}
 {% endif %}
         - name: lib-modules
           mountPath: /lib/modules

@@ -118,10 +133,18 @@ spec:
       - operator: Exists
       volumes:
 {% if kube_router_enable_dsr %}
+{% if container_manager == "docker" %}
       - name: docker-socket
         hostPath:
           path: /var/run/docker.sock
           type: Socket
+{% endif %}
+{% if container_manager == "containerd" %}
+      - name: containerd-socket
+        hostPath:
+          path: /run/containerd/containerd.sock
+          type: Socket
+{% endif %}
 {% endif %}
       - name: lib-modules
         hostPath:
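For a containerd-based cluster with DSR enabled, the template above would render roughly the following manifest fragment (a sketch, assuming `container_manager: containerd` and `kube_router_enable_dsr: true`):

```yml
containers:
  - name: kube-router
    args:
      - --runtime-endpoint=unix:///run/containerd/containerd.sock
    volumeMounts:
      - name: containerd-socket
        mountPath: /run/containerd/containerd.sock
        readOnly: true
volumes:
  - name: containerd-socket
    hostPath:
      path: /run/containerd/containerd.sock
      type: Socket
```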


@@ -79,4 +79,4 @@ create-vagrant:
 	cp /builds/kargo-ci/kubernetes-sigs-kubespray/inventory/sample/vagrant_ansible_inventory $(INVENTORY)

 delete-vagrant:
 	vagrant destroy -f


@@ -12,3 +12,11 @@ etcd_deployment_type: docker
 # Make docker happy
 docker_containerd_version: latest
+
+# Pin disabling ipip mode to ensure proper upgrade
+ipip: false
+calico_vxlan_mode: Always
+calico_network_backend: bird
+
+# Needed to bypass deprecation check
+ignore_assert_errors: true


@@ -6,3 +6,11 @@ mode: default
 # Docker specific settings:
 container_manager: docker
 etcd_deployment_type: docker
+
+# Pin disabling ipip mode to ensure proper upgrade
+ipip: false
+calico_vxlan_mode: Always
+calico_network_backend: bird
+
+# Needed to bypass deprecation check
+ignore_assert_errors: true


@@ -0,0 +1,15 @@
+$num_instances = 2
+$vm_memory ||= 2048
+$os = "centos"
+
+$kube_master_instances = 1
+$etcd_instances = 1
+
+# For CI we are not worried about data persistence across reboot
+$libvirt_volume_cache = "unsafe"
+
+# Checking for box update can trigger API rate limiting
+# https://www.vagrantup.com/docs/vagrant-cloud/request-limits.html
+$box_check_update = false
+
+$network_plugin = "kube-router"


@@ -0,0 +1,15 @@
+$num_instances = 2
+$vm_memory ||= 2048
+$os = "fedora35"
+
+$kube_master_instances = 1
+$etcd_instances = 1
+
+# For CI we are not worried about data persistence across reboot
+$libvirt_volume_cache = "unsafe"
+
+# Checking for box update can trigger API rate limiting
+# https://www.vagrantup.com/docs/vagrant-cloud/request-limits.html
+$box_check_update = false
+
+$network_plugin = "kube-router"


@@ -0,0 +1,15 @@
+$num_instances = 2
+$vm_memory ||= 2048
+$os = "ubuntu1604"
+
+$kube_master_instances = 1
+$etcd_instances = 1
+
+# For CI we are not worried about data persistence across reboot
+$libvirt_volume_cache = "unsafe"
+
+# Checking for box update can trigger API rate limiting
+# https://www.vagrantup.com/docs/vagrant-cloud/request-limits.html
+$box_check_update = false
+
+$network_plugin = "kube-router"


@@ -0,0 +1,10 @@
+$os = "ubuntu1604"
+
+# For CI we are not worried about data persistence across reboot
+$libvirt_volume_cache = "unsafe"
+
+# Checking for box update can trigger API rate limiting
+# https://www.vagrantup.com/docs/vagrant-cloud/request-limits.html
+$box_check_update = false
+
+$network_plugin = "kube-router"


@@ -62,7 +62,6 @@
 - debug:  # noqa unnamed-task
     var: nca_pod.stdout_lines
-  failed_when: not nca_pod is success
   when: inventory_hostname == groups['kube_control_plane'][0]

 - name: Get netchecker agents

@@ -78,16 +77,7 @@
       agents.content[0] == '{' and
       agents.content|from_json|length >= groups['k8s_cluster']|intersect(ansible_play_hosts)|length * 2
   failed_when: false
-  no_log: true
+  no_log: false
-
-- debug:  # noqa unnamed-task
-    var: agents.content | from_json
-  failed_when: not agents is success and not agents.content=='{}'
-  run_once: true
-  when:
-    - agents.content is defined
-    - agents.content
-    - agents.content[0] == '{'

 - name: Check netchecker status
   uri:

@@ -96,12 +86,12 @@
     return_content: yes
   delegate_to: "{{ groups['kube_control_plane'][0] }}"
   run_once: true
-  register: result
+  register: connectivity_check
   retries: 3
   delay: "{{ agent_report_interval }}"
-  until: result.content|length > 0 and
-    result.content[0] == '{'
+  until: connectivity_check.content|length > 0 and
+    connectivity_check.content[0] == '{'
-  no_log: true
+  no_log: false
   failed_when: false
   when:
     - agents.content != '{}'

@@ -109,20 +99,19 @@
 - debug:  # noqa unnamed-task
     var: ncs_pod
   run_once: true
-  when: not result is success

 - name: Get kube-proxy logs
   command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app=kube-proxy"
   no_log: false
   when:
     - inventory_hostname == groups['kube_control_plane'][0]
-    - not result is success
+    - not connectivity_check is success

 - name: Get logs from other apps
   command: "{{ bin_dir }}/kubectl -n kube-system logs -l k8s-app={{ item }} --all-containers"
   when:
     - inventory_hostname == groups['kube_control_plane'][0]
-    - not result is success
+    - not connectivity_check is success
   no_log: false
   with_items:
     - kube-router

@@ -131,27 +120,51 @@
     - calico-node
     - cilium

-- debug:  # noqa unnamed-task
-    var: result.content | from_json
-  failed_when: not result is success
-  run_once: true
-  when:
-    - not agents.content == '{}'
-    - result.content
-    - result.content[0] == '{'
+- name: Parse agents list
+  set_fact:
+    agents_check_result: "{{ agents.content | from_json }}"
+  delegate_to: "{{ groups['kube_control_plane'][0] }}"
+  run_once: true
+  when:
+    - agents is success
+    - agents.content is defined
+    - agents.content[0] == '{'

-- debug:  # noqa unnamed-task
-    var: result
-  failed_when: not result is success
-  run_once: true
-  when:
-    - not agents.content == '{}'
+- debug:  # noqa unnamed-task
+    var: agents_check_result
+  delegate_to: "{{ groups['kube_control_plane'][0] }}"
+  run_once: true
+  when:
+    - agents_check_result is defined
+
+- name: Parse connectivity check
+  set_fact:
+    connectivity_check_result: "{{ connectivity_check.content | from_json }}"
+  delegate_to: "{{ groups['kube_control_plane'][0] }}"
+  run_once: true
+  when:
+    - connectivity_check is success
+    - connectivity_check.content is defined
+    - connectivity_check.content[0] == '{'
+
+- debug:  # noqa unnamed-task
+    var: connectivity_check_result
+  delegate_to: "{{ groups['kube_control_plane'][0] }}"
+  run_once: true
+  when:
+    - connectivity_check_result is defined

-- debug:  # noqa unnamed-task
-    msg: "Cannot get reports from agents, consider as PASSING"
-  run_once: true
-  when:
-    - agents.content == '{}'
+- name: Check connectivity with all netchecker agents
+  assert:
+    that:
+      - agents_check_result is defined
+      - connectivity_check_result is defined
+      - agents_check_result.keys() | length > 0
+      - not connectivity_check_result.Absent
+      - not connectivity_check_result.Outdated
+    msg: "Connectivity check to netchecker agents failed"
+  delegate_to: "{{ groups['kube_control_plane'][0] }}"
+  run_once: true
 - name: Create macvlan network conf
   # We cannot use only shell: below because Ansible will render the text
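The new connectivity assert keys off fields of the netchecker server's check response. Assuming the upstream netchecker API, a passing response parsed into `connectivity_check_result` would look roughly like this (field names taken from the assertion itself; the exact payload shape is an assumption):

```yml
# Parsed connectivity_check_result (sketch)
Message: "All agents successfully reported back to the server"
Absent: []     # agents that never reported in; must be empty to pass
Outdated: []   # agents with stale reports; must be empty to pass
```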