contiv network support (#1914)
* Add Contiv support. Contiv is a network plugin for Kubernetes and Docker. It supports vlan/vxlan/BGP/Cisco ACI technologies, firewall policies, multiple networks, and bridging pods onto physical networks.
* Update contiv version to 1.1.4 and add SVC_SUBNET to contiv-config.
* Load the openvswitch module as a workaround on CentOS 7.4.
* Set contiv CNI version to 0.1.0.
* Use kube_apiserver_endpoint as K8S_API_SERVER so that contiv talks to an available endpoint whether or not a load balancer sits in front of the API server.
* Make contiv use its own etcd. Previously contiv ran an etcd proxy against the k8s etcd. That works when the etcd hosts are co-located with the contiv etcd proxy, but the k8s peering certs exist only on the etcd group, so the proxy cannot peer with the k8s etcd from other nodes; in addition, netplugin always looks for an etcd endpoint on localhost, which breaks netplugins not running on etcd group nodes. Contiv now runs its own etcd, separate from the k8s one: in leader mode on kube-master nodes (where netmaster runs) and in proxy mode on all other nodes.
* Use cp instead of rsync to copy CNI binaries, since rsync has been removed from hyperkube.
* Make contiv-etcd able to run on master nodes.
* Add an rbac_enabled flag for the contiv pods.
* Add contiv to the CNI network plugin lists.
* Migrate the contiv test to tests/files. Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
* Add required rules for contiv netplugin.
* Handle the JSON returned for fwdMode more robustly.
* Make the contiv etcd port configurable.
* Use a default var instead of templating.
* roles/download/defaults/main.yml: use contiv 1.1.7. Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
This commit is contained in: parent de422c822d · commit e5d353d0a7
30 changed files with 851 additions and 5 deletions
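With `kube_network_plugin: contiv`, this change deploys contiv-etcd, contiv-netplugin, contiv-netmaster and, optionally, contiv-api-proxy as DaemonSets. Not part of the commit itself, but a quick way to sanity-check a finished run, sketched under the assumption that `system_namespace` is the usual `kube-system`:

```bash
# The DaemonSets added by this change should report their pods as ready.
kubectl --namespace kube-system get daemonsets \
  contiv-etcd contiv-netplugin contiv-netmaster contiv-api-proxy

# Run on a master node: the netmaster answers a simple info request
# on its configured port (9999 by default).
curl -s http://127.0.0.1:9999/info
```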
@@ -250,6 +250,10 @@ before_script:
  # stage: deploy-gce-part1
  MOVED_TO_GROUP_VARS: "true"

.ubuntu_contiv_sep_variables: &ubuntu_contiv_sep_variables
  # stage: deploy-gce-special
  MOVED_TO_GROUP_VARS: "true"

.rhel7_weave_variables: &rhel7_weave_variables
  # stage: deploy-gce-part1
  MOVED_TO_GROUP_VARS: "true"

@@ -422,6 +426,17 @@ centos-weave-kubeadm-triggers:
  when: on_success
  only: ['triggers']

ubuntu-contiv-sep:
  stage: deploy-gce-special
  <<: *job
  <<: *gce
  variables:
    <<: *gce_variables
    <<: *ubuntu_contiv_sep_variables
  when: manual
  except: ['triggers']
  only: ['master', /^pr-.*$/]

rhel7-weave:
  stage: deploy-gce-part1
  <<: *job

@@ -59,6 +59,7 @@ Versions of supported components
[flanneld](https://github.com/coreos/flannel/releases) v0.8.0 <br>
[calico](https://docs.projectcalico.org/v2.5/releases/) v2.5.0 <br>
[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
[contiv](https://github.com/contiv/install/releases) v1.0.3 <br>
[weave](http://weave.works/) v2.0.1 <br>
[docker](https://www.docker.com/) v1.13 (see note)<br>
[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)<br>

@@ -93,6 +94,9 @@ You can choose between 4 network plugins. (default: `calico`, except Vagrant use

* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.

* [**contiv**](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
  apply firewall policies, segregate containers in multiple networks and bridge pods onto physical networks.

* [**weave**](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
  (Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).

docs/contiv.md (new file, 74 lines)
@@ -0,0 +1,74 @@
Contiv
======

Here is the [Contiv documentation](http://contiv.github.io/documents/).

## Administrate Contiv

There are two ways to manage Contiv:

* a web UI managed by the api proxy service
* a CLI named `netctl`


### Interfaces

#### The Web Interface

This UI is hosted on all Kubernetes master nodes. The service is available at `https://<one of your master nodes>:10000`.

You can configure the api proxy by overriding the following variables:

```yaml
contiv_enable_api_proxy: true
contiv_api_proxy_port: 10000
contiv_generate_certificate: true
```

The default credentials to log in are: admin/admin.


#### The Command Line Interface

The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster:

```bash
export NETMASTER=http://127.0.0.1:9999
```

The port can be changed by overriding the following variable:

```yaml
contiv_netmaster_port: 9999
```

The CLI doesn't use the authentication process needed by the web interface.
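Beyond exporting `NETMASTER`, the playbooks in this change drive everything through an explicit `--netmaster` flag. The following is not part of the committed file, only a minimal sketch of the same read-only checks by hand, using just the subcommands the Ansible tasks in this commit invoke (the default port 9999 is assumed):

```bash
# Show the global settings (forwarding mode, fabric mode) as JSON.
netctl --netmaster "http://127.0.0.1:9999" global info --json --all

# List the names of existing networks and endpoint groups.
netctl --netmaster "http://127.0.0.1:9999" net ls -q
netctl --netmaster "http://127.0.0.1:9999" group ls -q
```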
### Network configuration

The default configuration uses VXLAN to create an overlay. Two networks are created by default:

* `contivh1`: an infrastructure network. It allows nodes to access the pod IPs. It is mandatory in a Kubernetes environment that uses VXLAN.
* `default-net`: the default network that hosts pods.

You can change the default network configuration by overriding the `contiv_networks` variable.

The default forward mode is set to routing:

```yaml
contiv_fwd_mode: routing
```

The following is an example of how you can use VLAN instead of VXLAN:

```yaml
contiv_fwd_mode: bridge
contiv_vlan_interface: eth0
contiv_networks:
  - name: default-net
    subnet: "{{ kube_pods_subnet }}"
    gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
    encap: vlan
    pkt_tag: 10
```
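Under the hood, the role's configure step turns each `contiv_networks` entry into a `netctl net create` call. Not part of the committed file, only a sketch of the command equivalent to the default `contivh1` entry (subnet and gateway values come from the role defaults added later in this commit; the playbook normally runs this for you):

```bash
# Equivalent of the default `contivh1` entry from contiv_networks.
netctl --netmaster "http://127.0.0.1:9999" net create \
  --encap=vxlan \
  --gateway=10.233.128.1 \
  --nw-type=infra \
  --pkt-tag=0 \
  --subnet=10.233.128.0/18 \
  --tenant=default \
  contivh1
```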
@@ -65,7 +65,7 @@ kube_users:
# kube_oidc_groups_claim: groups


-# Choose network plugin (calico, weave or flannel)
+# Choose network plugin (calico, contiv, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico

@@ -38,6 +38,7 @@ flannel_version: "v0.9.1"
flannel_cni_version: "v0.3.0"
weave_version: 2.0.5
pod_infra_version: 3.0
contiv_version: 1.1.7

# Download URLs
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"

@@ -89,6 +90,10 @@ weave_kube_image_repo: "weaveworks/weave-kube"
weave_kube_image_tag: "{{ weave_version }}"
weave_npc_image_repo: "weaveworks/weave-npc"
weave_npc_image_tag: "{{ weave_version }}"
contiv_image_repo: "contiv/netplugin"
contiv_image_tag: "{{ contiv_version }}"
contiv_auth_proxy_image_repo: "contiv/auth_proxy"
contiv_auth_proxy_image_tag: "{{ contiv_version }}"

nginx_image_repo: nginx
nginx_image_tag: 1.13

@@ -224,6 +229,18 @@ downloads:
    repo: "{{ weave_npc_image_repo }}"
    tag: "{{ weave_npc_image_tag }}"
    sha256: "{{ weave_npc_digest_checksum|default(None) }}"
  contiv:
    enabled: "{{ kube_network_plugin == 'contiv' }}"
    container: true
    repo: "{{ contiv_image_repo }}"
    tag: "{{ contiv_image_tag }}"
    sha256: "{{ contiv_digest_checksum|default(None) }}"
  contiv_auth_proxy:
    enabled: "{{ kube_network_plugin == 'contiv' }}"
    container: true
    repo: "{{ contiv_auth_proxy_image_repo }}"
    tag: "{{ contiv_auth_proxy_image_tag }}"
    sha256: "{{ contiv_auth_proxy_digest_checksum|default(None) }}"
  pod_infra:
    enabled: true
    container: true
@@ -0,0 +1,72 @@
---

- name: Contiv | Wait for netmaster
  uri:
    url: "http://127.0.0.1:{{ contiv_netmaster_port }}/info"
  register: result
  until: result.status is defined and result.status == 200
  retries: 10
  delay: 5

- name: Contiv | Get global configuration
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      global info --json --all
  register: global_config
  run_once: true
  changed_when: false

- set_fact:
    contiv_global_config: "{{ (global_config.stdout|from_json)[0] }}"

- name: Contiv | Set global forwarding mode
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      global set --fwd-mode={{ contiv_fwd_mode }}
  when: "contiv_global_config.get('fwdMode', '') != contiv_fwd_mode"
  run_once: true

- name: Contiv | Set global fabric mode
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      global set --fabric-mode={{ contiv_fabric_mode }}
  when: "contiv_global_config.networkInfraType != contiv_fabric_mode"
  run_once: true

- name: Contiv | Get existing networks
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      net ls -q
  register: net_result
  run_once: true
  changed_when: false

- name: Contiv | Create networks
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      net create \
        --encap={{ item.encap|default("vxlan") }} \
        --gateway={{ item.gateway }} \
        --nw-type={{ item.nw_type|default("data") }} \
        --pkt-tag={{ item.pkt_tag|default("0") }} \
        --subnet={{ item.subnet }} \
        --tenant={{ item.tenant|default("default") }} \
        "{{ item.name }}"
  with_items: "{{ contiv_networks }}"
  when: item['name'] not in net_result.stdout_lines
  run_once: true

- name: Contiv | Check if default group exists
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      group ls -q
  register: group_result
  run_once: true
  changed_when: false

- name: Contiv | Create default group
  command: |
    {{ bin_dir }}/netctl --netmaster "http://127.0.0.1:{{ contiv_netmaster_port }}" \
      group create default-net default
  when: "'default' not in group_result.stdout_lines"
  run_once: true
roles/kubernetes-apps/network_plugin/contiv/tasks/main.yml (new file, 15 lines)

@@ -0,0 +1,15 @@
---

- name: Contiv | Create Kubernetes resources
  kube:
    name: "{{ item.item.name }}"
    namespace: "{{ system_namespace }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.item.type }}"
    filename: "{{ contiv_config_dir }}/{{ item.item.file }}"
    state: "{{ item.changed | ternary('latest','present') }}"
  with_items: "{{ contiv_manifests_results.results }}"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true

- include: configure.yml
@@ -15,6 +15,11 @@ dependencies:
    tags:
      - flannel

  - role: kubernetes-apps/network_plugin/contiv
    when: kube_network_plugin == 'contiv'
    tags:
      - contiv

  - role: kubernetes-apps/network_plugin/weave
    when: kube_network_plugin == 'weave'
    tags:
@@ -55,7 +55,7 @@ KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"


KUBELET_ARGS="{{ kubelet_args_base }} {{ kubelet_args_dns }} {{ kubelet_reserve }}"
-{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave"] %}
+{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave", "contiv"] %}
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
{% elif kube_network_plugin is defined and kube_network_plugin == "cloud" %}
KUBELET_NETWORK_PLUGIN="--hairpin-mode=promiscuous-bridge --network-plugin=kubenet"
@@ -35,7 +35,7 @@ ExecStart=/usr/bin/rkt run \
{% if local_volumes_enabled == true %}
--volume local-volume-base-dir,kind=host,source={{ local_volume_base_dir }},readOnly=false,recursive=true \
{% endif %}
-{% if kube_network_plugin in ["calico", "weave", "canal", "flannel"] %}
+{% if kube_network_plugin in ["calico", "weave", "canal", "flannel", "contiv"] %}
--volume etc-cni,kind=host,source=/etc/cni,readOnly=true \
--volume opt-cni,kind=host,source=/opt/cni,readOnly=true \
--volume var-lib-cni,kind=host,source=/var/lib/cni,readOnly=false \
@@ -76,7 +76,7 @@ KUBELET_HOSTNAME="--hostname-override={{ kube_override_hostname }}"
{% endif %}

KUBELET_ARGS="{{ kubelet_args_base }} {{ kubelet_args_dns }} {{ kubelet_args_kubeconfig }} {{ kubelet_reserve }} {{ node_labels }} {% if kube_feature_gates %} --feature-gates={{ kube_feature_gates|join(',') }} {% endif %} {% if kubelet_custom_flags is string %} {{kubelet_custom_flags}} {% else %}{% for flag in kubelet_custom_flags %} {{flag}} {% endfor %}{% endif %}"
-{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave"] %}
+{% if kube_network_plugin is defined and kube_network_plugin in ["calico", "canal", "flannel", "weave", "contiv"] %}
KUBELET_NETWORK_PLUGIN="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
{% elif kube_network_plugin is defined and kube_network_plugin == "weave" %}
DOCKER_SOCKET="--docker-endpoint=unix:/var/run/weave/weave.sock"
@@ -89,13 +89,14 @@
    - "/etc/cni/net.d"
    - "/opt/cni/bin"
  when:
-    - kube_network_plugin in ["calico", "weave", "canal", "flannel"]
+    - kube_network_plugin in ["calico", "weave", "canal", "flannel", "contiv"]
    - inventory_hostname in groups['k8s-cluster']
  tags:
    - network
    - calico
    - weave
    - canal
    - contiv
    - bootstrap-os

- include: resolvconf.yml
roles/network_plugin/contiv/defaults/main.yml (new file, 41 lines)

@@ -0,0 +1,41 @@
---

contiv_config_dir: "{{ kube_config_dir }}/contiv"
contiv_etcd_conf_dir: "/etc/contiv/etcd/"
contiv_etcd_data_dir: "/var/lib/etcd/contiv-data"
contiv_netmaster_port: 9999
contiv_cni_version: 0.1.0

contiv_etcd_listen_ip: "{{ ip | default(ansible_default_ipv4['address']) }}"
contiv_etcd_listen_port: 6666
contiv_etcd_peer_port: 6667
contiv_etcd_ad_urls: http://{{ contiv_etcd_listen_ip }}:{{ contiv_etcd_listen_port }}
contiv_etcd_peer_urls: http://{{ contiv_etcd_listen_ip }}:{{ contiv_etcd_peer_port }}
contiv_etcd_listen_urls:
  - http://{{ contiv_etcd_listen_ip }}:{{ contiv_etcd_listen_port }}
  - http://127.0.0.1:{{ contiv_etcd_listen_port }}

# Parameters for Contiv api-proxy
contiv_enable_api_proxy: true
contiv_api_proxy_port: 10000
contiv_generate_certificate: true

# Forwarding mode: bridge or routing
contiv_fwd_mode: routing

# Fabric mode: aci, aci-opflex or default
contiv_fabric_mode: default

# Dataplane interface
contiv_vlan_interface: ""

# Default network configuration
contiv_networks:
  - name: contivh1
    subnet: "10.233.128.0/18"
    gateway: "10.233.128.1"
    nw_type: infra
  - name: default-net
    subnet: "{{ kube_pods_subnet }}"
    gateway: "{{ kube_pods_subnet|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
    pkt_tag: 10
roles/network_plugin/contiv/files/generate-certificate.sh (new file, 23 lines)

@@ -0,0 +1,23 @@
#!/bin/bash

set -euo pipefail

PREFIX="/var/contiv"
KEY_PATH="$PREFIX/auth_proxy_key.pem"
CERT_PATH="$PREFIX/auth_proxy_cert.pem"

# if both files exist, just exit
if [[ -f $KEY_PATH && -f $CERT_PATH ]]; then
    exit 0
fi

mkdir -p "$PREFIX"

rm -f $KEY_PATH
rm -f $CERT_PATH

openssl genrsa -out $KEY_PATH 2048 >/dev/null 2>&1
openssl req -new -x509 -sha256 -days 3650 \
    -key $KEY_PATH \
    -out $CERT_PATH \
    -subj "/C=US/ST=CA/L=San Jose/O=CPSG/OU=IT Department/CN=auth-local.cisco.com"
roles/network_plugin/contiv/handlers/main.yml (new file, 6 lines)

@@ -0,0 +1,6 @@
---
- name: Contiv | Reload kernel modules
  service:
    name: systemd-modules-load
    state: restarted
    enabled: yes
roles/network_plugin/contiv/tasks/main.yml (new file, 121 lines)

@@ -0,0 +1,121 @@
---
- name: Contiv | Load openvswitch kernel module
  copy:
    dest: /etc/modules-load.d/openvswitch.conf
    content: "openvswitch"
  notify:
    - Contiv | Reload kernel modules

- name: Contiv | Create contiv etcd directories
  file:
    dest: "{{ item }}"
    state: directory
    mode: 0750
    owner: root
    group: root
  with_items:
    - "{{ contiv_etcd_conf_dir }}"
    - "{{ contiv_etcd_data_dir }}"

- name: Contiv | Create contiv etcd config env
  template:
    src: contiv-etcd.env.j2
    dest: "{{ contiv_etcd_conf_dir }}/contiv-etcd.env"

- set_fact:
    contiv_config_dir: "{{ contiv_config_dir }}"
    contiv_enable_api_proxy: "{{ contiv_enable_api_proxy }}"
    contiv_fabric_mode: "{{ contiv_fabric_mode }}"
    contiv_fwd_mode: "{{ contiv_fwd_mode }}"
    contiv_netmaster_port: "{{ contiv_netmaster_port }}"
    contiv_networks: "{{ contiv_networks }}"
    contiv_manifests:
      - {name: contiv-config, file: contiv-config.yml, type: configmap}
      - {name: contiv-netmaster, file: contiv-netmaster-clusterrolebinding.yml, type: clusterrolebinding}
      - {name: contiv-netmaster, file: contiv-netmaster-clusterrole.yml, type: clusterrole}
      - {name: contiv-netmaster, file: contiv-netmaster-serviceaccount.yml, type: serviceaccount}
      - {name: contiv-netplugin, file: contiv-netplugin-clusterrolebinding.yml, type: clusterrolebinding}
      - {name: contiv-netplugin, file: contiv-netplugin-clusterrole.yml, type: clusterrole}
      - {name: contiv-netplugin, file: contiv-netplugin-serviceaccount.yml, type: serviceaccount}
      - {name: contiv-etcd, file: contiv-etcd.yml, type: daemonset}
      - {name: contiv-netplugin, file: contiv-netplugin.yml, type: daemonset}
      - {name: contiv-netmaster, file: contiv-netmaster.yml, type: daemonset}

- set_fact:
    contiv_manifests: |-
      {% set _ = contiv_manifests.append({"name": "contiv-api-proxy", "file": "contiv-api-proxy.yml", "type": "daemonset"}) %}
      {{ contiv_manifests }}
  when: contiv_enable_api_proxy

- name: Contiv | Create /var/contiv
  file:
    path: /var/contiv
    state: directory

- name: Contiv | Create contiv config directory
  file:
    dest: "{{ contiv_config_dir }}"
    state: directory
    mode: 0755
    owner: root
    group: root

- name: Contiv | Install all Kubernetes resources
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ contiv_config_dir }}/{{ item.file }}"
  with_items: "{{ contiv_manifests }}"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true
  register: contiv_manifests_results

- name: Contiv | Generate contiv-api-proxy certificates
  script: generate-certificate.sh
  args:
    creates: /var/contiv/auth_proxy_key.pem
  when: "contiv_enable_api_proxy and contiv_generate_certificate"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true

- name: Contiv | Fetch the generated certificate
  fetch:
    src: "/var/contiv/{{ item }}"
    dest: "/tmp/kubespray-contiv-{{ item }}"
    flat: yes
  with_items:
    - auth_proxy_key.pem
    - auth_proxy_cert.pem
  when: "contiv_enable_api_proxy and contiv_generate_certificate"
  delegate_to: "{{ groups['kube-master'][0] }}"
  run_once: true

- name: Contiv | Copy the generated certificate on nodes
  copy:
    src: "/tmp/kubespray-contiv-{{ item }}"
    dest: "/var/contiv/{{ item }}"
  with_items:
    - auth_proxy_key.pem
    - auth_proxy_cert.pem
  when: "inventory_hostname != groups['kube-master'][0]
         and inventory_hostname in groups['kube-master']
         and contiv_enable_api_proxy and contiv_generate_certificate"

- name: Contiv | Copy cni plugins from hyperkube
  command: "{{ docker_bin_dir }}/docker run --rm -v /opt/cni/bin:/cnibindir {{ hyperkube_image_repo }}:{{ hyperkube_image_tag }} /bin/bash -c '/bin/cp -a /opt/cni/bin/* /cnibindir/'"
  register: cni_task_result
  until: cni_task_result.rc == 0
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  changed_when: false
  tags: [hyperkube, upgrade]

- name: Contiv | Copy netctl binary from docker container
  command: sh -c "{{ docker_bin_dir }}/docker rm -f netctl-binarycopy;
           {{ docker_bin_dir }}/docker create --name netctl-binarycopy {{ contiv_image_repo }}:{{ contiv_image_tag }} &&
           {{ docker_bin_dir }}/docker cp netctl-binarycopy:/contiv/bin/netctl {{ bin_dir }}/netctl &&
           {{ docker_bin_dir }}/docker rm -f netctl-binarycopy"
  register: contiv_task_result
  until: contiv_task_result.rc == 0
  retries: 4
  delay: "{{ retry_stagger | random + 3 }}"
  changed_when: false
@@ -0,0 +1,59 @@
# This manifest deploys the Contiv API Proxy Server on Kubernetes.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: contiv-api-proxy
  namespace: {{ system_namespace }}
  labels:
    k8s-app: contiv-api-proxy
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: contiv-api-proxy
      namespace: {{ system_namespace }}
      labels:
        k8s-app: contiv-api-proxy
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # The API proxy must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      hostPID: true
      nodeSelector:
        node-role.kubernetes.io/master: "true"
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
{% if rbac_enabled %}
      serviceAccountName: contiv-netmaster
{% endif %}
      containers:
        - name: contiv-api-proxy
          image: {{ contiv_auth_proxy_image_repo }}:{{ contiv_auth_proxy_image_tag }}
          args:
            - --listen-address=0.0.0.0:{{ contiv_api_proxy_port }}
            - --tls-key-file=/var/contiv/auth_proxy_key.pem
            - --tls-certificate=/var/contiv/auth_proxy_cert.pem
            - --data-store-address=$(CONTIV_ETCD)
            - --netmaster-address=127.0.0.1:{{ contiv_netmaster_port }}
          env:
            - name: NO_NETMASTER_STARTUP_CHECK
              value: "0"
            - name: CONTIV_ETCD
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: cluster_store
          securityContext:
            privileged: false
          volumeMounts:
            - mountPath: /var/contiv
              name: var-contiv
              readOnly: false
      volumes:
        - name: var-contiv
          hostPath:
            path: /var/contiv
roles/network_plugin/contiv/templates/contiv-config.yml.j2 (new file, 29 lines)

@@ -0,0 +1,29 @@
# This ConfigMap is used to configure a self-hosted Contiv installation.
# It can be used with an external cluster store (etcd or consul) or
# with the etcd instance installed as contiv-etcd.
kind: ConfigMap
apiVersion: v1
metadata:
  name: contiv-config
  namespace: {{ system_namespace }}
data:
  # The location of your cluster store. This is set to the
  # advertise-client value below from the contiv-etcd service.
  # Change it to an external etcd/consul instance if required.
  cluster_store: "etcd://127.0.0.1:{{ contiv_etcd_listen_port }}"
  # The CNI network configuration to install on each node.
  cni_config: |-
    {
      "cniVersion": "{{ contiv_cni_version }}",
      "name": "contiv-net",
      "type": "contivk8s"
    }
  config: |-
    {
      "K8S_API_SERVER": "{{ kube_apiserver_endpoint }}",
      "K8S_CA": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
      "K8S_KEY": "",
      "K8S_CERT": "",
      "K8S_TOKEN": "",
      "SVC_SUBNET": "{{ kube_service_addresses }}"
    }
roles/network_plugin/contiv/templates/contiv-etcd.env.j2 (new file, 22 lines)

@@ -0,0 +1,22 @@
# contiv etcd config
{% if inventory_hostname in groups['kube-master'] %}
export ETCD_DATA_DIR=/var/lib/etcd/contiv-data
export ETCD_ADVERTISE_CLIENT_URLS={{ contiv_etcd_ad_urls }}
export ETCD_INITIAL_ADVERTISE_PEER_URLS={{ contiv_etcd_peer_urls }}
export ETCD_LISTEN_PEER_URLS={{ contiv_etcd_peer_urls }}
export ETCD_LISTEN_CLIENT_URLS={{ contiv_etcd_listen_urls | join(",") }}
export ETCD_NAME=
{%- for host in groups['kube-master'] -%}
{%- if host == inventory_hostname -%}
contiv_etcd{{ loop.index }}
{%- endif %}
{%- endfor %}

{% else %}
export ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:{{ contiv_etcd_listen_port }}
export ETCD_PROXY=on
{% endif %}
export ETCD_INITIAL_CLUSTER=
{%- for host in groups['kube-master'] -%}
contiv_etcd{{ loop.index }}=http://{{ hostvars[host]['ip'] | default(hostvars[host].ansible_default_ipv4['address']) }}:{{ contiv_etcd_peer_port }},
{%- endfor -%}
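For orientation only (not part of the commit): a hypothetical rendering of this template on the first of two kube-master nodes, assuming hosts at 10.0.0.1 (this node) and 10.0.0.2 and the default ports 6666/6667. Note that the final loop emits a trailing comma after the last ETCD_INITIAL_CLUSTER entry.

```bash
# Assumed: two masters at 10.0.0.1 and 10.0.0.2, default client/peer ports 6666/6667.
export ETCD_DATA_DIR=/var/lib/etcd/contiv-data
export ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:6666
export ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.1:6667
export ETCD_LISTEN_PEER_URLS=http://10.0.0.1:6667
export ETCD_LISTEN_CLIENT_URLS=http://10.0.0.1:6666,http://127.0.0.1:6666
export ETCD_NAME=contiv_etcd1
export ETCD_INITIAL_CLUSTER=contiv_etcd1=http://10.0.0.1:6667,contiv_etcd2=http://10.0.0.2:6667,

# On any node outside the kube-master group, the same template renders a proxy instead:
export ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:6666
export ETCD_PROXY=on
export ETCD_INITIAL_CLUSTER=contiv_etcd1=http://10.0.0.1:6667,contiv_etcd2=http://10.0.0.2:6667,
```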
roles/network_plugin/contiv/templates/contiv-etcd.yml.j2 (new file, 44 lines)

@@ -0,0 +1,44 @@
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: contiv-etcd
  namespace: {{ system_namespace }}
  labels:
    k8s-app: contiv-etcd
spec:
  selector:
    matchLabels:
      k8s-app: contiv-etcd
  template:
    metadata:
      labels:
        k8s-app: contiv-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: contiv-etcd
          image: {{ etcd_image_repo }}:{{ etcd_image_tag }}
          command: ["sh","-c"]
          args:
            - '. {{ contiv_etcd_conf_dir }}/contiv-etcd.env && /usr/local/bin/etcd'
          volumeMounts:
            - name: etc-contiv-etcd
              mountPath: {{ contiv_etcd_conf_dir }}
            - name: var-lib-etcd-contiv-data
              mountPath: {{ contiv_etcd_data_dir }}
          securityContext:
            privileged: true
      volumes:
        - name: etc-contiv-etcd
          hostPath:
            path: {{ contiv_etcd_conf_dir }}
        - name: var-lib-etcd-contiv-data
          hostPath:
            path: {{ contiv_etcd_data_dir }}
@@ -0,0 +1,18 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: contiv-netmaster
  namespace: {{ system_namespace }}
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - pods
      - nodes
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list
      - update
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: contiv-netmaster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contiv-netmaster
subjects:
  - kind: ServiceAccount
    name: contiv-netmaster
    namespace: {{ system_namespace }}
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contiv-netmaster
  namespace: {{ system_namespace }}
  labels:
    kubernetes.io/cluster-service: "true"
@@ -0,0 +1,90 @@
# This manifest deploys the Contiv API Server on Kubernetes.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: contiv-netmaster
  namespace: {{ system_namespace }}
  labels:
    k8s-app: contiv-netmaster
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: contiv-netmaster
      namespace: {{ system_namespace }}
      labels:
        k8s-app: contiv-netmaster
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # The netmaster must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      hostPID: true
      nodeSelector:
        node-role.kubernetes.io/master: "true"
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
{% if rbac_enabled %}
      serviceAccountName: contiv-netmaster
{% endif %}
      containers:
        - name: contiv-netmaster
          image: {{ contiv_image_repo }}:{{ contiv_image_tag }}
          args:
            - -m
            - -pkubernetes
          env:
            - name: CONTIV_ETCD
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: cluster_store
            - name: CONTIV_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: config
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /etc/openvswitch
              name: etc-openvswitch
              readOnly: false
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: false
            - mountPath: /var/run
              name: var-run
              readOnly: false
            - mountPath: /var/contiv
              name: var-contiv
              readOnly: false
            - mountPath: /etc/kubernetes/ssl
              name: etc-kubernetes-ssl
              readOnly: false
            - mountPath: /opt/cni/bin
              name: cni-bin-dir
              readOnly: false
      volumes:
        # Used by contiv-netmaster
        - name: etc-openvswitch
          hostPath:
            path: /etc/openvswitch
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run
          hostPath:
            path: /var/run
        - name: var-contiv
          hostPath:
            path: /var/contiv
        - name: etc-kubernetes-ssl
          hostPath:
            path: /etc/kubernetes/ssl
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
@@ -0,0 +1,21 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: contiv-netplugin
  namespace: {{ system_namespace }}
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - endpoints
      - nodes
      - namespaces
      - networkpolicies
      - pods
      - services
    verbs:
      - watch
      - list
      - update
      - get
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: contiv-netplugin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contiv-netplugin
subjects:
  - kind: ServiceAccount
    name: contiv-netplugin
    namespace: {{ system_namespace }}
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contiv-netplugin
  namespace: {{ system_namespace }}
  labels:
    kubernetes.io/cluster-service: "true"
roles/network_plugin/contiv/templates/contiv-netplugin.yml.j2 (new file, 116 lines)

@@ -0,0 +1,116 @@
# This manifest installs contiv-netplugin container, as well
# as the Contiv CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: contiv-netplugin
  namespace: {{ system_namespace }}
  labels:
    k8s-app: contiv-netplugin
spec:
  selector:
    matchLabels:
      k8s-app: contiv-netplugin
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: contiv-netplugin
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
{% if rbac_enabled %}
      serviceAccountName: contiv-netplugin
{% endif %}
      containers:
        # Runs netplugin container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: contiv-netplugin
          image: {{ contiv_image_repo }}:{{ contiv_image_tag }}
          args:
            - -pkubernetes
            - -x
          env:
            - name: VLAN_IF
              value: {{ contiv_vlan_interface }}
            - name: VTEP_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CONTIV_ETCD
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: cluster_store
            - name: CONTIV_CNI_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: cni_config
            - name: CONTIV_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: contiv-config
                  key: config
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /etc/openvswitch
              name: etc-openvswitch
              readOnly: false
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: false
            - mountPath: /var/run
              name: var-run
              readOnly: false
            - mountPath: /var/contiv
              name: var-contiv
              readOnly: false
            - mountPath: /etc/kubernetes/pki
              name: etc-kubernetes-pki
              readOnly: false
            - mountPath: /etc/kubernetes/ssl
              name: etc-kubernetes-ssl
              readOnly: false
            - mountPath: /opt/cni/bin
              name: cni-bin-dir
              readOnly: false
            - mountPath: /etc/cni/net.d/
              name: etc-cni-dir
              readOnly: false
      volumes:
        # Used by contiv-netplugin
        - name: etc-openvswitch
          hostPath:
            path: /etc/openvswitch
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run
          hostPath:
            path: /var/run
        - name: var-contiv
          hostPath:
            path: /var/contiv
        - name: etc-kubernetes-pki
          hostPath:
            path: /etc/kubernetes/pki
        - name: etc-kubernetes-ssl
          hostPath:
            path: /etc/kubernetes/ssl
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: etc-cni-dir
          hostPath:
            path: /etc/cni/net.d/
@@ -20,5 +20,10 @@ dependencies:
    tags:
      - canal

  - role: network_plugin/contiv
    when: kube_network_plugin == 'contiv'
    tags:
      - contiv

  - role: network_plugin/cloud
    when: kube_network_plugin == 'cloud'
tests/files/ubuntu-contiv-sep.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
# Instance settings
cloud_image_family: ubuntu-1604-lts
cloud_region: us-west1-a
mode: separate

# Deployment settings
kube_network_plugin: contiv
deploy_netchecker: true
kubedns_min_replicas: 1
cloud_provider: gce