Merge branch 'master' into multi-arch-support

This commit is contained in commit 26bf719a02.
191 changed files with 2050 additions and 2634 deletions.
.gitignore (vendored): 2 changes

@@ -12,9 +12,9 @@ temp
*.tfstate
*.tfstate.backup
contrib/terraform/aws/credentials.tfvars
**/*.sw[pon]
/ssh-bastion.conf
**/*.sw[pon]
*~
vagrant/

# Byte-compiled / optimized / DLL files
@@ -93,7 +93,7 @@ before_script:
# Check out latest tag if testing upgrade
# Uncomment when gitlab kubespray repo has tags
#- test "${UPGRADE_TEST}" != "false" && git fetch --all && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
- test "${UPGRADE_TEST}" != "false" && git checkout f7d52564aad2ff8e337634951beb4a881c0e8aa6
- test "${UPGRADE_TEST}" != "false" && git checkout 8b3ce6e418ccf48171eb5b3888ee1af84f8d71ba
# Checkout the CI vars file so it is available
- test "${UPGRADE_TEST}" != "false" && git checkout "${CI_BUILD_REF}" tests/files/${CI_JOB_NAME}.yml
# Workaround https://github.com/kubernetes-incubator/kubespray/issues/2021
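For context, the upgrade test above pins the "old" version by commit hash rather than by tag. A minimal sketch of the same flow run by hand outside CI, assuming a kubespray checkout and an existing inventory (paths are illustrative):

```sh
# deploy from the pinned "old" commit first
git checkout 8b3ce6e418ccf48171eb5b3888ee1af84f8d71ba
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b

# then return to the branch under test and run the upgrade playbook
git checkout -
ansible-playbook -i inventory/mycluster/hosts.ini upgrade-cluster.yml -b
```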
OWNERS: 12 changes

@@ -1,9 +1,7 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/kubernetes/blob/master/docs/devel/owners.md
# https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md

owners:
- Smana
- ant31
- bogdando
- mattymo
- rsmitty
approvers:
- kubespray-approvers
reviewers:
- kubespray-reviewers
OWNERS_ALIASES (new file): 17 changes

@@ -0,0 +1,17 @@
aliases:
  kubespray-approvers:
    - ant31
    - mattymo
    - atoms
    - chadswen
    - rsmitty
    - bogdando
    - bradbeam
    - woopstar
    - riverzhang
    - holser
    - smana
  kubespray-reviewers:
    - jjungnickel
    - archifleks
    - chapsuk
README.md: 62 changes

@@ -5,11 +5,11 @@ Deploy a Production Ready Kubernetes Cluster

If you have questions, join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.

- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
- **High available** cluster
- **Composable** (Choice of the network plugin for instance)
- Support most popular **Linux distributions**
- **Continuous integration tests**
- Can be deployed on **AWS, GCE, Azure, OpenStack, vSphere or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports most popular **Linux distributions**
- **Continuous integration tests**

Quick Start
-----------
@@ -17,6 +17,7 @@ Quick Start
To deploy the cluster you can use :

### Ansible

    # Install dependencies from ``requirements.txt``
    sudo pip install -r requirements.txt

@@ -36,19 +37,16 @@ To deploy the cluster you can use :

### Vagrant

For Vagrant we need to install python dependencies for provisioning tasks.\
Check if Python and pip are installed:
```sh
python -v && pip -v
```
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:

If this returns the version of the software, you're good to go. If not, download and install Python from here https://www.python.org/downloads/source/
Install the necessary requirements

    python -V && pip -V

```sh
sudo pip install -r requirements.txt
vagrant up
```
If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements

    sudo pip install -r requirements.txt
    vagrant up

Documents
---------
@@ -88,19 +86,25 @@ Supported Linux Distributions

Note: Upstart/SysV init based OS types are not supported.

Versions of supported components
--------------------------------
Supported Components
--------------------

- [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.10.2
- [etcd](https://github.com/coreos/etcd/releases) v3.2.16
- [flanneld](https://github.com/coreos/flannel/releases) v0.10.0
- [calico](https://docs.projectcalico.org/v2.6/releases/) v2.6.8
- [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
- [cilium](https://github.com/cilium/cilium) v1.0.0-rc8
- [contiv](https://github.com/contiv/install/releases) v1.1.7
- [weave](http://weave.works/) v2.3.0
- [docker](https://www.docker.com/) v17.03 (see note)
- [rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 (see Note 2)
- Core
  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.11.2
  - [etcd](https://github.com/coreos/etcd) v3.2.18
  - [docker](https://www.docker.com/) v17.03 (see note)
  - [rkt](https://github.com/rkt/rkt) v1.21.0 (see Note 2)
- Network Plugin
  - [calico](https://github.com/projectcalico/calico) v2.6.8
  - [canal](https://github.com/projectcalico/canal) (given calico/flannel versions)
  - [cilium](https://github.com/cilium/cilium) v1.1.2
  - [contiv](https://github.com/contiv/install) v1.1.7
  - [flanneld](https://github.com/coreos/flannel) v0.10.0
  - [weave](https://github.com/weaveworks/weave) v2.4.0
- Application
  - [cephfs-provisioner](https://github.com/kubernetes-incubator/external-storage) v1.1.0-k8s1.10
  - [cert-manager](https://github.com/jetstack/cert-manager) v0.4.1
  - [ingress-nginx](https://github.com/kubernetes/ingress-nginx) v0.18.0

Note: kubernetes doesn't support newer docker versions. Among other things kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
@@ -135,7 +139,7 @@ You can choose between 6 network plugins. (default: `calico`, except Vagrant use

- [canal](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.

- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.

- [contiv](docs/contiv.md): supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to
  apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
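The version-pinning advice in the README note above can be made concrete. A minimal sketch, not part of the diff itself; package names are illustrative and assume docker-ce is installed:

```sh
# RHEL/CentOS: lock the installed docker-ce version against yum upgrades
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock add docker-ce

# Debian/Ubuntu: hold the package at its current version
sudo apt-mark hold docker-ce
```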
Vagrantfile (vendored): 4 changes

@@ -44,6 +44,8 @@ $kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2

$playbook = "cluster.yml"

$local_release_dir = "/vagrant/temp"

host_vars = {}

@@ -157,7 +159,7 @@ Vagrant.configure("2") do |config|
  # when all the machines are up and ready.
  if i == $num_instances
    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "cluster.yml"
      ansible.playbook = $playbook
      if File.exist?(File.join(File.dirname($inventory), "hosts"))
        ansible.inventory_path = $inventory
      end
@@ -37,7 +37,7 @@
    - role: rkt
      tags: rkt
      when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
    - { role: download, tags: download, skip_downloads: false }
    - { role: download, tags: download, when: "not skip_downloads" }
  environment: "{{proxy_env}}"

- hosts: etcd:k8s-cluster:vault:calico-rr

@@ -51,7 +51,7 @@
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: kubespray-defaults}
    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: true }
    - { role: etcd, tags: etcd, etcd_cluster_setup: true, etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}" }

- hosts: k8s-cluster:calico-rr
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
@@ -9,8 +9,8 @@ Resource Group. It will not install Kubernetes itself, this has to be done in a

## Requirements

- [Install azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-install)
- [Login with azure-cli](https://docs.microsoft.com/en-us/azure/xplat-cli-connect)
- [Install azure-cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- [Login with azure-cli](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest)
- Dedicated Resource Group created in the Azure Portal or through azure-cli

## Configuration through group_vars/all
@@ -1 +1 @@
../../../inventory/group_vars
../../../inventory/local/group_vars
@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "4.1"

# Gluster configuration.
gluster_mount_dir: /mnt/gluster
@@ -2,7 +2,7 @@
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
glusterfs_ppa_version: "3.12"

# Gluster configuration.
gluster_mount_dir: /mnt/gluster
@@ -1,2 +1,2 @@
---
glusterfs_daemon: glusterfs-server
glusterfs_daemon: glusterd
@@ -17,10 +17,10 @@ This project will create:
- Export the variables for your AWS credentials or edit `credentials.tfvars`:

```
export AWS_ACCESS_KEY_ID="www"
export AWS_SECRET_ACCESS_KEY="xxx"
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
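The `TF_VAR_` prefix matters because Terraform only maps environment variables of the form `TF_VAR_<name>` onto declared input variables; plain `AWS_*` exports are invisible to variable interpolation. A minimal usage sketch, assuming the project declares matching variables such as `variable "AWS_ACCESS_KEY_ID" {}` (variable names here are illustrative):

```sh
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"

# Terraform resolves var.AWS_ACCESS_KEY_ID from TF_VAR_AWS_ACCESS_KEY_ID
terraform plan -var-file=credentials.tfvars
```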
@@ -181,7 +181,7 @@ data "template_file" "inventory" {

resource "null_resource" "inventories" {
  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ../../../inventory/hosts"
    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
  }

  triggers {
@@ -31,3 +31,5 @@ default_tags = {
# Env = "devtest"
# Product = "kubernetes"
}

inventory_file = "../../../inventory/hosts"
@@ -103,3 +103,7 @@ variable "default_tags" {
  description = "Default tags for all resources"
  type = "map"
}

variable "inventory_file" {
  description = "Where to store the generated inventory file"
}
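Since the new `inventory_file` variable has no default in `variables.tf`, the value in `terraform.tfvars` supplies it, and it can also be overridden per run. A minimal sketch (the path is illustrative):

```sh
terraform apply -var-file=credentials.tfvars \
  -var 'inventory_file=../../../inventory/mycluster/hosts'
```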
@@ -32,7 +32,11 @@ floating IP addresses or not.
- Kubernetes worker nodes

Note that the Ansible script will report an invalid configuration if you wind up
with an even number of etcd instances since that is not a valid configuration.
with an even number of etcd instances since that is not a valid configuration. This
restriction includes standalone etcd nodes that are deployed in a cluster along with
master nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there are
now six total etcd replicas.

### GlusterFS
The Terraform configuration supports provisioning of an optional GlusterFS

@@ -219,6 +223,7 @@ For your cluster, edit `inventory/$CLUSTER/cluster.tf`.
|`number_of_gfs_nodes_no_floating_ip` | Number of gluster servers to provision. |
| `gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
|`supplementary_master_groups` | To add ansible groups to the masters, such as `kube-node` for tainting them as nodes, empty by default. |
|`supplementary_node_groups` | To add ansible groups to the nodes, such as `kube-ingress` for running ingress controller pods, empty by default. |

#### Terraform state files
@@ -3,6 +3,7 @@ module "network" {

  external_net    = "${var.external_net}"
  network_name    = "${var.network_name}"
  subnet_cidr     = "${var.subnet_cidr}"
  cluster_name    = "${var.cluster_name}"
  dns_nameservers = "${var.dns_nameservers}"
}

@@ -24,6 +25,7 @@ module "compute" {
  source = "modules/compute"

  cluster_name                  = "${var.cluster_name}"
  az_list                       = "${var.az_list}"
  number_of_k8s_masters         = "${var.number_of_k8s_masters}"
  number_of_k8s_masters_no_etcd = "${var.number_of_k8s_masters_no_etcd}"
  number_of_etcd                = "${var.number_of_etcd}"

@@ -49,6 +51,7 @@ module "compute" {
  k8s_node_fips               = "${module.ips.k8s_node_fips}"
  bastion_fips                = "${module.ips.bastion_fips}"
  supplementary_master_groups = "${var.supplementary_master_groups}"
  supplementary_node_groups   = "${var.supplementary_node_groups}"

  network_id = "${module.network.router_id}"
}
@@ -59,6 +59,17 @@ resource "openstack_compute_secgroup_v2" "k8s" {
    self = true
  }
}
resource "openstack_compute_secgroup_v2" "worker" {
  name        = "${var.cluster_name}-k8s-worker"
  description = "${var.cluster_name} - Kubernetes worker nodes"

  rule {
    ip_protocol = "tcp"
    from_port   = "30000"
    to_port     = "32767"
    cidr        = "0.0.0.0/0"
  }
}

resource "openstack_compute_instance_v2" "bastion" {
  name = "${var.cluster_name}-bastion-${count.index+1}"

@@ -91,6 +102,7 @@ resource "openstack_compute_instance_v2" "bastion" {
resource "openstack_compute_instance_v2" "k8s_master" {
  name              = "${var.cluster_name}-k8s-master-${count.index+1}"
  count             = "${var.number_of_k8s_masters}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_master}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -120,6 +132,7 @@ resource "openstack_compute_instance_v2" "k8s_master" {
resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
  name              = "${var.cluster_name}-k8s-master-ne-${count.index+1}"
  count             = "${var.number_of_k8s_masters_no_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_master}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -148,6 +161,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
resource "openstack_compute_instance_v2" "etcd" {
  name              = "${var.cluster_name}-etcd-${count.index+1}"
  count             = "${var.number_of_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_etcd}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -169,6 +183,7 @@ resource "openstack_compute_instance_v2" "etcd" {
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
  name              = "${var.cluster_name}-k8s-master-nf-${count.index+1}"
  count             = "${var.number_of_k8s_masters_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_master}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -193,6 +208,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
  name              = "${var.cluster_name}-k8s-master-ne-nf-${count.index+1}"
  count             = "${var.number_of_k8s_masters_no_floating_ip_no_etcd}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_master}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -216,6 +232,7 @@ resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
resource "openstack_compute_instance_v2" "k8s_node" {
  name              = "${var.cluster_name}-k8s-node-${count.index+1}"
  count             = "${var.number_of_k8s_nodes}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_node}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -226,12 +243,13 @@ resource "openstack_compute_instance_v2" "k8s_node" {

  security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
                     "${openstack_compute_secgroup_v2.bastion.name}",
                     "${openstack_compute_secgroup_v2.worker.name}",
                     "default",
  ]

  metadata = {
    ssh_user         = "${var.ssh_user}"
    kubespray_groups = "kube-node,k8s-cluster"
    kubespray_groups = "kube-node,k8s-cluster,${var.supplementary_node_groups}"
    depends_on       = "${var.network_id}"
  }

@@ -244,6 +262,7 @@ resource "openstack_compute_instance_v2" "k8s_node" {
resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  name              = "${var.cluster_name}-k8s-node-nf-${count.index+1}"
  count             = "${var.number_of_k8s_nodes_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image}"
  flavor_id         = "${var.flavor_k8s_node}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"

@@ -253,12 +272,13 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  }

  security_groups = ["${openstack_compute_secgroup_v2.k8s.name}",
                     "${openstack_compute_secgroup_v2.worker.name}",
                     "default",
  ]

  metadata = {
    ssh_user         = "${var.ssh_user}"
    kubespray_groups = "kube-node,k8s-cluster,no-floating"
    kubespray_groups = "kube-node,k8s-cluster,no-floating,${var.supplementary_node_groups}"
    depends_on       = "${var.network_id}"
  }

@@ -292,6 +312,7 @@ resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
  name              = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
  count             = "${var.number_of_gfs_nodes_no_floating_ip}"
  availability_zone = "${element(var.az_list, count.index)}"
  image_name        = "${var.image_gfs}"
  flavor_id         = "${var.flavor_gfs_node}"
  key_pair          = "${openstack_compute_keypair_v2.k8s.name}"
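The 30000-32767 range opened by the new `worker` security group corresponds to Kubernetes' default NodePort service range. A hedged way to sanity-check the rule after `terraform apply`, assuming the `openstack` CLI is configured (the cluster name is illustrative):

```sh
openstack security group show example-k8s-worker
```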
@@ -1,5 +1,9 @@
variable "cluster_name" {}

variable "az_list" {
  type = "list"
}

variable "number_of_k8s_masters" {}

variable "number_of_k8s_masters_no_etcd" {}

@@ -59,3 +63,7 @@ variable "bastion_fips" {
variable "supplementary_master_groups" {
  default = ""
}

variable "supplementary_node_groups" {
  default = ""
}
@@ -12,7 +12,7 @@ resource "openstack_networking_network_v2" "k8s" {
resource "openstack_networking_subnet_v2" "k8s" {
  name            = "${var.cluster_name}-internal-network"
  network_id      = "${openstack_networking_network_v2.k8s.id}"
  cidr            = "10.0.0.0/24"
  cidr            = "${var.subnet_cidr}"
  ip_version      = 4
  dns_nameservers = "${var.dns_nameservers}"
}
@@ -7,3 +7,5 @@ variable "cluster_name" {}
variable "dns_nameservers" {
  type = "list"
}

variable "subnet_cidr" {}
@@ -41,5 +41,6 @@ number_of_k8s_nodes_no_floating_ip = 4
# networking
network_name = "<network>"
external_net = "<UUID>"
subnet_cidr = "<cidr>"
floatingip_pool = "<pool>"
@@ -2,6 +2,12 @@ variable "cluster_name" {
  default = "example"
}

variable "az_list" {
  description = "List of Availability Zones available in your OpenStack cluster"
  type        = "list"
  default     = ["nova"]
}

variable "number_of_bastions" {
  default = 1
}

@@ -97,6 +103,12 @@ variable "network_name" {
  default = "internal"
}

variable "subnet_cidr" {
  description = "Subnet CIDR block."
  type        = "string"
  default     = "10.0.0.0/24"
}

variable "dns_nameservers" {
  description = "An array of DNS name server names used by hosts in this subnet."
  type        = "list"

@@ -116,3 +128,8 @@ variable "supplementary_master_groups" {
  description = "supplementary kubespray ansible groups for masters, such as kube-node"
  default     = ""
}

variable "supplementary_node_groups" {
  description = "supplementary kubespray ansible groups for worker nodes, such as kube-ingress"
  default     = ""
}
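Both new variables (`az_list`, `subnet_cidr`) carry defaults, so existing clusters keep their behavior; they can be overridden per run. A minimal sketch with illustrative values:

```sh
terraform apply \
  -var 'az_list=["az1","az2"]' \
  -var 'subnet_cidr=10.0.10.0/24'
```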
@@ -706,6 +706,10 @@ def query_list(hosts):

    for name, attrs, hostgroups in hosts:
        for group in set(hostgroups):
            # Ansible 2.6.2 stopped supporting empty group names: https://github.com/ansible/ansible/pull/42584/commits/d4cd474b42ed23d8f8aabb2a7f84699673852eaf
            # Empty group name defaults to "all" in Ansible < 2.6.2 so we alter empty group names to "all"
            if not group: group = "all"

            groups[group].setdefault('hosts', [])
            groups[group]['hosts'].append(name)
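To see the effect of the empty-group fix above, the dynamic inventory script can be run by hand; every group key in its JSON output should now be non-empty. A hedged sketch, assuming the script is executable in the current directory:

```sh
# prints a list of empty group names; expected output is []
./terraform.py --list | python -c 'import json,sys; print([k for k in json.load(sys.stdin) if not k])'
```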
@@ -52,13 +52,13 @@ You can modify how Kubespray sets up DNS for your cluster with the variables ``d
## dns_mode
``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:

#### dnsmasq_kubedns (default)
#### dnsmasq_kubedns
This installs an additional dnsmasq DaemonSet which gives more flexibility and lifts some
limitations (e.g. number of nameservers). Kubelet is instructed to use dnsmasq instead of kubedns/skydns.
It is configured to forward all DNS queries belonging to cluster services to kubedns/skydns. All
other queries are forwarded to the nameservers found in ``upstream_dns_servers`` or ``default_resolver``

#### kubedns
#### kubedns (default)
This does not install the dnsmasq DaemonSet and instructs kubelet to directly use kubedns/skydns for
all queries.
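Given this default change, deployments that relied on the dnsmasq layer need to pin the mode explicitly. A minimal sketch of selecting it at deploy time (the inventory path is illustrative):

```sh
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b \
  -e dns_mode=dnsmasq_kubedns
```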
@@ -38,9 +38,9 @@ See more details in the [ansible guide](ansible.md).
Adding nodes
------------

You may want to add **worker** nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.

- Add the new worker node to your inventory under kube-node (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `scale.yml` for `cluster.yml`:

    ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v \

@@ -51,11 +51,26 @@ Remove nodes

You may want to remove **worker** nodes from your existing cluster. This can be done by re-running the `remove-node.yml` playbook. First, all nodes will be drained, then some kubernetes services are stopped and some certificates deleted, and finally kubectl is run to delete these nodes. This can be combined with the add-node function. This is generally helpful when doing something like autoscaling your clusters. Of course, if a node is not working, you can remove the node and install it again.

- Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
- Run the ansible-playbook command, substituting `remove-node.yml`:
Add worker nodes to the list under kube-node if you want to delete them (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).

    ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
    --private-key=~/.ssh/private_key

We support two ways to select the nodes:

- Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node you want to delete.
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key
--private-key=~/.ssh/private_key \
--extra-vars "node=nodename,nodename2"
```
or
- Use `--limit nodename,nodename2` to select the node
```
ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -b -v \
--private-key=~/.ssh/private_key \
--limit nodename,nodename2
```

Connecting to Kubernetes
@@ -3,7 +3,7 @@ OpenStack

To deploy kubespray on [OpenStack](https://www.openstack.org/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.

After that make sure to source in your OpenStack credentials like you would do when using `nova-client` by using `source path/to/your/openstack-rc`.
After that make sure to source in your OpenStack credentials like you would do when using `nova-client` or `neutron-client` by using `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.

The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.

@@ -12,35 +12,34 @@ Unless you are using calico you can now run the playbook.

**Additional step needed when using calico:**

Calico does not encapsulate all packets with the hosts ip addresses. Instead the packets will be routed with the PODs ip addresses directly.
Calico does not encapsulate all packets with the hosts' ip addresses. Instead the packets will be routed with the PODs ip addresses directly.

OpenStack will filter and drop all packets from ips it does not know to prevent spoofing.

In order to make calico work on OpenStack you will need to tell OpenStack to allow calico's packets by allowing the network it uses.

First you will need the ids of your OpenStack instances that will run kubernetes:

    nova list --tenant Your-Tenant
    openstack server list --project YOUR_PROJECT
    +--------------------------------------+--------+----------------------------------+--------+-------------+
    | ID                                   | Name   | Tenant ID                        | Status | Power State |
    +--------------------------------------+--------+----------------------------------+--------+-------------+
    | e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
    | 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |

Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports:
Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (though they are now configured through using OpenStack):

    neutron port-list -c id -c device_id
    openstack port list -c id -c device_id --project YOUR_PROJECT
    +--------------------------------------+--------------------------------------+
    | id                                   | device_id                            |
    +--------------------------------------+--------------------------------------+
    | 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
    | e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |

Given the port ids on the left, you can set the `allowed_address_pairs` in neutron.
Note that you have to allow both of `kube_service_addresses` (default `10.233.0.0/18`)
and `kube_pods_subnet` (default `10.233.64.0/18`.)
Given the port ids on the left, you can set the two `allowed_address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`.)

    # allow kube_service_addresses and kube_pods_subnet network
    neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
    neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18 ip_address=10.233.64.0/18
    openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address ip_address=10.233.0.0/18,ip_address=10.233.64.0/18
    openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address ip_address=10.233.0.0/18,ip_address=10.233.64.0/18

Now you can finally run the playbook.
@@ -81,3 +81,61 @@ kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted out of an abundance of caution
for impact to user deployed pods.

### Component-based upgrades

A deployer may want to upgrade specific components in order to minimize risk
or save time. This strategy is not covered by CI as of this writing, so it is
not guaranteed to work.

These commands are useful only for upgrading fully-deployed, healthy, existing
hosts. This will definitely not work for undeployed or partially deployed
hosts.

Upgrade docker:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```

Upgrade etcd:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```

Upgrade vault:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=vault
```

Upgrade kubelet:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```

Upgrade Kubernetes master components:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```

Upgrade network plugins:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```

Upgrade all add-ons:

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```

Upgrade just helm (assuming `helm_enabled` is true):

```
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```
@@ -8,8 +8,8 @@
    version: "{{ item.version }}"
    state: "{{ item.state }}"
  with_items:
    - { state: "present", name: "docker", version: "3.2.1" }
    - { state: "present", name: "docker-compose", version: "1.21.0" }
    - { state: "present", name: "docker", version: "3.4.1" }
    - { state: "present", name: "docker-compose", version: "1.21.2" }

- name: CephFS Provisioner | Check Go version
  shell: |

@@ -35,19 +35,19 @@
- name: CephFS Provisioner | Clone repo
  git:
    repo: https://github.com/kubernetes-incubator/external-storage.git
    dest: "~/go/src/github.com/kubernetes-incubator"
    version: a71a49d4
    clone: no
    dest: "~/go/src/github.com/kubernetes-incubator/external-storage"
    version: 06fddbe2
    clone: yes
    update: yes

- name: CephFS Provisioner | Build image
  shell: |
    cd ~/go/src/github.com/kubernetes-incubator/external-storage
    REGISTRY=quay.io/kubespray/ VERSION=a71a49d4 make ceph/cephfs
    REGISTRY=quay.io/kubespray/ VERSION=06fddbe2 make ceph/cephfs

- name: CephFS Provisioner | Push image
  docker_image:
    name: quay.io/kubespray/cephfs-provisioner:a71a49d4
    name: quay.io/kubespray/cephfs-provisioner:06fddbe2
    push: yes
  retries: 10
@@ -131,3 +131,6 @@ bin_dir: /usr/local/bin

# The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
#kube_read_only_port: 10255

# Whether CoreOS should auto-upgrade; default is true
#coreos_auto_upgrade: true
@@ -19,7 +19,7 @@ kube_users_dir: "{{ kube_config_dir }}/users"
kube_api_anonymous_auth: true

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.10.2
kube_version: v1.11.2

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
@ -67,25 +67,21 @@ kube_users:
|
|||
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
|
||||
kube_network_plugin: calico
|
||||
|
||||
# weave's network password for encryption
|
||||
# if null then no network encryption
|
||||
# you can use --extra-vars to pass the password in command line
|
||||
weave_password: EnterPasswordHere
|
||||
|
||||
# Weave uses consensus mode by default
|
||||
# Enabling seed mode allow to dynamically add or remove hosts
|
||||
# https://www.weave.works/docs/net/latest/ipam/
|
||||
weave_mode_seed: false
|
||||
|
||||
# This two variable are automatically changed by the weave's role, do not manually change these values
|
||||
# To reset values :
|
||||
# weave_seed: uninitialized
|
||||
# weave_peers: uninitialized
|
||||
weave_seed: uninitialized
|
||||
weave_peers: uninitialized
|
||||
|
||||
# Set the MTU of Weave (default 1376, Jumbo Frames: 8916)
|
||||
weave_mtu: 1376
|
||||
# Weave deployment
|
||||
# weave_password: ~
|
||||
# weave_checkpoint_disable: false
|
||||
# weave_conn_limit: 100
|
||||
# weave_hairpin_mode: true
|
||||
# weave_ipalloc_range: {{ kube_pods_subnet }}
|
||||
# weave_expect_npc: {{ enable_network_policy }}
|
||||
# weave_kube_peers: ~
|
||||
# weave_ipalloc_init: ~
|
||||
# weave_expose_ip: ~
|
||||
# weave_metrics_addr: ~
|
||||
# weave_status_addr: ~
|
||||
# weave_mtu: 1376
|
||||
# weave_no_masq_local: true
|
||||
# weave_extra_args: ~
|
||||
|
||||
# Enable kubernetes network policies
|
||||
enable_network_policy: false
|
||||
|
@ -140,12 +136,21 @@ dns_domain: "{{ cluster_name }}"
|
|||
# Path used to store Docker data
|
||||
docker_daemon_graph: "/var/lib/docker"
|
||||
|
||||
## Used to set docker daemon iptables options to true
|
||||
#docker_iptables_enabled: "true"
|
||||
|
||||
## A string of extra options to pass to the docker daemon.
|
||||
## This string should be exactly as you wish it to appear.
|
||||
## An obvious use case is allowing insecure-registry access
|
||||
## to self hosted registries like so:
|
||||
|
||||
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} {{ docker_log_opts }}"
|
||||
docker_options: >
|
||||
--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} {{ docker_log_opts }}
|
||||
{% if ansible_architecture == "aarch64" and ansible_os_family == "RedHat" %}
|
||||
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current
|
||||
--default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd
|
||||
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current --signature-verification=false
|
||||
{% endif %}
|
||||
docker_bin_dir: "/usr/bin"
|
||||
|
||||
## If non-empty will override default system MounFlags value.
|
||||
|
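The aarch64/RedHat branch above switches dockerd to the distro-packaged runc and the systemd cgroup driver. A hedged way to confirm the rendered options took effect on such a host (not part of the playbooks; assumes a docker version that supports `docker info --format`):

```sh
# show the default runtime and cgroup driver dockerd is actually using
docker info --format '{{.DefaultRuntime}} {{.CgroupDriver}}'
```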
@@ -164,6 +169,9 @@ helm_deployment_type: host
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# audit log for kubernetes
kubernetes_audit: false

# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true
@@ -174,9 +182,6 @@ efk_enabled: false
# Helm deployment
helm_enabled: false

# Istio deployment
istio_enabled: false

# Registry deployment
registry_enabled: false
# registry_namespace: "{{ system_namespace }}"
@@ -192,19 +197,21 @@ local_volume_provisioner_enabled: false

# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "{{ system_namespace }}"
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors:
#   - 172.24.0.1:6789
#   - 172.24.0.2:6789
#   - 172.24.0.3:6789
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true

# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
# ingress_nginx_nodeselector:
#   node-role.kubernetes.io/master: "true"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
@@ -26,11 +26,6 @@
# node5
# node6

# [kube-ingress]
# node2
# node3

# [k8s-cluster:children]
# kube-master
# kube-node
# kube-ingress
@@ -1,199 +0,0 @@
#!/usr/bin/env python
DOCUMENTATION = '''
---
module: hashivault_pki_issue
version_added: "0.1"
short_description: Hashicorp Vault PKI issue module
description:
    - Module to issue PKI certs from Hashicorp Vault.
options:
    url:
        description:
            - url for vault
        default: to environment variable VAULT_ADDR
    ca_cert:
        description:
            - "path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate"
        default: to environment variable VAULT_CACERT
    ca_path:
        description:
            - "path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate : if ca_cert is specified, its value will take precedence"
        default: to environment variable VAULT_CAPATH
    client_cert:
        description:
            - "path to a PEM-encoded client certificate for TLS authentication to the Vault server"
        default: to environment variable VAULT_CLIENT_CERT
    client_key:
        description:
            - "path to an unencrypted PEM-encoded private key matching the client certificate"
        default: to environment variable VAULT_CLIENT_KEY
    verify:
        description:
            - "if set, do not verify presented TLS certificate before communicating with Vault server : setting this variable is not recommended except during testing"
        default: to environment variable VAULT_SKIP_VERIFY
    authtype:
        description:
            - "authentication type to use: token, userpass, github, ldap, approle"
        default: token
    token:
        description:
            - token for vault
        default: to environment variable VAULT_TOKEN
    username:
        description:
            - username to login to vault.
        default: to environment variable VAULT_USER
    password:
        description:
            - password to login to vault.
        default: to environment variable VAULT_PASSWORD
    secret:
        description:
            - secret to read.
    data:
        description:
            - Keys and values to write.
    update:
        description:
            - Update rather than overwrite.
        default: False
    min_ttl:
        description:
            - Issue new cert if existing cert has lower TTL expressed in hours or a percentage. Examples: 70800h, 50%
    force:
        description:
            - Force issue of new cert
'''
EXAMPLES = '''
---
- hosts: localhost
  tasks:
    - hashivault_write:
        secret: giant
        data:
            foo: foe
            fie: fum
'''


def main():
    argspec = hashivault_argspec()
    argspec['secret'] = dict(required=True, type='str')
    argspec['update'] = dict(required=False, default=False, type='bool')
    argspec['data'] = dict(required=False, default={}, type='dict')
    module = hashivault_init(argspec, supports_check_mode=True)
    result = hashivault_write(module)
    if result.get('failed'):
        module.fail_json(**result)
    else:
        module.exit_json(**result)


def _convert_to_seconds(original_value):
    try:
        value = str(original_value)
        seconds = 0
        if 'h' in value:
            ray = value.split('h')
            seconds = int(ray.pop(0)) * 3600
            value = ''.join(ray)
        if 'm' in value:
            ray = value.split('m')
            seconds += int(ray.pop(0)) * 60
            value = ''.join(ray)
        if value:
            ray = value.split('s')
            seconds += int(ray.pop(0))
        return seconds
    except Exception:
        pass
    return original_value


def hashivault_needs_refresh(old_data, min_ttl):
    print("Checking refresh")
    print_r(old_data)
    return False
    # if sorted(old_data.keys()) != sorted(new_data.keys()):
    #     return True
    # for key in old_data:
    #     old_value = old_data[key]
    #     new_value = new_data[key]
    #     if old_value == new_value:
    #         continue
    #     if key != 'ttl' and key != 'max_ttl':
    #         return True
    #     old_value = _convert_to_seconds(old_value)
    #     new_value = _convert_to_seconds(new_value)
    #     if old_value != new_value:
    #         return True
    # return False
    #

def hashivault_changed(old_data, new_data):
    if sorted(old_data.keys()) != sorted(new_data.keys()):
        return True
    for key in old_data:
        old_value = old_data[key]
        new_value = new_data[key]
        if old_value == new_value:
            continue
        if key != 'ttl' and key != 'max_ttl':
            return True
        old_value = _convert_to_seconds(old_value)
        new_value = _convert_to_seconds(new_value)
        if old_value != new_value:
            return True
    return False


from ansible.module_utils.hashivault import *


@hashiwrapper
def hashivault_write(module):
    result = {"changed": False, "rc": 0}
    params = module.params
    client = hashivault_auth_client(params)
    secret = params.get('secret')
    force = params.get('force', False)
    min_ttl = params.get('min_ttl', "100%")
    returned_data = None

    if secret.startswith('/'):
        secret = secret.lstrip('/')
    #else:
    #    secret = ('secret/%s' % secret)
    data = params.get('data')
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        changed = True
        write_data = data

        if params.get('update') or module.check_mode:
            # Do not move this read outside of the update
            read_data = client.read(secret) or {}
            read_data = read_data.get('data', {})

            write_data = dict(read_data)
            write_data.update(data)

            result['write_data'] = write_data
            result['read_data'] = read_data
            changed = hashivault_changed(read_data, write_data)
            if not changed:
                changed = hashivault_needs_refresh(read_data, min_ttl)

        if changed:
            if not module.check_mode:
                returned_data = client.write((secret), **write_data)

            if returned_data:
                result['data'] = returned_data
            result['msg'] = "Secret %s written" % secret
        result['changed'] = changed
    return result


if __name__ == '__main__':
    main()
@@ -5,7 +5,7 @@
  ansible_ssh_pipelining: true
  gather_facts: true

- hosts: etcd:k8s-cluster:vault:calico-rr
- hosts: "{{ node | default('etcd:k8s-cluster:vault:calico-rr') }}"
  vars_prompt:
    name: "delete_nodes_confirmation"
    prompt: "Are you sure you want to delete nodes state? Type 'yes' to delete nodes."

@@ -22,7 +22,7 @@
  roles:
    - { role: remove-node/pre-remove, tags: pre-remove }

- hosts: kube-node
- hosts: "{{ node | default('kube-node') }}"
  roles:
    - { role: kubespray-defaults }
    - { role: reset, tags: reset }
@@ -4,3 +4,6 @@ pip_python_coreos_modules:
  - six

override_system_hostname: true

coreos_auto_upgrade: true
@@ -18,7 +18,11 @@ mv -n pypy-$PYPY_VERSION-linux64 pypy

## library fixup
mkdir -p pypy/lib
ln -snf /lib64/libncurses.so.5.9 $BINDIR/pypy/lib/libtinfo.so.5
if [ -f /lib64/libncurses.so.5.9 ]; then
  ln -snf /lib64/libncurses.so.5.9 $BINDIR/pypy/lib/libtinfo.so.5
elif [ -f /lib64/libncurses.so.6.1 ]; then
  ln -snf /lib64/libncurses.so.6.1 $BINDIR/pypy/lib/libtinfo.so.5
fi

cat > $BINDIR/python <<EOF
#!/bin/bash
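The branch above exists because different CoreOS releases ship different libncurses sonames. A quick way to check which one a given host actually has before relying on the symlink:

```sh
# list the libncurses sonames present on the host
ls -l /lib64/libncurses.so.*
```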
@@ -62,3 +62,8 @@
  with_items: "{{pip_python_coreos_modules}}"
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ bin_dir }}"

- name: Bootstrap | Disable auto-upgrade
  shell: "systemctl stop locksmithd.service && systemctl mask --now locksmithd.service"
  when:
    - not coreos_auto_upgrade
@@ -17,7 +17,7 @@ dockerproject_repo_key_info:
dockerproject_repo_info:
  repos:

docker_dns_servers_strict: yes
docker_dns_servers_strict: true

docker_container_storage_setup: false

@@ -40,3 +40,6 @@ dockerproject_rh_repo_base_url: 'https://yum.dockerproject.org/repo/main/centos/
dockerproject_rh_repo_gpgkey: 'https://yum.dockerproject.org/gpg'
dockerproject_apt_repo_base_url: 'https://apt.dockerproject.org/repo'
dockerproject_apt_repo_gpgkey: 'https://apt.dockerproject.org/gpg'

# Used to set docker daemon iptables options
docker_iptables_enabled: "false"
@@ -9,10 +9,10 @@ docker_container_storage_setup_container_thinpool: docker-pool
docker_container_storage_setup_data_size: 40%FREE
docker_container_storage_setup_min_data_size: 2G
docker_container_storage_setup_chunk_size: 512K
docker_container_storage_setup_growpart: false
docker_container_storage_setup_auto_extend_pool: yes
docker_container_storage_setup_growpart: "false"
docker_container_storage_setup_auto_extend_pool: "yes"
docker_container_storage_setup_pool_autoextend_threshold: 60
docker_container_storage_setup_pool_autoextend_percent: 20
docker_container_storage_setup_device_wait_timeout: 60
docker_container_storage_setup_wipe_signatures: false
docker_container_storage_setup_wipe_signatures: "false"
docker_container_storage_setup_container_root_lv_size: 40%FREE
@@ -7,6 +7,7 @@
    - "{{ ansible_distribution|lower }}-{{ ansible_distribution_release }}.yml"
    - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml"
    - "{{ ansible_distribution|lower }}.yml"
    - "{{ ansible_os_family|lower }}-{{ ansible_architecture }}.yml"
    - "{{ ansible_os_family|lower }}.yml"
    - defaults.yml
  paths:
@@ -6,6 +6,7 @@
  with_items:
    - docker
    - docker-engine
    - docker.io
  when:
    - ansible_os_family == 'Debian'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))

@@ -19,6 +20,12 @@
    - docker-common
    - docker-engine
    - docker-selinux
    - docker-client
    - docker-client-latest
    - docker-latest
    - docker-latest-logrotate
    - docker-logrotate
    - docker-engine-selinux
  when:
    - ansible_os_family == 'RedHat'
    - (docker_versioned_pkg[docker_version | string] | search('docker-ce'))
@@ -26,7 +26,7 @@
- name: add upstream dns servers (only when dnsmasq is not used)
  set_fact:
    docker_dns_servers: "{{ docker_dns_servers + upstream_dns_servers|default([]) }}"
  when: dns_mode in ['kubedns', 'coredns', 'coreos_dual']
  when: dns_mode in ['kubedns', 'coredns', 'coredns_dual']

- name: add global searchdomains
  set_fact:

@@ -56,7 +56,7 @@

- name: check number of nameservers
  fail:
    msg: "Too many nameservers. You can relax this check by setting docker_dns_servers_strict=no and we will only use the first 3."
    msg: "Too many nameservers. You can relax this check by setting docker_dns_servers_strict=false in all.yml and we will only use the first 3."
  when: docker_dns_servers|length > 3 and docker_dns_servers_strict|bool

- name: rtrim number of nameservers to 3
@@ -1,6 +1,5 @@
[Service]
Environment="DOCKER_OPTS={{ docker_options | default('') }} \
--iptables=false"
Environment="DOCKER_OPTS={{ docker_options|default('') }} --iptables={{ docker_iptables_enabled | default('false') }}"
{% if docker_mount_flags is defined and docker_mount_flags != "" %}
MountFlags={{ docker_mount_flags }}
{% endif %}
@@ -9,6 +9,7 @@ docker_versioned_pkg:
  '1.12': docker-engine=1.12.6-0~debian-{{ ansible_distribution_release|lower }}
  '1.13': docker-engine=1.13.1-0~debian-{{ ansible_distribution_release|lower }}
  '17.03': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  '17.09': docker-ce=17.09.0~ce-0~debian-{{ ansible_distribution_release|lower }}
  'stable': docker-ce=17.03.2~ce-0~debian-{{ ansible_distribution_release|lower }}
  'edge': docker-ce=17.12.1~ce-0~debian-{{ ansible_distribution_release|lower }}
roles/docker/vars/redhat-aarch64.yml (new file): 28 changes

@@ -0,0 +1,28 @@
---
docker_kernel_min_version: '0'

# override defaults; 17.03 is missing for aarch64
docker_version: '1.13'

# http://mirror.centos.org/altarch/7/extras/aarch64/Packages/
# or do 'yum --showduplicates list docker'
docker_versioned_pkg:
  'latest': docker
  '1.12': docker-1.12.6-48.git0fdc778.el7
  '1.13': docker-1.13.1-63.git94f4240.el7

# https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package
# http://mirror.centos.org/altarch/7/extras/aarch64/Packages/

docker_package_info:
  pkg_mgr: yum
  pkgs:
    - name: "{{ docker_versioned_pkg[docker_version | string] }}"

docker_repo_key_info:
  pkg_key: ''
  repo_keys: []

docker_repo_info:
  pkg_repo: ''
  repos: []
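Because CentOS altarch only ships distro docker builds for aarch64, the pins above can be cross-checked against the mirror, as the file's own comment suggests:

```sh
# on an aarch64 CentOS host: list every docker build the configured repos offer
yum --showduplicates list docker
```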
@@ -11,6 +11,7 @@ docker_versioned_pkg:
  '1.12': docker-engine-1.12.6-1.el7.centos
  '1.13': docker-engine-1.13.1-1.el7.centos
  '17.03': docker-ce-17.03.2.ce-1.el7.centos
  '17.09': docker-ce-17.09.0.ce-1.el7.centos
  'stable': docker-ce-17.03.2.ce-1.el7.centos
  'edge': docker-ce-17.12.1.ce-1.el7.centos
@@ -8,6 +8,7 @@ docker_versioned_pkg:
  '1.12': docker-engine=1.12.6-0~ubuntu-{{ ansible_distribution_release|lower }}
  '1.13': docker-engine=1.13.1-0~ubuntu-{{ ansible_distribution_release|lower }}
  '17.03': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  '17.09': docker-ce=17.09.0~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  'stable': docker-ce=17.03.2~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
  'edge': docker-ce=17.12.1~ce-0~ubuntu-{{ ansible_distribution_release|lower }}
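The exact package strings above have to match what the docker apt repository actually publishes for each release. A hedged way to confirm on a Debian/Ubuntu host before bumping a pin:

```sh
# list every docker-ce version the configured apt repos offer
apt-cache madison docker-ce
```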
@@ -27,9 +27,9 @@ download_delegate: "{% if download_localhost %}localhost{% else %}{{groups['kube
image_arch: amd64

# Versions
kube_version: v1.10.2
kube_version: v1.11.2
kubeadm_version: "{{ kube_version }}"
etcd_version: v3.2.16
etcd_version: v3.2.18
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
calico_version: "v2.6.8"

@@ -39,21 +39,18 @@ calico_policy_version: "v1.0.3"
calico_rr_version: "v0.4.2"
flannel_version: "v0.10.0"
flannel_cni_version: "v0.3.0"
istio_version: "0.2.6"
vault_version: 0.10.1
weave_version: 2.3.0
weave_version: "2.4.0"
pod_infra_version: 3.0
contiv_version: 1.1.7
cilium_version: "v1.0.0-rc8"
cilium_version: "v1.1.2"

# Download URLs
istioctl_download_url: "https://storage.googleapis.com/istio-release/releases/{{ istio_version }}/istioctl/istioctl-linux"
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
vault_download_url: "https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_{{ image_arch }}.zip"

# Checksums
istioctl_checksum: fd703063c540b8c0ab943f478c05ab257d88ae27224c746a27d0526ddbf7c370
kubeadm_checksum: 394d7d340214c91d669186cf4f2110d8eb840ca965399b4d8b22d0545a60e377
kubeadm_checksum: 6b17720a65b8ff46efe92a5544f149c39a221910d89939838d75581d4e6924c0
vault_binary_checksum: 3c4d70ba71619a43229e65c67830e30e050eab7a81ac6b28325ff707e5914188

# Containers
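When `kube_version` changes, the pinned `kubeadm_checksum` has to be regenerated from the matching release artifact. A minimal sketch; the URL mirrors `kubeadm_download_url` with the values above substituted:

```sh
curl -L https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubeadm \
  | sha256sum
```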
@ -73,22 +70,6 @@ calico_policy_image_repo: "quay.io/calico/kube-controllers"
calico_policy_image_tag: "{{ calico_policy_version }}"
calico_rr_image_repo: "quay.io/calico/routereflector"
calico_rr_image_tag: "{{ calico_rr_version }}"
istio_proxy_image_repo: docker.io/istio/proxy
istio_proxy_image_tag: "{{ istio_version }}"
istio_proxy_init_image_repo: docker.io/istio/proxy_init
istio_proxy_init_image_tag: "{{ istio_version }}"
istio_ca_image_repo: docker.io/istio/istio-ca
istio_ca_image_tag: "{{ istio_version }}"
istio_mixer_image_repo: docker.io/istio/mixer
istio_mixer_image_tag: "{{ istio_version }}"
istio_pilot_image_repo: docker.io/istio/pilot
istio_pilot_image_tag: "{{ istio_version }}"
istio_proxy_debug_image_repo: docker.io/istio/proxy_debug
istio_proxy_debug_image_tag: "{{ istio_version }}"
istio_sidecar_initializer_image_repo: docker.io/istio/sidecar_initializer
istio_sidecar_initializer_image_tag: "{{ istio_version }}"
istio_statsd_image_repo: prom/statsd-exporter
istio_statsd_image_tag: latest
hyperkube_image_repo: "gcr.io/google-containers/hyperkube-{{ image_arch }}"
hyperkube_image_tag: "{{ kube_version }}"
pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"

@ -120,7 +101,7 @@ dnsmasq_image_tag: "{{ dnsmasq_version }}"
kubedns_version: 1.14.10
kubedns_image_repo: "gcr.io/google_containers/k8s-dns-kube-dns-{{ image_arch }}"
kubedns_image_tag: "{{ kubedns_version }}"
coredns_version: 1.1.2
coredns_version: 1.2.0
coredns_image_repo: "docker.io/coredns/coredns"
coredns_image_tag: "{{ coredns_version }}"
dnsmasq_nanny_image_repo: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-{{ image_arch }}"

@ -135,14 +116,14 @@ kubednsautoscaler_image_repo: "gcr.io/google_containers/cluster-proportional-aut
kubednsautoscaler_image_tag: "{{ kubednsautoscaler_version }}"
test_image_repo: busybox
test_image_tag: latest
elasticsearch_version: "v2.4.1"
elasticsearch_image_repo: "gcr.io/google_containers/elasticsearch"
elasticsearch_version: "v5.6.4"
elasticsearch_image_repo: "k8s.gcr.io/elasticsearch"
elasticsearch_image_tag: "{{ elasticsearch_version }}"
fluentd_version: "1.22"
fluentd_image_repo: "gcr.io/google_containers/fluentd-elasticsearch"
fluentd_version: "v2.0.4"
fluentd_image_repo: "k8s.gcr.io/fluentd-elasticsearch"
fluentd_image_tag: "{{ fluentd_version }}"
kibana_version: "v4.6.1"
kibana_image_repo: "gcr.io/google_containers/kibana"
kibana_version: "5.6.4"
kibana_image_repo: "docker.elastic.co/kibana/kibana"
kibana_image_tag: "{{ kibana_version }}"
helm_version: "v2.9.1"
helm_image_repo: "lachlanevenson/k8s-helm"

@ -156,18 +137,16 @@ registry_image_tag: "2.6"
registry_proxy_image_repo: "gcr.io/google_containers/kube-registry-proxy"
registry_proxy_image_tag: "0.4"
local_volume_provisioner_image_repo: "quay.io/external_storage/local-volume-provisioner"
local_volume_provisioner_image_tag: "v2.0.0"
cephfs_provisioner_image_repo: "quay.io/kubespray/cephfs-provisioner"
cephfs_provisioner_image_tag: "a71a49d4"
local_volume_provisioner_image_tag: "v2.1.0"
cephfs_provisioner_image_repo: "quay.io/external_storage/cephfs-provisioner"
cephfs_provisioner_image_tag: "v1.1.0-k8s1.10"
ingress_nginx_controller_image_repo: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller"
ingress_nginx_controller_image_tag: "0.14.0"
ingress_nginx_controller_image_tag: "0.18.0"
ingress_nginx_default_backend_image_repo: "gcr.io/google_containers/defaultbackend"
ingress_nginx_default_backend_image_tag: "1.4"
cert_manager_version: "v0.2.4"
cert_manager_version: "v0.4.1"
cert_manager_controller_image_repo: "quay.io/jetstack/cert-manager-controller"
cert_manager_controller_image_tag: "{{ cert_manager_version }}"
cert_manager_ingress_shim_image_repo: "quay.io/jetstack/cert-manager-ingress-shim"
cert_manager_ingress_shim_image_tag: "{{ cert_manager_version }}"

downloads:
  netcheck_server:

@ -207,83 +186,6 @@ downloads:
    mode: "0755"
    groups:
    - k8s-cluster
  istioctl:
    enabled: "{{ istio_enabled }}"
    file: true
    version: "{{ istio_version }}"
    dest: "istio/istioctl"
    sha256: "{{ istioctl_checksum }}"
    source_url: "{{ istioctl_download_url }}"
    url: "{{ istioctl_download_url }}"
    unarchive: false
    owner: "root"
    mode: "0755"
    groups:
    - kube-master
  istio_proxy:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_proxy_image_repo }}"
    tag: "{{ istio_proxy_image_tag }}"
    sha256: "{{ istio_proxy_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_proxy_init:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_proxy_init_image_repo }}"
    tag: "{{ istio_proxy_init_image_tag }}"
    sha256: "{{ istio_proxy_init_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_ca:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_ca_image_repo }}"
    tag: "{{ istio_ca_image_tag }}"
    sha256: "{{ istio_ca_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_mixer:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_mixer_image_repo }}"
    tag: "{{ istio_mixer_image_tag }}"
    sha256: "{{ istio_mixer_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_pilot:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_pilot_image_repo }}"
    tag: "{{ istio_pilot_image_tag }}"
    sha256: "{{ istio_pilot_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_proxy_debug:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_proxy_debug_image_repo }}"
    tag: "{{ istio_proxy_debug_image_tag }}"
    sha256: "{{ istio_proxy_debug_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_sidecar_initializer:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_sidecar_initializer_image_repo }}"
    tag: "{{ istio_sidecar_initializer_image_tag }}"
    sha256: "{{ istio_sidecar_initializer_digest_checksum|default(None) }}"
    groups:
    - kube-node
  istio_statsd:
    enabled: "{{ istio_enabled }}"
    container: true
    repo: "{{ istio_statsd_image_repo }}"
    tag: "{{ istio_statsd_image_tag }}"
    sha256: "{{ istio_statsd_digest_checksum|default(None) }}"
    groups:
    - kube-node
  hyperkube:
    enabled: true
    container: true
@ -569,7 +471,7 @@ downloads:
    tag: "{{ ingress_nginx_controller_image_tag }}"
    sha256: "{{ ingress_nginx_controller_digest_checksum|default(None) }}"
    groups:
    - kube-ingress
    - kube-node
  ingress_nginx_default_backend:
    enabled: "{{ ingress_nginx_enabled }}"
    container: true
@ -577,7 +479,7 @@ downloads:
    tag: "{{ ingress_nginx_default_backend_image_tag }}"
    sha256: "{{ ingress_nginx_default_backend_digest_checksum|default(None) }}"
    groups:
    - kube-ingress
    - kube-node
  cert_manager_controller:
    enabled: "{{ cert_manager_enabled }}"
    container: true
@ -586,14 +488,6 @@ downloads:
    sha256: "{{ cert_manager_controller_digest_checksum|default(None) }}"
    groups:
    - kube-node
  cert_manager_ingress_shim:
    enabled: "{{ cert_manager_enabled }}"
    container: true
    repo: "{{ cert_manager_ingress_shim_image_repo }}"
    tag: "{{ cert_manager_ingress_shim_image_tag }}"
    sha256: "{{ cert_manager_ingress_shim_digest_checksum|default(None) }}"
    groups:
    - kube-node

download_defaults:
  container: false

@ -20,6 +20,6 @@
  when:
    - not skip_downloads|default(false)
    - item.value.enabled
    - item.value.container
    - "{{ item.value.container | default(False) }}"
    - download_run_once
    - group_names | intersect(download.groups) | length

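The condition changes from a bare `item.value.container` to a `default(False)` lookup, so download entries that never define a `container` key no longer break the task. A minimal sketch of the same guard pattern (dict and task names are illustrative):

```yaml
# Sketch only: iterate a dict whose items may or may not define
# 'container'; default(False) keeps undefined keys from raising errors.
- name: Pull container downloads only
  debug:
    msg: "would pull {{ item.key }}"
  with_dict: "{{ downloads }}"
  when:
    - item.value.enabled
    - item.value.container | default(False)
```
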
@ -9,7 +9,7 @@

- name: Register docker images info
  raw: >-
    {{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
    {{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}" | tr '\n' ','
  no_log: true
  register: docker_images
  failed_when: false

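The new Go template wraps each `index` call in an `if` guard, because `docker inspect` fails on images that have no `RepoTags` (dangling layers) or no `RepoDigests` (locally built images). A standalone sketch of the same guarded call, for verification only:

```yaml
# Sketch only: rendered on the host, the format string becomes
#   {{ if .RepoTags }}{{ (index .RepoTags 0) }}{{ end }}
# which silently skips images without tags instead of erroring.
- name: Show tags for local images (illustrative)
  raw: >-
    docker images -q | xargs docker inspect -f
    "{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}"
  register: local_image_tags
  failed_when: false
```
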
@ -3,6 +3,9 @@
etcd_cluster_setup: true
etcd_events_cluster_setup: false

# Set to true to separate k8s events to a different etcd cluster
etcd_events_cluster_enabled: false

etcd_backup_prefix: "/var/backups"
etcd_data_dir: "/var/lib/etcd"
etcd_events_data_dir: "/var/lib/etcd-events"

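Offloading the high-churn events objects to their own etcd keeps the main cluster store small. A minimal sketch of enabling it from inventory (the file location is an example):

```yaml
# group_vars/all.yml (illustrative): run a second etcd cluster that
# stores only Kubernetes events, on the data dir defined above.
etcd_events_cluster_enabled: true
etcd_events_data_dir: "/var/lib/etcd-events"
```
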
@ -95,4 +95,9 @@ if [ -n "$HOSTS" ]; then
fi

# Install certs
if [ -e "$SSLDIR/ca-key.pem" ]; then
  # Keep the existing CA: drop the newly generated ca.pem/ca-key.pem
  rm -f ca.pem ca-key.pem
fi

mv *.pem ${SSLDIR}/

@ -62,5 +62,3 @@
  with_items: "{{ etcd_node_certs_needed|d([]) }}"
  when: inventory_hostname in etcd_node_cert_hosts
  notify: set etcd_secret_changed

- fail:

@ -19,11 +19,17 @@
  register: "etcd_client_cert_serial_result"
  changed_when: false
  when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
  tags:
    - master
    - network

- name: Set etcd_client_cert_serial
  set_fact:
    etcd_client_cert_serial: "{{ etcd_client_cert_serial_result.stdout }}"
  when: inventory_hostname in groups['k8s-cluster']|union(groups['etcd'])|union(groups['calico-rr']|default([]))|unique|sort
  tags:
    - master
    - network

- include_tasks: "install_{{ etcd_deployment_type }}.yml"
  when: is_etcd_master

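For context, a certificate serial like the one registered above can be read with `openssl`; this is a sketch only — the role's actual command lives just above this hunk and may differ:

```yaml
# Sketch only: read a cert serial into the variable that the
# set_fact above consumes (the certificate path is an example).
- name: Read etcd client cert serial
  command: openssl x509 -in /etc/ssl/etcd/ssl/node-{{ inventory_hostname }}.pem -noout -serial
  register: etcd_client_cert_serial_result
  changed_when: false
```
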
@ -8,13 +8,15 @@
    "member-" + inventory_hostname + ".pem"
  ] }}

#- include_tasks: ../../vault/tasks/shared/sync_file.yml
#  vars:
#    sync_file: "{{ item }}"
#    sync_file_dir: "{{ etcd_cert_dir }}"
#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
#    sync_file_is_cert: true
#  with_items: "{{ etcd_master_cert_list|d([]) }}"
- include_tasks: ../../vault/tasks/shared/sync_file.yml
  vars:
    sync_file: "{{ item }}"
    sync_file_dir: "{{ etcd_cert_dir }}"
    sync_file_hosts: [ "{{ inventory_hostname }}" ]
    sync_file_owner: kube
    sync_file_group: root
    sync_file_is_cert: true
  with_items: "{{ etcd_master_cert_list|d([]) }}"

- name: sync_etcd_certs | Set facts for etcd sync_file results
  set_fact:
@ -22,16 +24,16 @@
  with_items: "{{ sync_file_results|d([]) }}"
  when: item.no_srcs|bool

#- name: sync_etcd_certs | Unset sync_file_results after etcd certs sync
#  set_fact:
#    sync_file_results: []
#
#- include_tasks: ../../vault/tasks/shared/sync_file.yml
#  vars:
#    sync_file: ca.pem
#    sync_file_dir: "{{ etcd_cert_dir }}"
#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
#
#- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
#  set_fact:
#    sync_file_results: []
- name: sync_etcd_certs | Unset sync_file_results after etcd certs sync
  set_fact:
    sync_file_results: []

- include_tasks: ../../vault/tasks/shared/sync_file.yml
  vars:
    sync_file: ca.pem
    sync_file_dir: "{{ etcd_cert_dir }}"
    sync_file_hosts: [ "{{ inventory_hostname }}" ]

- name: sync_etcd_certs | Unset sync_file_results after ca.pem sync
  set_fact:
    sync_file_results: []

@ -4,30 +4,30 @@
  set_fact:
    etcd_node_cert_list: "{{ etcd_node_cert_list|default([]) + ['node-' + inventory_hostname + '.pem'] }}"

#- include_tasks: ../../vault/tasks/shared/sync_file.yml
#  vars:
#    sync_file: "{{ item }}"
#    sync_file_dir: "{{ etcd_cert_dir }}"
#    sync_file_hosts: [ "{{ inventory_hostname }}" ]
#    sync_file_is_cert: true
#  with_items: "{{ etcd_node_cert_list|d([]) }}"
#
- include_tasks: ../../vault/tasks/shared/sync_file.yml
  vars:
    sync_file: "{{ item }}"
    sync_file_dir: "{{ etcd_cert_dir }}"
    sync_file_hosts: [ "{{ inventory_hostname }}" ]
    sync_file_is_cert: true
  with_items: "{{ etcd_node_cert_list|d([]) }}"

- name: sync_etcd_node_certs | Set facts for etcd sync_file results
  set_fact:
    etcd_node_certs_needed: "{{ etcd_node_certs_needed|default([]) + [item.path] }}"
  with_items: "{{ sync_file_results|d([]) }}"
  when: item.no_srcs|bool

#- name: sync_etcd_node_certs | Unset sync_file_results after etcd node certs
#  set_fact:
#    sync_file_results: []
#
#- include_tasks: ../../vault/tasks/shared/sync_file.yml
#  vars:
#    sync_file: ca.pem
#    sync_file_dir: "{{ etcd_cert_dir }}"
#    sync_file_hosts: "{{ groups['etcd'] }}"
#
#- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
#  set_fact:
#    sync_file_results: []
- name: sync_etcd_node_certs | Unset sync_file_results after etcd node certs
  set_fact:
    sync_file_results: []

- include_tasks: ../../vault/tasks/shared/sync_file.yml
  vars:
    sync_file: ca.pem
    sync_file_dir: "{{ etcd_cert_dir }}"
    sync_file_hosts: "{{ groups['etcd'] }}"

- name: sync_etcd_node_certs | Unset sync_file_results after ca.pem
  set_fact:
    sync_file_results: []

roles/etcd/templates/etcd-events-rkt.service.j2 (new file, 31 lines)

@ -0,0 +1,31 @@
[Unit]
Description=etcd events rkt wrapper
Documentation=https://github.com/coreos/etcd
Wants=network.target

[Service]
Restart=on-failure
RestartSec=10s
TimeoutStartSec=0
LimitNOFILE=40000

ExecStart=/usr/bin/rkt run \
  --uuid-file-save=/var/run/etcd-events.uuid \
  --volume hosts,kind=host,source=/etc/hosts,readOnly=true \
  --mount volume=hosts,target=/etc/hosts \
  --volume=etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
  --mount=volume=etc-ssl-certs,target=/etc/ssl/certs \
  --volume=etcd-cert-dir,kind=host,source={{ etcd_cert_dir }},readOnly=true \
  --mount=volume=etcd-cert-dir,target={{ etcd_cert_dir }} \
  --volume=etcd-data-dir,kind=host,source={{ etcd_events_data_dir }},readOnly=false \
  --mount=volume=etcd-data-dir,target={{ etcd_events_data_dir }} \
  --set-env-file=/etc/etcd-events.env \
  --stage1-from-dir=stage1-fly.aci \
  {{ etcd_image_repo }}:{{ etcd_image_tag }} \
  --name={{ etcd_member_name | default("etcd-events") }}

ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/etcd-events.uuid
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/etcd-events.uuid

[Install]
WantedBy=multi-user.target

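Once the unit template above is rendered onto a host, enabling it follows the usual systemd flow; a sketch only, not a task from the role:

```yaml
# Sketch only: reload systemd and start the freshly installed unit.
- name: Enable and start etcd-events
  systemd:
    name: etcd-events
    enabled: true
    state: started
    daemon_reload: true
```
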
@ -60,6 +60,9 @@ dashboard_certs_secret_name: kubernetes-dashboard-certs
dashboard_tls_key_file: dashboard.key
dashboard_tls_cert_file: dashboard.crt

# Override dashboard default settings
dashboard_token_ttl: 900

# SSL
etcd_cert_dir: "/etc/ssl/etcd/ssl"
canal_cert_dir: "/etc/canal/certs"

@ -19,6 +19,7 @@
    - rbac_enabled or item.type not in rbac_resources
  tags:
    - dnsmasq
    - kubedns

# see https://github.com/kubernetes/kubernetes/issues/45084, only needed for "old" kube-dns
- name: Kubernetes Apps | Patch system:kube-dns ClusterRole

@ -39,3 +40,4 @@
    - rbac_enabled and kubedns_version|version_compare("1.11.0", "<", strict=True)
  tags:
    - dnsmasq
    - kubedns

@ -17,6 +17,9 @@
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade
    - dnsmasq
    - coredns
    - kubedns

- name: Kubernetes Apps | CoreDNS
  import_tasks: "tasks/coredns.yml"

@ -56,6 +59,8 @@
  delay: 5
  tags:
    - dnsmasq
    - coredns
    - kubedns

- name: Kubernetes Apps | Netchecker
  import_tasks: tasks/netchecker.yml

@ -2,7 +2,7 @@

- name: Kubernetes Apps | Check if netchecker-server manifest already exists
  stat:
    path: "{{ kube_config_dir }}/netchecker-server-deployment.yml.j2"
    path: "{{ kube_config_dir }}/netchecker-server-deployment.yml"
  register: netchecker_server_manifest
  tags:
    - facts

@ -22,16 +22,16 @@

- name: Kubernetes Apps | Lay Down Netchecker Template
  template:
    src: "{{item.file}}"
    src: "{{item.file}}.j2"
    dest: "{{kube_config_dir}}/{{item.file}}"
  with_items:
    - {file: netchecker-agent-ds.yml.j2, type: ds, name: netchecker-agent}
    - {file: netchecker-agent-hostnet-ds.yml.j2, type: ds, name: netchecker-agent-hostnet}
    - {file: netchecker-server-sa.yml.j2, type: sa, name: netchecker-server}
    - {file: netchecker-server-clusterrole.yml.j2, type: clusterrole, name: netchecker-server}
    - {file: netchecker-server-clusterrolebinding.yml.j2, type: clusterrolebinding, name: netchecker-server}
    - {file: netchecker-server-deployment.yml.j2, type: deployment, name: netchecker-server}
    - {file: netchecker-server-svc.yml.j2, type: svc, name: netchecker-service}
    - {file: netchecker-agent-ds.yml, type: ds, name: netchecker-agent}
    - {file: netchecker-agent-hostnet-ds.yml, type: ds, name: netchecker-agent-hostnet}
    - {file: netchecker-server-sa.yml, type: sa, name: netchecker-server}
    - {file: netchecker-server-clusterrole.yml, type: clusterrole, name: netchecker-server}
    - {file: netchecker-server-clusterrolebinding.yml, type: clusterrolebinding, name: netchecker-server}
    - {file: netchecker-server-deployment.yml, type: deployment, name: netchecker-server}
    - {file: netchecker-server-svc.yml, type: svc, name: netchecker-service}
  register: manifests
  when:
    - inventory_hostname == groups['kube-master'][0]

@ -11,7 +11,7 @@ data:
  .:53 {
      errors
      health
      kubernetes {{ cluster_name }} in-addr.arpa ip6.arpa {
      kubernetes {{ dns_domain }} in-addr.arpa ip6.arpa {
        pods insecure
        upstream /etc/resolv.conf
        fallthrough in-addr.arpa ip6.arpa

@ -34,6 +34,22 @@ spec:
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                k8s-app: coredns{{ coredns_ordinal_suffix | default('') }}
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - "true"
      containers:
      - name: coredns
        image: "{{ coredns_image_repo }}:{{ coredns_image_tag }}"

@ -166,6 +166,7 @@ spec:
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=http://my-address:port
        - --token-ttl={{ dashboard_token_ttl }}
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs

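`--token-ttl` is now driven by the `dashboard_token_ttl` default (900 s) introduced earlier in this diff, so the dashboard session lifetime becomes tunable per deployment. A sketch of overriding it (file location illustrative):

```yaml
# group_vars/k8s-cluster.yml (illustrative): keep dashboard sessions
# alive for one hour instead of the 15-minute default; 0 disables expiry.
dashboard_token_ttl: 3600
```
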
@ -30,7 +30,24 @@ spec:
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
        operator: Equal
        key: node-role.kubernetes.io/master
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                k8s-app: kubedns-autoscaler
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - "true"
      containers:
      - name: autoscaler
        image: "{{ kubednsautoscaler_image_repo }}:{{ kubednsautoscaler_image_tag }}"

@ -30,8 +30,25 @@ spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - effect: NoSchedule
        operator: Exists
      - effect: "NoSchedule"
        operator: "Equal"
        key: "node-role.kubernetes.io/master"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                k8s-app: kube-dns
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - "true"
      volumes:
      - name: kube-dns-config
        configMap:

@ -1,9 +1,12 @@
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efk
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: efk

@ -6,3 +6,4 @@ metadata:
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

@ -1,15 +1,17 @@
---
# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-elasticsearch/es-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.2/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging-v1
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: "{{ elasticsearch_image_tag }}"
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:

@ -53,4 +55,10 @@ spec:
{% if rbac_enabled %}
      serviceAccountName: efk
{% endif %}
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true

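The privileged init container exists because Elasticsearch 5.x refuses to start unless `vm.max_map_count` is at least 262144. If privileged containers are not acceptable, the same knob can be set on the hosts instead; a sketch under that assumption:

```yaml
# Sketch only: raise the kernel parameter on every node up front so the
# init container's sysctl call becomes a no-op.
- name: Raise vm.max_map_count for Elasticsearch
  sysctl:
    name: vm.max_map_count
    value: "262144"
    state: present
    sysctl_set: true
```
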
@ -1,7 +1,7 @@
---
fluentd_cpu_limit: 0m
fluentd_mem_limit: 200Mi
fluentd_mem_limit: 500Mi
fluentd_cpu_requests: 100m
fluentd_mem_requests: 200Mi
fluentd_config_dir: /etc/kubernetes/fluentd
fluentd_config_file: fluentd.conf
fluentd_config_dir: /etc/fluent/config.d
# fluentd_config_file: fluentd.conf

@ -1,10 +1,19 @@
---
# https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.10/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: "kube-system"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  {{ fluentd_config_file }}: |
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID

@ -18,7 +27,6 @@ data:
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    # Maintainer: Jimmi Dyson <jimmidyson@gmail.com>
    #
    # Example
    # =======
@ -99,63 +107,87 @@ data:
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    #
    # TODO: Propagate the labels associated with a container along with its logs
    # so users can query logs using labels as well as or instead of the pod name
    # and container name. This is simply done via configuration of the Kubernetes
    # fluentd plugin but requires secrets to be enabled in the fluent pod. This is a
    # problem yet to be solved as secrets are not usable in static pods which the fluentd
    # pod must be until a per-node controller is available in Kubernetes.
    # Prevent fluentd from handling records containing its own logs. Otherwise
    # it can lead to an infinite loop, when error in sending one message generates
    # another message which also fails to be sent and so on.
    <match fluent.**>
      type null
    </match>
    # Example:

    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      type tail
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      type tail
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/es-salt.pos
      pos_file /var/log/salt.pos
      tag salt
    </source>

    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      type tail
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/es-startupscript.log.pos
      tag startupscript
    </source>

    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      type tail
      @id docker.log
      @type tail
      format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/es-docker.log.pos
      tag docker
    </source>

    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      type tail
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
@ -163,13 +195,16 @@ data:
      pos_file /var/log/es-etcd.log.pos
      tag etcd
    </source>

    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.

    # Example:
    # I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      type tail
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -179,10 +214,12 @@ data:
      pos_file /var/log/es-kubelet.log.pos
      tag kubelet
    </source>

    # Example:
    # I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      type tail
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -192,10 +229,12 @@ data:
      pos_file /var/log/es-kube-proxy.log.pos
      tag kube-proxy
    </source>

    # Example:
    # I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      type tail
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -205,10 +244,12 @@ data:
      pos_file /var/log/es-kube-apiserver.log.pos
      tag kube-apiserver
    </source>

    # Example:
    # I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      type tail
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -218,10 +259,12 @@ data:
      pos_file /var/log/es-kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>

    # Example:
    # W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      type tail
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -231,10 +274,12 @@ data:
      pos_file /var/log/es-kube-scheduler.log.pos
      tag kube-scheduler
    </source>

    # Example:
    # I1104 10:36:20.242766 5 rescheduler.go:73] Running Rescheduler
    <source>
      type tail
      @id rescheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -244,10 +289,12 @@ data:
      pos_file /var/log/es-rescheduler.log.pos
      tag rescheduler
    </source>

    # Example:
    # I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      type tail
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/

@ -257,10 +304,12 @@ data:
      pos_file /var/log/es-glbc.log.pos
      tag glbc
    </source>

    # Example:
    # I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      type tail
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
@ -270,59 +319,123 @@ data:
      pos_file /var/log/es-cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag docker
    </source>

    # <source>
    #   @id journald-container-runtime
    #   @type systemd
    #   filters [{ "_SYSTEMD_UNIT": "{% raw %}{{ container_runtime }}{% endraw %}.service" }]
    #   <storage>
    #     @type local
    #     persistent true
    #   </storage>
    #   read_from_head true
    #   tag container-runtime
    # </source>

    <source>
      @id journald-kubelet
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @type prometheus
    </source>

    <source>
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      type kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    ## Prometheus Exporter Plugin
    ## input plugin that exports metrics
    #<source>
    #  type prometheus
    #</source>
    #<source>
    #  type monitor_agent
    #</source>
    #<source>
    #  type forward
    #</source>
    ## input plugin that collects metrics from MonitorAgent
    #<source>
    #  @type prometheus_monitor
    #  <labels>
    #    host ${hostname}
    #  </labels>
    #</source>
    ## input plugin that collects metrics for output plugin
    #<source>
    #  @type prometheus_output_monitor
    #  <labels>
    #    host ${hostname}
    #  </labels>
    #</source>
    ## input plugin that collects metrics for in_tail plugin
    #<source>
    #  @type prometheus_tail_monitor
    #  <labels>
    #    host ${hostname}
    #  </labels>
    #</source>

    <match **>
      type elasticsearch
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      log_level info
      include_tag_key true
      host elasticsearch-logging
      port 9200
      logstash_format true
      # Set the chunk limit the same as for fluentd-gcp.
      buffer_chunk_limit 2M
      # Cap buffer memory usage to 2MiB/chunk * 32 chunks = 64 MiB
      buffer_queue_limit 32
      flush_interval 5s
      # Never wait longer than 5 minutes between retries.
      max_retry_wait 30
      # Disable the limit on the number of retries (retry forever).
      disable_retry_limit
      # Use multiple threads for processing.
      num_threads 8
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch-logging
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>

@ -1,32 +1,42 @@
---
# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-elasticsearch/es-controller.yaml
apiVersion: extensions/v1beta1
# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.2/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: "fluentd-es-v{{ fluentd_version }}"
  name: "fluentd-es-{{ fluentd_version }}"
  namespace: "kube-system"
  labels:
    k8s-app: fluentd-es
    version: "{{ fluentd_version }}"
    kubernetes.io/cluster-service: "true"
    version: "v{{ fluentd_version }}"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: "{{ fluentd_version }}"
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: "v{{ fluentd_version }}"
        version: "{{ fluentd_version }}"
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      priorityClassName: system-node-critical
{% if rbac_enabled %}
      serviceAccountName: efk
{% endif %}
      containers:
      - name: fluentd-es
        image: "{{ fluentd_image_repo }}:{{ fluentd_image_tag }}"
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent -c {{ fluentd_config_dir }}/{{ fluentd_config_file }} 2>&1 >> /var/log/fluentd.log'
        env:
        - name: FLUENTD_ARGS
          value: "--no-supervisor -q"
        resources:
          limits:
{% if fluentd_cpu_limit is defined and fluentd_cpu_limit != "0m" %}

@ -34,27 +44,24 @@ spec:
{% endif %}
            memory: {{ fluentd_mem_limit }}
          requests:
            cpu: {{ fluentd_cpu_requests }}
            cpu: {{ fluentd_cpu_requests }}
            memory: {{ fluentd_mem_requests }}
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
        - name: varlibdockercontainers
          mountPath: "{{ docker_daemon_graph }}/containers"
          readOnly: true
        - name: config
        - name: config-volume
          mountPath: "{{ fluentd_config_dir }}"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
      - name: varlibdockercontainers
        hostPath:
          path: {{ docker_daemon_graph }}/containers
      - name: config
      - name: config-volume
        configMap:
          name: fluentd-config
{% if rbac_enabled %}
      serviceAccountName: efk
{% endif %}

@ -4,3 +4,4 @@ kibana_mem_limit: 0M
kibana_cpu_requests: 100m
kibana_mem_requests: 0M
kibana_service_port: 5601
kibana_base_url: "/api/v1/namespaces/kube-system/services/kibana-logging/proxy"

@ -1,6 +1,6 @@
---
# https://raw.githubusercontent.com/kubernetes/kubernetes/v1.5.2/cluster/addons/fluentd-kibana/kibana-controller.yaml
apiVersion: extensions/v1beta1
# https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.10/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging

@ -36,10 +36,12 @@ spec:
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:{{ elasticsearch_service_port }}"
{% if kibana_base_url is defined and kibana_base_url != "" %}
          - name: "KIBANA_BASE_URL"
          - name: "SERVER_BASEPATH"
            value: "{{ kibana_base_url }}"
{% endif %}
          - name: XPACK_MONITORING_ENABLED
            value: "false"
          - name: XPACK_SECURITY_ENABLED
            value: "false"
        ports:
        - containerPort: 5601
          name: ui

@ -1,7 +1,10 @@
---
cephfs_provisioner_namespace: "kube-system"
cephfs_provisioner_namespace: "cephfs-provisioner"
cephfs_provisioner_cluster: ceph
cephfs_provisioner_monitors: []
cephfs_provisioner_monitors: ~
cephfs_provisioner_admin_id: admin
cephfs_provisioner_secret: secret
cephfs_provisioner_storage_class: cephfs
cephfs_provisioner_reclaim_policy: Delete
cephfs_provisioner_claim_root: /volumes
cephfs_provisioner_deterministic_names: true

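`cephfs_provisioner_monitors` changes from a list (previously joined with `join(',')` in the storage class template, see below) to a plain string rendered verbatim. A sketch of the new override format; hostnames are placeholders:

```yaml
# Illustrative values only: monitors are now given as one
# comma-separated string, not a YAML list.
cephfs_provisioner_monitors: "mon1.example.local:6789,mon2.example.local:6789,mon3.example.local:6789"
```
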
@ -1,5 +1,32 @@
---

- name: CephFS Provisioner | Remove legacy addon dir and manifests
  file:
    path: "{{ kube_config_dir }}/addons/cephfs_provisioner"
    state: absent
  when:
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade

- name: CephFS Provisioner | Remove legacy namespace
  shell: |
    {{ bin_dir }}/kubectl delete namespace {{ cephfs_provisioner_namespace }}
  ignore_errors: yes
  when:
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade

- name: CephFS Provisioner | Remove legacy storageclass
  shell: |
    {{ bin_dir }}/kubectl delete storageclass {{ cephfs_provisioner_storage_class }}
  ignore_errors: yes
  when:
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade

- name: CephFS Provisioner | Create addon dir
  file:
    path: "{{ kube_config_dir }}/addons/cephfs_provisioner"

@ -7,22 +34,24 @@
    owner: root
    group: root
    mode: 0755
  when:
    - inventory_hostname == groups['kube-master'][0]

- name: CephFS Provisioner | Create manifests
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/addons/cephfs_provisioner/{{ item.file }}"
  with_items:
    - { name: cephfs-provisioner-ns, file: cephfs-provisioner-ns.yml, type: ns }
    - { name: cephfs-provisioner-sa, file: cephfs-provisioner-sa.yml, type: sa }
    - { name: cephfs-provisioner-role, file: cephfs-provisioner-role.yml, type: role }
    - { name: cephfs-provisioner-rolebinding, file: cephfs-provisioner-rolebinding.yml, type: rolebinding }
    - { name: cephfs-provisioner-clusterrole, file: cephfs-provisioner-clusterrole.yml, type: clusterrole }
    - { name: cephfs-provisioner-clusterrolebinding, file: cephfs-provisioner-clusterrolebinding.yml, type: clusterrolebinding }
    - { name: cephfs-provisioner-rs, file: cephfs-provisioner-rs.yml, type: rs }
    - { name: cephfs-provisioner-secret, file: cephfs-provisioner-secret.yml, type: secret }
    - { name: cephfs-provisioner-sc, file: cephfs-provisioner-sc.yml, type: sc }
  register: cephfs_manifests
    - { name: 00-namespace, file: 00-namespace.yml, type: ns }
    - { name: secret-cephfs-provisioner, file: secret-cephfs-provisioner.yml, type: secret }
    - { name: sa-cephfs-provisioner, file: sa-cephfs-provisioner.yml, type: sa }
    - { name: clusterrole-cephfs-provisioner, file: clusterrole-cephfs-provisioner.yml, type: clusterrole }
    - { name: clusterrolebinding-cephfs-provisioner, file: clusterrolebinding-cephfs-provisioner.yml, type: clusterrolebinding }
    - { name: role-cephfs-provisioner, file: role-cephfs-provisioner.yml, type: role }
    - { name: rolebinding-cephfs-provisioner, file: rolebinding-cephfs-provisioner.yml, type: rolebinding }
    - { name: deploy-cephfs-provisioner, file: deploy-cephfs-provisioner.yml, type: rs }
    - { name: sc-cephfs-provisioner, file: sc-cephfs-provisioner.yml, type: sc }
  register: cephfs_provisioner_manifests
  when: inventory_hostname == groups['kube-master'][0]

- name: CephFS Provisioner | Apply manifests

@ -33,5 +62,5 @@
    resource: "{{ item.item.type }}"
    filename: "{{ kube_config_dir }}/addons/cephfs_provisioner/{{ item.item.file }}"
    state: "latest"
  with_items: "{{ cephfs_manifests.results }}"
  with_items: "{{ cephfs_provisioner_manifests.results }}"
  when: inventory_hostname == groups['kube-master'][0]

|
|||
---
|
||||
apiVersion: apps/v1
|
||||
kind: ReplicaSet
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: cephfs-provisioner-v{{ cephfs_provisioner_image_tag }}
|
||||
namespace: {{ cephfs_provisioner_namespace }}
|
|
@ -4,9 +4,12 @@ kind: StorageClass
|
|||
metadata:
|
||||
name: {{ cephfs_provisioner_storage_class }}
|
||||
provisioner: ceph.com/cephfs
|
||||
reclaimPolicy: {{ cephfs_provisioner_reclaim_policy }}
|
||||
parameters:
|
||||
cluster: {{ cephfs_provisioner_cluster }}
|
||||
monitors: {{ cephfs_provisioner_monitors | join(',') }}
|
||||
monitors: {{ cephfs_provisioner_monitors }}
|
||||
adminId: {{ cephfs_provisioner_admin_id }}
|
||||
adminSecretName: cephfs-provisioner-{{ cephfs_provisioner_admin_id }}-secret
|
||||
adminSecretName: cephfs-provisioner
|
||||
adminSecretNamespace: {{ cephfs_provisioner_namespace }}
|
||||
claimRoot: {{ cephfs_provisioner_claim_root }}
|
||||
deterministicNames: "{{ cephfs_provisioner_deterministic_names | bool | lower }}"
|
|
@ -2,7 +2,7 @@
kind: Secret
apiVersion: v1
metadata:
  name: cephfs-provisioner-{{ cephfs_provisioner_admin_id }}-secret
  name: cephfs-provisioner
  namespace: {{ cephfs_provisioner_namespace }}
type: Opaque
data:
@ -46,18 +46,20 @@ to limit the quota of persistent volumes.

### Simple directories

``` bash
for vol in vol6 vol7 vol8; do
  mkdir /mnt/disks/$vol
done
```

This is also acceptable in a development environment, but there is no capacity
In a development environment, using `mount --bind` also works, but there is no capacity
management.

### Block volumeMode PVs

Create a symbolic link under the discovery directory to the block device on the node. To use
raw block devices in pods, the BlockVolume feature gate must be enabled.
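For the block-device case, the symlink can be created by hand or through configuration management; a sketch in Ansible, with the device and discovery paths as placeholders:

```yaml
# Sketch only: expose a raw block device to the provisioner's
# discovery directory so it is offered as a Block volumeMode PV.
- name: Link a raw block device into the discovery dir
  file:
    src: /dev/sdb
    dest: /mnt/disks/sdb
    state: link
```
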
Usage notes
-----------

The beta PV.NodeAffinity field is used by default. If running against an older K8s
version, the useAlphaAPI flag must be set in the configMap.

The volume provisioner cannot calculate volume sizes correctly, so you should
delete the daemonset pod on the relevant host after creating volumes. The pod
will be recreated and read the size correctly.

@ -19,6 +19,9 @@ spec:
        version: {{ local_volume_provisioner_image_tag }}
    spec:
      serviceAccountName: local-volume-provisioner
      tolerations:
        - effect: NoSchedule
          operator: Exists
      containers:
        - name: provisioner
          image: {{ local_volume_provisioner_image_repo }}:{{ local_volume_provisioner_image_tag }}

@ -30,12 +33,17 @@ spec:
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: local-volume-provisioner
              mountPath: /etc/provisioner/config
              readOnly: true
            - name: local-volume-provisioner-hostpath-mnt-disks
              mountPath: {{ local_volume_provisioner_mount_dir }}
              mountPropagation: "HostToContainer"
      volumes:
        - name: local-volume-provisioner
          configMap:

@ -18,3 +18,6 @@ helm_skip_refresh: false

# Override values for the Tiller Deployment manifest.
# tiller_override: "key1=val1,key2=val2"

# Limit the maximum number of revisions saved per release. Use 0 for no limit.
# tiller_max_history: 0

@ -34,6 +34,7 @@
    {% if rbac_enabled %} --service-account=tiller{% endif %}
    {% if tiller_node_selectors is defined %} --node-selectors {{ tiller_node_selectors }}{% endif %}
    {% if tiller_override is defined %} --override {{ tiller_override }}{% endif %}
    {% if tiller_max_history is defined %} --history-max={{ tiller_max_history }}{% endif %}
  when: (helm_container is defined and helm_container.changed) or (helm_task_result is defined and helm_task_result.changed)

- name: Helm | Set up bash completion

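With the new flag wired into the install command above, capping Tiller's stored release history only needs the variable defined; a sketch with an illustrative value:

```yaml
# group_vars/k8s-cluster.yml (illustrative): keep only the last five
# revisions per release; unbounded history can bloat Tiller's configmaps.
tiller_max_history: 5
```
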
@ -1,6 +1,2 @@
---
cert_manager_namespace: "cert-manager"
cert_manager_cpu_requests: 10m
cert_manager_cpu_limits: 30m
cert_manager_memory_requests: 32Mi
cert_manager_memory_limits: 200Mi

@ -1,5 +1,23 @@
---

- name: Cert Manager | Remove legacy addon dir and manifests
  file:
    path: "{{ kube_config_dir }}/addons/cert_manager"
    state: absent
  when:
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade

- name: Cert Manager | Remove legacy namespace
  shell: |
    {{ bin_dir }}/kubectl delete namespace {{ cert_manager_namespace }}
  ignore_errors: yes
  when:
    - inventory_hostname == groups['kube-master'][0]
  tags:
    - upgrade

- name: Cert Manager | Create addon dir
  file:
    path: "{{ kube_config_dir }}/addons/cert_manager"

@ -7,20 +25,22 @@
    owner: root
    group: root
    mode: 0755
  when:
    - inventory_hostname == groups['kube-master'][0]

- name: Cert Manager | Create manifests
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/addons/cert_manager/{{ item.file }}"
  with_items:
    - { name: cert-manager-ns, file: cert-manager-ns.yml, type: ns }
    - { name: cert-manager-sa, file: cert-manager-sa.yml, type: sa }
    - { name: cert-manager-clusterrole, file: cert-manager-clusterrole.yml, type: clusterrole }
    - { name: cert-manager-clusterrolebinding, file: cert-manager-clusterrolebinding.yml, type: clusterrolebinding }
    - { name: cert-manager-issuer-crd, file: cert-manager-issuer-crd.yml, type: crd }
    - { name: cert-manager-clusterissuer-crd, file: cert-manager-clusterissuer-crd.yml, type: crd }
    - { name: cert-manager-certificate-crd, file: cert-manager-certificate-crd.yml, type: crd }
    - { name: cert-manager-deploy, file: cert-manager-deploy.yml, type: deploy }
    - { name: 00-namespace, file: 00-namespace.yml, type: ns }
    - { name: sa-cert-manager, file: sa-cert-manager.yml, type: sa }
    - { name: crd-certificate, file: crd-certificate.yml, type: crd }
    - { name: crd-clusterissuer, file: crd-clusterissuer.yml, type: crd }
    - { name: crd-issuer, file: crd-issuer.yml, type: crd }
    - { name: clusterrole-cert-manager, file: clusterrole-cert-manager.yml, type: clusterrole }
    - { name: clusterrolebinding-cert-manager, file: clusterrolebinding-cert-manager.yml, type: clusterrolebinding }
    - { name: deploy-cert-manager, file: deploy-cert-manager.yml, type: deploy }
  register: cert_manager_manifests
  when:
    - inventory_hostname == groups['kube-master'][0]

@ -5,7 +5,7 @@ metadata:
  name: cert-manager
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
rules:

@ -5,7 +5,7 @@ metadata:
  name: cert-manager
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
roleRef:

@ -5,7 +5,7 @@ metadata:
  name: certificates.certmanager.k8s.io
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
spec:

@ -5,7 +5,7 @@ metadata:
  name: clusterissuers.certmanager.k8s.io
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
spec:

@ -5,7 +5,7 @@ metadata:
  name: issuers.certmanager.k8s.io
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
spec:

@ -6,15 +6,19 @@ metadata:
  namespace: {{ cert_manager_namespace }}
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
      release: cert-manager
  template:
    metadata:
      labels:
        k8s-app: cert-manager
        app: cert-manager
        release: cert-manager
      annotations:
    spec:

@ -25,6 +29,7 @@ spec:
        imagePullPolicy: {{ k8s_image_pull_policy }}
        args:
        - --cluster-resource-namespace=$(POD_NAMESPACE)
        - --leader-election-namespace=$(POD_NAMESPACE)
        env:
        - name: POD_NAMESPACE
          valueFrom:

@ -32,20 +37,5 @@ spec:
              fieldPath: metadata.namespace
        resources:
          requests:
            cpu: {{ cert_manager_cpu_requests }}
            memory: {{ cert_manager_memory_requests }}
          limits:
            cpu: {{ cert_manager_cpu_limits }}
            memory: {{ cert_manager_memory_limits }}

      - name: ingress-shim
        image: {{ cert_manager_ingress_shim_image_repo }}:{{ cert_manager_ingress_shim_image_tag }}
        imagePullPolicy: {{ k8s_image_pull_policy }}
        resources:
          requests:
            cpu: {{ cert_manager_cpu_requests }}
            memory: {{ cert_manager_memory_requests }}
          limits:
            cpu: {{ cert_manager_cpu_limits }}
            memory: {{ cert_manager_memory_limits }}

            cpu: 10m
            memory: 32Mi

@ -6,6 +6,6 @@ metadata:
  namespace: {{ cert_manager_namespace }}
  labels:
    app: cert-manager
    chart: cert-manager-0.2.8
    chart: cert-manager-v0.4.1
    release: cert-manager
    heritage: Tiller
Some files were not shown because too many files have changed in this diff.