Terraform - Remove the need for region specific reference data (#1962)
* Dynamically retrieve the latest `aws_bastion_ami` reference by querying AWS rather than hard-coding it
* Dynamically retrieve the list of availability zones instead of needing them hard-coded
* Limit availability zones to the first two, using the `slice` interpolation function
* Replace the hardcoded variable `aws_cluster_ami` with data provided by Terraform
* Move AMI selection to vars, so people don't need to edit create-infrastructure if they want another vendor image (as suggested by @atoms)
* Make the name of the data block agnostic of distribution, given that more than one distribution is supported
* Add documentation about other distros being supported and what to change in which location to make these changes
This commit is contained in:
parent
a0225507a0
commit
ca8a9c600a
4 changed files with 87 additions and 31 deletions
@@ -24,7 +24,7 @@ export AWS_DEFAULT_REGION="zzz"
```

- Rename `contrib/terraform/aws/terraform.tfvars.example` to `terraform.tfvars`
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use CoreOS as the base image. To change this behaviour, see the note "Using a distribution other than CoreOS" below.
- Allocate a new AWS Elastic IP. Use this for your `loadbalancer_apiserver_address` value (below)
- Create an AWS EC2 SSH key
- Run `terraform apply --var-file="credentials.tfvars"` or `terraform apply`, depending on whether you exported your AWS credentials
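For orientation, a minimal `credentials.tfvars` might look like the sketch below. The variable names here are assumptions inferred from the `var.AWS_DEFAULT_REGION` reference in the scripts; check `contrib/terraform/aws/` for the authoritative list, and use your own values:

```
# Hypothetical example values -- substitute your own credentials and region
AWS_ACCESS_KEY_ID     = "AKIA..."
AWS_SECRET_ACCESS_KEY = "..."
AWS_SSH_KEY_NAME      = "my-keypair"
AWS_DEFAULT_REGION    = "eu-central-1"
```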
@@ -36,18 +36,72 @@ terraform apply -var-file=credentials.tfvars -var 'loadbalancer_apiserver_addres

- Terraform automatically creates an Ansible inventory file called `hosts` for the created infrastructure, in the directory `inventory`

- Ansible automatically generates an SSH config file for your bastion hosts. To connect to hosts over SSH through the bastion host, use the generated `ssh-bastion.conf`. Ansible automatically detects the bastion and changes `ssh_args`:

```commandline
ssh -F ./ssh-bastion.conf user@$ip
```

- Once the infrastructure is created, you can run the kubespray playbooks and supply `inventory/hosts` with the `-i` flag.

Example (this one assumes you are using CoreOS):

```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_ssh_user=core -e bootstrap_os=coreos -b --become-user=root --flush-cache
```

***Using a distribution other than CoreOS***

If you want to use a distribution other than CoreOS, modify the search filters of the `data "aws_ami" "distro"` block in `variables.tf`.

For example, to use:
- Debian Jessie, replace `data "aws_ami" "distro"` in `variables.tf` with:

```
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["debian-jessie-amd64-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["379101102735"]
}
```
- Ubuntu 16.04, replace `data "aws_ami" "distro"` in `variables.tf` with:

```
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"]
}
```
- CentOS 7, replace `data "aws_ami" "distro"` in `variables.tf` with:

```
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["dcos-centos7-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["688023202711"]
}
```
**Troubleshooting**
@@ -8,6 +8,8 @@ provider "aws" {
  region = "${var.AWS_DEFAULT_REGION}"
}

data "aws_availability_zones" "available" {}

/*
* Calling modules which create the initial AWS VPC / AWS ELB
* and AWS IAM Roles for Kubernetes Deployment
@@ -18,7 +20,7 @@ module "aws-vpc" {

  aws_cluster_name         = "${var.aws_cluster_name}"
  aws_vpc_cidr_block       = "${var.aws_vpc_cidr_block}"
  aws_avail_zones          = "${slice(data.aws_availability_zones.available.names,0,2)}"
  aws_cidr_subnets_private = "${var.aws_cidr_subnets_private}"
  aws_cidr_subnets_public  = "${var.aws_cidr_subnets_public}"
  default_tags             = "${var.default_tags}"
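The `slice`/`element` combination used in these modules can be sketched in isolation. This is an illustrative fragment (Terraform 0.x interpolation syntax, matching the rest of this repo), not part of the PR itself:

```
# slice(list, from, to) keeps elements [from, to) -- here, the first two AZs.
# element(list, index) wraps around modulo length, so a third instance lands
# back in the first AZ.
#
# Assuming data.aws_availability_zones.available.names is
# ["us-west-2a", "us-west-2b", "us-west-2c"]:
#   slice(data.aws_availability_zones.available.names, 0, 2)
#     -> ["us-west-2a", "us-west-2b"]
#   element(slice(data.aws_availability_zones.available.names, 0, 2), 0)
#     -> "us-west-2a"
#   element(slice(data.aws_availability_zones.available.names, 0, 2), 2)
#     -> "us-west-2a"   (index wraps)
```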
@@ -31,7 +33,7 @@ module "aws-elb" {

  aws_cluster_name      = "${var.aws_cluster_name}"
  aws_vpc_id            = "${module.aws-vpc.aws_vpc_id}"
  aws_avail_zones       = "${slice(data.aws_availability_zones.available.names,0,2)}"
  aws_subnet_ids_public = "${module.aws-vpc.aws_subnet_ids_public}"
  aws_elb_api_port      = "${var.aws_elb_api_port}"
  k8s_secure_api_port   = "${var.k8s_secure_api_port}"
@@ -49,12 +51,13 @@ module "aws-iam" {

* Create Bastion Instances in AWS
*
*/

resource "aws_instance" "bastion-server" {
  ami                         = "${data.aws_ami.distro.id}"
  instance_type               = "${var.aws_bastion_size}"
  count                       = "${length(var.aws_cidr_subnets_public)}"
  associate_public_ip_address = true
  availability_zone           = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
  subnet_id                   = "${element(module.aws-vpc.aws_subnet_ids_public,count.index)}"
@@ -76,13 +79,13 @@ resource "aws_instance" "bastion-server" {
*/

resource "aws_instance" "k8s-master" {
  ami           = "${data.aws_ami.distro.id}"
  instance_type = "${var.aws_kube_master_size}"

  count = "${var.aws_kube_master_num}"

  availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
  subnet_id         = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -108,13 +111,13 @@ resource "aws_elb_attachment" "attach_master_nodes" {

resource "aws_instance" "k8s-etcd" {
  ami           = "${data.aws_ami.distro.id}"
  instance_type = "${var.aws_etcd_size}"

  count = "${var.aws_etcd_num}"

  availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
  subnet_id         = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"
@@ -132,12 +135,12 @@ resource "aws_instance" "k8s-etcd" {

resource "aws_instance" "k8s-worker" {
  ami           = "${data.aws_ami.distro.id}"
  instance_type = "${var.aws_kube_worker_size}"

  count = "${var.aws_kube_worker_num}"

  availability_zone = "${element(slice(data.aws_availability_zones.available.names,0,2),count.index)}"
  subnet_id         = "${element(module.aws-vpc.aws_subnet_ids_private,count.index)}"

  vpc_security_group_ids = [ "${module.aws-vpc.aws_security_group}" ]
@@ -162,7 +165,7 @@ resource "aws_instance" "k8s-worker" {
*/

data "template_file" "inventory" {
  template = "${file("${path.module}/templates/inventory.tpl")}"

  vars {
    public_ip_address_bastion = "${join("\n",formatlist("bastion ansible_host=%s" , aws_instance.bastion-server.*.public_ip))}"
    connection_strings_master = "${join("\n",formatlist("%s ansible_host=%s",aws_instance.k8s-master.*.tags.Name, aws_instance.k8s-master.*.private_ip))}"
@@ -5,10 +5,8 @@ aws_cluster_name = "devtest"

aws_vpc_cidr_block       = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public  = ["10.250.224.0/20","10.250.240.0/20"]

#Bastion Host
aws_bastion_size = "t2.medium"

@@ -23,8 +21,6 @@ aws_etcd_size = "t2.medium"

aws_kube_worker_num  = 4
aws_kube_worker_size = "t2.medium"

#Settings AWS ELB

aws_elb_api_port = 6443
@@ -20,6 +20,21 @@ variable "aws_cluster_name" {
  description = "Name of AWS Cluster"
}

data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["CoreOS-stable-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["595879546273"] #CoreOS
}

//AWS VPC Variables
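To see which AMI these filters resolve to for the configured region before creating any instances, one option (a sketch, not part of this PR) is a temporary output referencing the data source:

```
# Hypothetical helper: surfaces the AMI ID that the filters resolved to.
output "resolved_distro_ami" {
  value = "${data.aws_ami.distro.id}"
}
```

The resolved AMI ID is then printed after `terraform apply` (or via `terraform output resolved_distro_ami`), which makes it easy to sanity-check the name filter against the expected distribution.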
@@ -27,11 +42,6 @@ variable "aws_vpc_cidr_block" {
  description = "CIDR Block for VPC"
}

variable "aws_cidr_subnets_private" {
  description = "CIDR Blocks for private subnets in Availability Zones"
  type        = "list"
@@ -44,10 +54,6 @@ variable "aws_cidr_subnets_public" {

//AWS EC2 Settings

variable "aws_bastion_size" {
  description = "EC2 Instance Size of Bastion Host"
}
@@ -81,9 +87,6 @@ variable "aws_kube_worker_size" {
  description = "Instance size of Kubernetes Worker Nodes"
}

/*
* AWS ELB Settings
*