[terraform] use modern day equinix metal provider (#8748)

* [terraform] use modern day equinix metal provider

* [CI] ensure packet job tests metal
Cristian Calin 2022-04-27 20:34:13 +03:00 committed by GitHub
parent e6c4330e4e
commit 6cc5b38a2e
11 changed files with 58 additions and 59 deletions


@@ -60,11 +60,11 @@ tf-validate-openstack:
     PROVIDER: openstack
     CLUSTER: $CI_COMMIT_REF_NAME

-tf-validate-packet:
+tf-validate-metal:
   extends: .terraform_validate
   variables:
     TF_VERSION: $TERRAFORM_VERSION
-    PROVIDER: packet
+    PROVIDER: metal
     CLUSTER: $CI_COMMIT_REF_NAME

 tf-validate-aws:


@@ -60,9 +60,9 @@ Terraform will be used to provision all of the Equinix Metal resources with base
 Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):

 ```ShellSession
-cp -LRp contrib/terraform/packet/sample-inventory inventory/$CLUSTER
+cp -LRp contrib/terraform/metal/sample-inventory inventory/$CLUSTER
 cd inventory/$CLUSTER
-ln -s ../../contrib/terraform/packet/hosts
+ln -s ../../contrib/terraform/metal/hosts
 ```

 This will be the base for subsequent Terraform commands.
@@ -101,7 +101,7 @@ This helps when identifying which hosts are associated with each cluster.
 While the defaults in variables.tf will successfully deploy a cluster, it is recommended to set the following values:

 - cluster_name = the name of the inventory directory created above as $CLUSTER
-- packet_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above
+- metal_project_id = the Equinix Metal Project ID associated with the Equinix Metal API token above

 #### Enable localhost access
@@ -119,7 +119,7 @@ Once the Kubespray playbooks are run, a Kubernetes configuration file will be wr
 In the cluster's inventory folder, the following files might be created (either by Terraform
 or manually), to prevent you from pushing them accidentally they are in a
-`.gitignore` file in the `terraform/packet` directory :
+`.gitignore` file in the `terraform/metal` directory :

 - `.terraform`
 - `.tfvars`
@@ -135,7 +135,7 @@ plugins. This is accomplished as follows:
 ```ShellSession
 cd inventory/$CLUSTER
-terraform init ../../contrib/terraform/packet
+terraform init ../../contrib/terraform/metal
 ```

 This should finish fairly quickly telling you Terraform has successfully initialized and loaded necessary modules.
@@ -146,7 +146,7 @@ You can apply the Terraform configuration to your cluster with the following com
 issued from your cluster's inventory directory (`inventory/$CLUSTER`):

 ```ShellSession
-terraform apply -var-file=cluster.tfvars ../../contrib/terraform/packet
+terraform apply -var-file=cluster.tfvars ../../contrib/terraform/metal
 export ANSIBLE_HOST_KEY_CHECKING=False
 ansible-playbook -i hosts ../../cluster.yml
 ```
@@ -156,7 +156,7 @@ ansible-playbook -i hosts ../../cluster.yml
 You can destroy your new cluster with the following command issued from the cluster's inventory directory:

 ```ShellSession
-terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/packet
+terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/metal
 ```

 If you've started the Ansible run, it may also be a good idea to do some manual cleanup:


@@ -1,16 +1,15 @@
 # Configure the Equinix Metal Provider
-provider "packet" {
-  version = "~> 2.0"
+provider "metal" {
 }

-resource "packet_ssh_key" "k8s" {
+resource "metal_ssh_key" "k8s" {
   count      = var.public_key_path != "" ? 1 : 0
   name       = "kubernetes-${var.cluster_name}"
   public_key = chomp(file(var.public_key_path))
 }

-resource "packet_device" "k8s_master" {
-  depends_on = [packet_ssh_key.k8s]
+resource "metal_device" "k8s_master" {
+  depends_on = [metal_ssh_key.k8s]

   count    = var.number_of_k8s_masters
   hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@@ -18,12 +17,12 @@ resource "packet_device" "k8s_master" {
   facilities       = [var.facility]
   operating_system = var.operating_system
   billing_cycle    = var.billing_cycle
-  project_id       = var.packet_project_id
+  project_id       = var.metal_project_id
   tags             = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane", "etcd", "kube_node"]
 }

-resource "packet_device" "k8s_master_no_etcd" {
-  depends_on = [packet_ssh_key.k8s]
+resource "metal_device" "k8s_master_no_etcd" {
+  depends_on = [metal_ssh_key.k8s]

   count    = var.number_of_k8s_masters_no_etcd
   hostname = "${var.cluster_name}-k8s-master-${count.index + 1}"
@@ -31,12 +30,12 @@ resource "packet_device" "k8s_master_no_etcd" {
   facilities       = [var.facility]
   operating_system = var.operating_system
   billing_cycle    = var.billing_cycle
-  project_id       = var.packet_project_id
+  project_id       = var.metal_project_id
   tags             = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_control_plane"]
 }

-resource "packet_device" "k8s_etcd" {
-  depends_on = [packet_ssh_key.k8s]
+resource "metal_device" "k8s_etcd" {
+  depends_on = [metal_ssh_key.k8s]

   count    = var.number_of_etcd
   hostname = "${var.cluster_name}-etcd-${count.index + 1}"
@@ -44,12 +43,12 @@ resource "packet_device" "k8s_etcd" {
   facilities       = [var.facility]
   operating_system = var.operating_system
   billing_cycle    = var.billing_cycle
-  project_id       = var.packet_project_id
+  project_id       = var.metal_project_id
   tags             = ["cluster-${var.cluster_name}", "etcd"]
 }

-resource "packet_device" "k8s_node" {
-  depends_on = [packet_ssh_key.k8s]
+resource "metal_device" "k8s_node" {
+  depends_on = [metal_ssh_key.k8s]

   count    = var.number_of_k8s_nodes
   hostname = "${var.cluster_name}-k8s-node-${count.index + 1}"
@@ -57,7 +56,7 @@ resource "packet_device" "k8s_node" {
   facilities       = [var.facility]
   operating_system = var.operating_system
   billing_cycle    = var.billing_cycle
-  project_id       = var.packet_project_id
+  project_id       = var.metal_project_id
   tags             = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
 }
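Taken together, the renamed node resource reads roughly as the sketch below. The `plan` argument falls between the hunks and is not visible here, so wiring it to `var.plan_k8s_nodes` is an assumption based on the variables file:

```hcl
# Approximate shape of the resource after the rename; plan = var.plan_k8s_nodes
# is inferred from variables.tf rather than shown in the diff hunks above.
resource "metal_device" "k8s_node" {
  depends_on = [metal_ssh_key.k8s]

  count            = var.number_of_k8s_nodes
  hostname         = "${var.cluster_name}-k8s-node-${count.index + 1}"
  plan             = var.plan_k8s_nodes
  facilities       = [var.facility]
  operating_system = var.operating_system
  billing_cycle    = var.billing_cycle
  project_id       = var.metal_project_id
  tags             = ["cluster-${var.cluster_name}", "k8s_cluster", "kube_node"]
}
```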


@@ -0,0 +1,16 @@
+output "k8s_masters" {
+  value = metal_device.k8s_master.*.access_public_ipv4
+}
+
+output "k8s_masters_no_etc" {
+  value = metal_device.k8s_master_no_etcd.*.access_public_ipv4
+}
+
+output "k8s_etcds" {
+  value = metal_device.k8s_etcd.*.access_public_ipv4
+}
+
+output "k8s_nodes" {
+  value = metal_device.k8s_node.*.access_public_ipv4
+}


@@ -2,7 +2,7 @@
 cluster_name = "mycluster"

 # Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/
-packet_project_id = "Example-API-Token"
+metal_project_id = "Example-API-Token"

 # The public SSH key to be uploaded into authorized_keys in bare metal Equinix Metal nodes provisioned
 # leave this value blank if the public key is already setup in the Equinix Metal project
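For reference, a filled-in `cluster.tfvars` against the renamed variables could look like the sketch below; every value is a placeholder, and only variables declared in `variables.tf` are used:

```hcl
# Placeholder values for illustration only.
cluster_name     = "mycluster"
metal_project_id = "00000000-0000-0000-0000-000000000000"

# Leave blank if the key is already registered in the Equinix Metal project.
public_key_path = "~/.ssh/id_rsa.pub"

# Plans and node counts; the defaults in variables.tf also work.
plan_k8s_masters      = "c3.small.x86"
plan_k8s_nodes        = "c3.medium.x86"
number_of_k8s_masters = 1
number_of_k8s_nodes   = 2
```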


@@ -2,12 +2,12 @@ variable "cluster_name" {
   default = "kubespray"
 }

-variable "packet_project_id" {
+variable "metal_project_id" {
   description = "Your Equinix Metal project ID. See https://metal.equinix.com/developers/docs/accounts/"
 }

 variable "operating_system" {
-  default = "ubuntu_16_04"
+  default = "ubuntu_20_04"
 }

 variable "public_key_path" {
@@ -24,23 +24,23 @@ variable "facility" {
 }

 variable "plan_k8s_masters" {
-  default = "c2.medium.x86"
+  default = "c3.small.x86"
 }

 variable "plan_k8s_masters_no_etcd" {
-  default = "c2.medium.x86"
+  default = "c3.small.x86"
 }

 variable "plan_etcd" {
-  default = "c2.medium.x86"
+  default = "c3.small.x86"
 }

 variable "plan_k8s_nodes" {
-  default = "c2.medium.x86"
+  default = "c3.medium.x86"
 }

 variable "number_of_k8s_masters" {
-  default = 0
+  default = 1
 }

 variable "number_of_k8s_masters_no_etcd" {
@@ -52,6 +52,6 @@ variable "number_of_etcd" {
 }

 variable "number_of_k8s_nodes" {
-  default = 0
+  default = 1
 }


@@ -2,8 +2,8 @@
 terraform {
   required_version = ">= 0.12"
   required_providers {
-    packet = {
-      source = "terraform-providers/packet"
+    metal = {
+      source = "equinix/metal"
     }
   }
 }
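The new `required_providers` entry declares only a source, so `terraform init` will fetch the newest published `equinix/metal` release. Purely as an illustration, and not something this change adds, a version constraint could be pinned next to the source:

```hcl
# Illustrative only: the "~> 3.0" constraint is an example pin, not part of this PR.
terraform {
  required_version = ">= 0.12"
  required_providers {
    metal = {
      source  = "equinix/metal"
      version = "~> 3.0"
    }
  }
}
```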


@@ -1,16 +0,0 @@
-output "k8s_masters" {
-  value = packet_device.k8s_master.*.access_public_ipv4
-}
-
-output "k8s_masters_no_etc" {
-  value = packet_device.k8s_master_no_etcd.*.access_public_ipv4
-}
-
-output "k8s_etcds" {
-  value = packet_device.k8s_etcd.*.access_public_ipv4
-}
-
-output "k8s_nodes" {
-  value = packet_device.k8s_node.*.access_public_ipv4
-}


@@ -195,8 +195,8 @@ def parse_bool(string_form):
     raise ValueError('could not convert %r to a bool' % string_form)


-@parses('packet_device')
-def packet_device(resource, tfvars=None):
+@parses('metal_device')
+def metal_device(resource, tfvars=None):
     raw_attrs = resource['primary']['attributes']
     name = raw_attrs['hostname']
     groups = []
@@ -213,14 +213,14 @@ def packet_device(resource, tfvars=None):
         'state': raw_attrs['state'],
         # ansible
         'ansible_ssh_host': raw_attrs['network.0.address'],
-        'ansible_ssh_user': 'root',  # Use root by default in packet
+        'ansible_ssh_user': 'root',  # Use root by default in metal
         # generic
         'ipv4_address': raw_attrs['network.0.address'],
         'public_ipv4': raw_attrs['network.0.address'],
         'ipv6_address': raw_attrs['network.1.address'],
         'public_ipv6': raw_attrs['network.1.address'],
         'private_ipv4': raw_attrs['network.2.address'],
-        'provider': 'packet',
+        'provider': 'metal',
     }

     if raw_attrs['operating_system'] == 'flatcar_stable':
@@ -228,10 +228,10 @@ def packet_device(resource, tfvars=None):
         attrs.update({'ansible_ssh_user': 'core'})

     # add groups based on attrs
-    groups.append('packet_operating_system=' + attrs['operating_system'])
-    groups.append('packet_locked=%s' % attrs['locked'])
-    groups.append('packet_state=' + attrs['state'])
-    groups.append('packet_plan=' + attrs['plan'])
+    groups.append('metal_operating_system=' + attrs['operating_system'])
+    groups.append('metal_locked=%s' % attrs['locked'])
+    groups.append('metal_state=' + attrs['state'])
+    groups.append('metal_plan=' + attrs['plan'])

     # groups specific to kubespray
     groups = groups + attrs['tags']