From 5c7e309d13ec1c096ada65fd494a28a2ae8a2106 Mon Sep 17 00:00:00 2001
From: Raj Perera
Date: Tue, 11 Jul 2017 10:53:19 -0400
Subject: [PATCH] Add more instructions for setting up the AWS provider

---
 docs/aws.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/aws.md b/docs/aws.md
index 8bdbc06fa..e1e81331e 100644
--- a/docs/aws.md
+++ b/docs/aws.md
@@ -5,6 +5,10 @@ To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provi
 
 Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
 
+You also need to tag the resources in your VPC so that the AWS provider can make use of them. Tag the subnets and all instances that Kubernetes will run on with the key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb`, and the subnets for internal ELBs with the key `kubernetes.io/role/internal-elb`.
+
+Make sure your VPC has both DNS hostnames (`enableDnsHostnames`) and DNS resolution (`enableDnsSupport`) enabled.
+
 The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
 
 You can now create your cluster!
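
Below is a minimal sketch of the tagging and VPC settings the added paragraphs describe, using the AWS CLI (`aws ec2 create-tags` and `aws ec2 modify-vpc-attribute`). All resource IDs and the cluster name `my-cluster` are placeholders, and the `owned`/`1` tag values follow common convention rather than anything stated in the patch:

```sh
# Tag the subnets and all instances the cluster runs on with the cluster
# identifier; "owned" and "shared" are the conventional values for this key.
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned

# Mark subnets for external and internal ELB placement.
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources subnet-0fedcba9876543210 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1

# Enable DNS hostnames and DNS resolution on the VPC
# (modify-vpc-attribute accepts only one attribute per call).
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
  --enable-dns-hostnames '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
  --enable-dns-support '{"Value": true}'
```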