Added Missing AWS IAM Profiles and Policies
The AWS IAM profiles and policies required to run Kargo on AWS are no longer hosted in the main Kubernetes repository since kube-up was deprecated, so the files are moved into the Kargo repository.
parent 3256f4bc0f
commit 5ae85b9de5
5 changed files with 93 additions and 1 deletion
contrib/aws_iam/kubernetes-master-policy.json (new file, 27 lines)
@@ -0,0 +1,27 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    }
  ]
}
contrib/aws_iam/kubernetes-master-role.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
contrib/aws_iam/kubernetes-minion-policy.json (new file, 45 lines)
@@ -0,0 +1,45 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["route53:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
contrib/aws_iam/kubernetes-minion-role.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
@@ -3,7 +3,7 @@ AWS

To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.

-Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes/kubernetes/tree/master/cluster/aws/templates/iam). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
+Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-incubator/kargo/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.

The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
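For reference, the `cloud_provider` change described in the hunk above amounts to a single uncommented line in `group_vars/all.yml`. A minimal excerpt (the rest of the file is unchanged):

```yaml
# group_vars/all.yml (excerpt): uncomment and set the cloud provider
cloud_provider: 'aws'
```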
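The role and policy documents added under `contrib/aws_iam/` can be loaded with the AWS CLI. A minimal sketch, assuming the CLI is configured with credentials allowed to manage IAM; the role, policy, and instance-profile names below are illustrative, not mandated by Kargo:

```sh
cd contrib/aws_iam/

for role in kubernetes-master kubernetes-minion; do
  # Create the role with the trust policy that lets EC2 instances assume it
  aws iam create-role --role-name "$role" \
      --assume-role-policy-document "file://${role}-role.json"

  # Attach the inline policy with the permissions shown in the diffs above
  aws iam put-role-policy --role-name "$role" --policy-name "$role" \
      --policy-document "file://${role}-policy.json"

  # Wrap the role in an instance profile so it can be associated with instances at launch
  aws iam create-instance-profile --instance-profile-name "$role"
  aws iam add-role-to-instance-profile --instance-profile-name "$role" \
      --role-name "$role"
done
```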
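An inventory entry following the hostname convention described above might look like the sketch below. The group names, external IP addresses, and SSH user are placeholders; only `ansible_ssh_host` and `ansible_ssh_user` are taken from the documentation itself:

```ini
; inventory (excerpt): hostnames must match the AWS-internal hostnames
[kube-master]
ip-111-222-333-444.us-west-2.compute.internal ansible_ssh_host=203.0.113.10 ansible_ssh_user=admin

[kube-node]
ip-111-222-333-555.us-west-2.compute.internal ansible_ssh_host=203.0.113.11 ansible_ssh_user=admin
```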