Merge pull request #434 from kubespray/issue-426
Check only for AWS, wrote some docs on actually using AWS
commit bcec5553c5
5 changed files with 18 additions and 2 deletions

@@ -25,6 +25,7 @@ To deploy the cluster you can use :
 * [Ansible variables](docs/ansible.md)
 * [Cloud providers](docs/cloud.md)
 * [OpenStack](docs/openstack.md)
+* [AWS](docs/aws.md)
 * [Network plugins](#network-plugins)
 * [Roadmap](docs/roadmap.md)

docs/aws.md (new file)

@@ -0,0 +1,10 @@
+AWS
+===============
+
+To deploy kubespray on [AWS](https://aws.amazon.com/), uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
+
+Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes/kubernetes/tree/master/cluster/aws/templates/iam). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate each of them with its respective IAM role; nodes that will only run etcd do not need a role.
+
+The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
+
+You can now create your cluster!
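
As a quick illustration of the first step in the new doc, here is a minimal sketch of the `group_vars/all.yml` change it describes; only the file path and the `cloud_provider` variable come from the doc itself, the rest is illustrative.

```yaml
# group_vars/all.yml (excerpt)
# Uncommenting this and setting it to 'aws' makes the templates changed in
# this PR render the AWS-specific --cloud-provider flags.
cloud_provider: 'aws'
```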
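
For the inventory step, one way to keep the inventory hostnames identical to the AWS internal names while still telling Ansible how to connect is a per-host `host_vars` file. A sketch with placeholder values follows; only `ansible_ssh_host` and `ansible_ssh_user` come from the doc, while the file name, address, and SSH user are made up for illustration.

```yaml
# host_vars/ip-111-222-333-444.us-west-2.compute.internal.yml
# The file name matches the inventory hostname (the AWS internal DNS name);
# both values below are placeholders.
ansible_ssh_host: 203.0.113.10   # address Ansible should actually connect to
ansible_ssh_user: ubuntu         # SSH user for the chosen AMI
```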

@@ -35,6 +35,8 @@ spec:
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
 - --cloud-provider={{ cloud_provider }}
 - --cloud-config={{ kube_config_dir }}/cloud_config
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+- --cloud-provider={{ cloud_provider }}
 {% endif %}
 - 2>&1 >> {{ kube_log_dir }}/kube-apiserver.log
 volumeMounts:
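
With `cloud_provider` set to `'aws'`, the new `{% elif %}` branch renders a single extra flag into the kube-apiserver args; a sketch of the rendered fragment (note that no cloud_config file is referenced for AWS in this PR):

```yaml
# Rendered kube-apiserver args fragment when cloud_provider == 'aws' (sketch)
- --cloud-provider=aws
```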

@@ -18,8 +18,10 @@ spec:
 - --enable-hostpath-provisioner={{ kube_hostpath_dynamic_provisioner }}
 - --v={{ kube_log_level | default('2') }}
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
-- --cloud-provider=openstack
+- --cloud-provider={{cloud_provider}}
 - --cloud-config={{ kube_config_dir }}/cloud_config
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+- --cloud-provider={{cloud_provider}}
 {% endif %}
 livenessProbe:
 httpGet:
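
Because the hardcoded `openstack` value is replaced with `{{cloud_provider}}`, the OpenStack rendering is unchanged while AWS gains its own branch. A sketch of the controller-manager args each branch produces, assuming `kube_config_dir` resolves to `/etc/kubernetes` (an assumption, not taken from this diff):

```yaml
# Sketch of the rendered controller-manager args per provider
openstack:
  - --cloud-provider=openstack
  - --cloud-config=/etc/kubernetes/cloud_config
aws:
  - --cloud-provider=aws
```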

@@ -33,8 +33,9 @@ DOCKER_SOCKET="--docker-endpoint=unix:/var/run/weave/weave.sock"
 KUBE_ALLOW_PRIV="--allow-privileged=true"
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
 KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }} --cloud-config={{ kube_config_dir }}/cloud_config"
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }}"
 {% else %}
-{# TODO: gce and aws don't need the cloud provider to be set? #}
 KUBELET_CLOUDPROVIDER=""
 {% endif %}
 {% if ansible_service_mgr in ["sysvinit","upstart"] %}