fix-up some spelling mistakes (#5202)

陈谭军 2019-09-26 14:27:08 +08:00 committed by Kubernetes Prow Robot
parent 1cf6a99df4
commit 3bcdf46937
10 changed files with 10 additions and 10 deletions


@@ -5,7 +5,7 @@ To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provi
 Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
-You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targetted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
+You would also need to tag the resources in your VPC accordingly for the aws provider to utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
 Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
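The tagging scheme described in this hunk can be sketched as follows; the tag values (including `owned`) are illustrative assumptions, not part of the commit:

```yaml
# Hypothetical tags for the subnets, route tables, and instances
# ($cluster_name and all values are placeholders):
kubernetes.io/cluster/$cluster_name: "owned"   # unique identifier for this cluster
kubernetes.io/role/elb: "1"                    # only on subnets targeted by external ELBs
kubernetes.io/role/internal-elb: "1"           # only on subnets targeted by internal ELBs
```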


@@ -40,7 +40,7 @@
 ...
 ```
-8. Copy and modify configs from kubespray `group_vars` folder to corresponging `group_vars` folder in your existent project.
+8. Copy and modify configs from kubespray `group_vars` folder to corresponding `group_vars` folder in your existent project.
 You could rename *all.yml* config to something else, i.e. *kubespray.yml* and create corresponding group in your inventory file, which will include all hosts groups related to kubernetes setup.
 9. Modify your ansible inventory file by adding mapping of your existent groups (if any) to kubespray naming.


@@ -19,7 +19,7 @@ Kubespray's roadmap
 - [ ] On AWS autoscaling, multi AZ
 - [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
 - [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
-- [x] **TLS boostrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
+- [x] **TLS bootstrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
 (related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
 https://github.com/kubernetes/kubernetes/issues/18112)


@@ -47,7 +47,7 @@ git checkout origin/master
 ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.6.0
 ```
-After a successul upgrade, the Server Version should be updated:
+After a successful upgrade, the Server Version should be updated:
 ```
 $ kubectl version


@@ -72,7 +72,7 @@ This mode is best to use on static size cluster
 This mode is best to use on dynamic size cluster
-The seed mode also allows multi-clouds and hybrid on-premise/cloud clusters deployement.
+The seed mode also allows multi-clouds and hybrid on-premise/cloud clusters deployment.
 * Switch from consensus mode to seed mode


@@ -8,7 +8,7 @@
 # oci_vnc_id:
 # oci_subnet1_id:
 # oci_subnet2_id:
-## Overide these default/optional behaviors if you wish
+## Override these default/optional behaviors if you wish
 # oci_security_list_management: All
 ## If you would like the controller to manage specific lists per subnet. This is a mapping of subnet ocids to security list ocids. Below are examples.
 # oci_security_lists:
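As a sketch of the subnet-to-security-list mapping described in the comment above (both OCIDs below are made-up placeholders, not real resources):

```yaml
# Map each subnet OCID to the security list OCID the controller
# should manage for it (placeholder OCIDs for illustration only):
oci_security_lists:
  ocid1.subnet.oc1..exampleaaaa: ocid1.securitylist.oc1..examplebbbb
```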


@@ -1,7 +1,7 @@
 # see roles/network_plugin/canal/defaults/main.yml
 # The interface used by canal for host <-> host communication.
-# If left blank, then the interface is chosing using the node's
+# If left blank, then the interface is choosing using the node's
 # default route.
 # canal_iface: ""


@@ -1,7 +1,7 @@
 ---
 docker_kernel_min_version: '0'
-# overide defaults, missing 17.03 for aarch64
+# Override defaults, missing 17.03 for aarch64
 docker_version: '1.13'
 # http://mirror.centos.org/altarch/7/extras/aarch64/Packages/


@@ -1,6 +1,6 @@
 ---
 # The interface used by canal for host <-> host communication.
-# If left blank, then the interface is chosing using the node's
+# If left blank, then the interface is choosing using the node's
 # default route.
 canal_iface: ""
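For illustration, pinning canal to a dedicated host interface instead of relying on the default-route lookup would look like this; the interface name is an assumption, not taken from the commit:

```yaml
# Use a specific cluster-network interface rather than the one
# derived from the node's default route ("eth1" is an example name):
canal_iface: "eth1"
```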


@@ -10,7 +10,7 @@ data:
 etcd_endpoints: "{{ etcd_access_addresses }}"
 # The interface used by canal for host <-> host communication.
-# If left blank, then the interface is chosing using the node's
+# If left blank, then the interface is choosing using the node's
 # default route.
 flanneld_iface: "{{ canal_iface }}"