fix docs/<file>.md errors identified by markdownlint

* docs/azure-csi.md
* docs/azure.md
* docs/bootstrap-os.md
* docs/calico.md
* docs/debian.md
* docs/fcos.md
* docs/gcp-lb.md
* docs/kubernetes-apps/registry.md
* docs/setting-up-your-first-cluster.md
* docs/vagrant.md
* docs/vars.md
Calin Cristian Andrei 2022-08-07 12:16:58 +00:00
parent b074b91ee9
commit 4a994c82d1
11 changed files with 78 additions and 60 deletions

docs/azure-csi.md

@@ -57,19 +57,28 @@ The name of the network security group your instances are in, can be retrieved v
These will have to be generated first:
- Create an Azure AD Application with:
```ShellSession
az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET
```
Display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
```ShellSession
az ad sp create --id AppId
```
This is the AppId from the last command
- Create the role assignment with:
```ShellSession
az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID
```
azure\_csi\_aad\_client\_id must be set to the AppId, azure\_csi\_aad\_client\_secret is your chosen secret.
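For reference, a minimal sketch of how these values might then be set in group vars (the file path and values below are placeholders, not from the original doc):

```yaml
# e.g. group_vars/all/azure.yml (hypothetical path)
azure_csi_aad_client_id: "00000000-0000-0000-0000-000000000000"   # the AppId noted above
azure_csi_aad_client_secret: "CLIENT_SECRET"                      # the password you chose
```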

docs/azure.md

@@ -71,14 +71,27 @@ The name of the resource group that contains the route table. Defaults to `azur
These will have to be generated first:
- Create an Azure AD Application with:
```ShellSession
az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET
```
display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
```ShellSession
az ad sp create --id AppId
```
This is the AppId from the last command
- Create the role assignment with:
```ShellSession
az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID
```
azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
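Likewise, a minimal sketch of setting these vars (placeholder path and values, not from the original doc):

```yaml
# e.g. group_vars/all/azure.yml (hypothetical path)
azure_aad_client_id: "00000000-0000-0000-0000-000000000000"   # the AppId noted above
azure_aad_client_secret: "CLIENT_SECRET"                      # the password you chose
```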

docs/bootstrap-os.md

@@ -48,11 +48,13 @@ The `kubespray-defaults` role is expected to be run before this role.
Remember to disable fact gathering since Python might not be present on hosts.

```yaml
- hosts: all
  gather_facts: false  # not all hosts might be able to run modules yet
  roles:
    - kubespray-defaults
    - bootstrap-os
```
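Such a play could be run on its own before the rest of the cluster playbooks, for example (inventory path and playbook file name are hypothetical):

```ShellSession
# bootstrap.yml is a hypothetical playbook containing the play above
ansible-playbook -i inventory/mycluster/hosts.yaml --become bootstrap.yml
```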
## License

docs/calico.md

@@ -124,8 +124,7 @@ You need to edit your inventory and add:
* `calico_rr` group with nodes in it. `calico_rr` can be combined with
  `kube_node` and/or `kube_control_plane`. `calico_rr` group also must be a child
  group of `k8s_cluster` group.
* `cluster_id` by route reflector node/group (see details [here](https://hub.docker.com/r/calico/routereflector/))

Here's an example of Kubespray inventory with standalone route reflectors:
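(The doc's own example continues in the file; as a rough, purely illustrative YAML-inventory sketch with hypothetical host names and addresses, such a layout might look like this:)

```yaml
all:
  hosts:
    node1:
      ansible_host: 10.0.0.1
    rr1:
      ansible_host: 10.0.0.10
      cluster_id: "1.0.0.1"   # route reflector cluster ID (a 4-byte, IPv4-formatted value)
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
    calico_rr:
      hosts:
        rr1:
    k8s_cluster:              # calico_rr must be a child group of k8s_cluster
      children:
        kube_control_plane:
        kube_node:
        calico_rr:
```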

docs/debian.md

@@ -3,34 +3,39 @@
Debian Jessie installation Notes:
- Add

```ini
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```

to `/etc/default/grub`. Then update with

```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
```

- Add the [backports](https://backports.debian.org/Instructions/) which contain Systemd 230 and update Systemd.

```ShellSession
apt-get -t jessie-backports install systemd
```
(Necessary because the default Systemd version (215) does not support the "Delegate" directive in service files)
- Add the Ansible repository and install Ansible to get a proper version

```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
- Install Jinja2 and Python-Netaddr

```ShellSession
sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
```
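After the steps above, a quick sanity check could look like this (an optional sketch; exact output will vary with your system):

```ShellSession
cat /proc/cmdline        # should now include cgroup_enable=memory swapaccount=1
systemctl --version      # should report the systemd version from jessie-backports
ansible --version
python -c 'import jinja2, netaddr'
```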
Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

docs/fcos.md

@@ -54,7 +54,7 @@ Prepare ignition and serve via http (e.g. `python -m http.server`)
### create guest

```ShellSession
machine_name=myfcos1
ignition_url=http://mywebserver/fcos.ign

docs/gcp-lb.md

@@ -2,15 +2,19 @@
Google Cloud Platform can be used for the creation of a Kubernetes Service Load Balancer.
This feature is delivered by adding parameters to `kube-controller-manager` and `kubelet`. You need to specify:
```
--cloud-provider=gce
--cloud-config=/etc/kubernetes/cloud-config
```
To get it working in Kubespray, you need to add a tag to your GCE instances, specify it in the Kubespray group vars, and also set `cloud_provider` to `gce`. For example, in the file `group_vars/all/gcp.yml`:
```
cloud_provider: gce
gce_node_tags: k8s-lb
```
When this is set up and you create a Service in Kubernetes with `type=LoadBalancer`, the cloud provider will create a public IP and set up the firewall rules.
Note: the cloud provider runs under the VM service account, so this account needs the correct permissions to be able to create all GCP resources.
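To illustrate (a minimal sketch, not taken from the original doc; the name and ports are hypothetical), such a Service could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: LoadBalancer      # the GCE cloud provider provisions a public IP for this
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```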

docs/kubernetes-apps/registry.md

@@ -29,8 +29,7 @@ use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
@@ -46,7 +45,6 @@ spec:
    fsType: "ext4"
{% endif %}
```

If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
@@ -68,8 +66,7 @@ Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
@@ -82,7 +79,6 @@ spec:
    requests:
      storage: {{ pillar['cluster_registry_disk_size'] }}
```

This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
@@ -93,8 +89,7 @@ gives you the right to use this storage until you release the claim.
Now we can run a Docker registry:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
@@ -138,7 +133,6 @@ spec:
        persistentVolumeClaim:
          claimName: kube-registry-pvc
```

*Note:* if you have set multiple replicas, make sure your CSI driver supports the `ReadWriteMany` accessMode.
@@ -146,8 +140,7 @@ spec:
Now that we have a registry `Pod` running, we can expose it as a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
@@ -164,7 +157,6 @@ spec:
      port: 5000
      protocol: TCP
```

## Expose the registry on each node
@@ -172,8 +164,7 @@ Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following daemonset.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
@@ -207,7 +198,6 @@ spec:
          containerPort: 80
          hostPort: 5000
```

When modifying replication-controller, service and daemon-set definitions, take
care to ensure *unique* identifiers for the rc-svc couple and the daemon-set.
@@ -219,7 +209,7 @@ This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

```ShellSession
$ curl localhost:5000
404 page not found
```
@@ -229,7 +219,7 @@ $ curl localhost:5000
To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

```yaml
image: localhost:5000/user/container
```
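As a rough illustration (not from the original doc; the image name is hypothetical), pushing a locally built image once port 5000 is reachable might look like:

```ShellSession
docker tag user/container localhost:5000/user/container
docker push localhost:5000/user/container
```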
@@ -241,7 +231,7 @@ building locally and want to push to your cluster.
You can use `kubectl` to set up a port-forward from your local node to a
running Pod:

```ShellSession
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')

docs/setting-up-your-first-cluster.md

@@ -252,11 +252,7 @@ Ansible will now execute the playbook, this can take up to 20 minutes.
We will leverage a kubeconfig file from one of the controller nodes to access
the cluster as administrator from our local workstation.

> In this simplified set-up, we did not include a load balancer that usually sits on top of the three controller nodes for a highly available API server endpoint. In this simplified tutorial we connect directly to one of the three controllers.

First, we need to edit the permissions of the kubeconfig file on one of the
controller nodes:

docs/vagrant.md

@@ -58,7 +58,7 @@ see [download documentation](/docs/downloads.md).
The following is an example of setting up and running kubespray using `vagrant`.
For repeated runs, you could save the script to a file in the root of the
kubespray repository and run it by executing `source <name_of_the_file>`.

```ShellSession
# use virtualenv to install all python requirements

docs/vars.md

@@ -81,7 +81,7 @@ following default cluster parameters:
  raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
  (assertion not applicable to calico which doesn't use this as a hard limit, see
  [Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
* *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
* *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
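For instance, a minimal group-vars sketch using the default quoted above (the values are only illustrative):

```yaml
enable_dual_stack_networks: true
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116  # must not overlap with kube_pods_subnet_ipv6
```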
@@ -99,7 +99,7 @@ following default cluster parameters:
* *coredns_k8s_external_zone* - Zone that will be used when CoreDNS k8s_external plugin is enabled
  (default is k8s_external.local)
* *enable_coredns_k8s_endpoint_pod_names* - If enabled, it configures the endpoint_pod_names option for the kubernetes plugin
  on the CoreDNS service.