Improve markdownlint coverage (#7075)

Currently markdownlint covers only ./README.md and the md files under ./docs.
However, we have many md files under other directories as well.
This change enables markdownlint for those md files too.
Kenichi Omichi 2020-12-22 04:44:26 -08:00 committed by GitHub
parent 286191ecb7
commit 1347bb2e4b
5 changed files with 26 additions and 28 deletions


@@ -64,9 +64,10 @@ markdownlint:
   tags: [light]
   image: node
   before_script:
-    - npm install -g markdownlint-cli
+    - npm install -g markdownlint-cli@0.22.0
   script:
-    - markdownlint README.md docs --ignore docs/_sidebar.md
+    # TODO: Remove "grep -v" part to enable markdownlint for all md files
+    - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform | grep -v contrib/vault | grep -v contrib/network-storage) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
 ci-matrix:
   stage: unit-tests


@@ -24,14 +24,14 @@ experience.
 You can enable the use of a Bastion Host by changing **use_bastion** in group_vars/all to **true**. The generated
 templates will then include an additional bastion VM which can then be used to connect to the masters and nodes. The option
 also removes all public IPs from all other VMs.
 ## Generating and applying
 To generate and apply the templates, call:
 ```shell
-$ ./apply-rg.sh <resource_group_name>
+./apply-rg.sh <resource_group_name>
 ```
 If you change something in the configuration (e.g. number of nodes) later, you can call this again and Azure will
@@ -42,25 +42,23 @@ take care about creating/modifying whatever is needed.
 If you need to delete all resources from a resource group, simply call:
 ```shell
-$ ./clear-rg.sh <resource_group_name>
+./clear-rg.sh <resource_group_name>
 ```
 **WARNING** this really deletes everything from your resource group, including everything that was later created by you!
 ## Generating an inventory for kubespray
 After you have applied the templates, you can generate an inventory with this call:
 ```shell
-$ ./generate-inventory.sh <resource_group_name>
+./generate-inventory.sh <resource_group_name>
 ```
 It will create the file ./inventory which can then be used with kubespray, e.g.:
 ```shell
-$ cd kubespray-root-dir
-$ sudo pip3 install -r requirements.txt
-$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
+cd kubespray-root-dir
+sudo pip3 install -r requirements.txt
+ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
 ```


@@ -6,6 +6,7 @@ to serve as Kubernetes "nodes", which in turn will run
 called DIND (Docker-IN-Docker).
 The playbook has two roles:
 - dind-host: creates the "nodes" as containers in localhost, with
   appropriate settings for DIND (privileged, volume mapping for dind
   storage, etc).
@@ -27,7 +28,7 @@ See below for a complete successful run:
 1. Create the node containers
-~~~~
+```shell
 # From the kubespray root dir
 cd contrib/dind
 pip install -r requirements.txt
@@ -36,15 +37,15 @@ ansible-playbook -i hosts dind-cluster.yaml
 # Back to kubespray root
 cd ../..
-~~~~
+```
 NOTE: if the playbook run fails with something like below error
 message, you may need to specifically set `ansible_python_interpreter`,
 see `./hosts` file for an example expanded localhost entry.
-~~~
+```shell
 failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
-~~~
+```
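For reference (not part of this commit), an expanded localhost entry of the kind that NOTE points at could look roughly like the following; the interpreter path is an assumption for a typical Debian/Ubuntu host:

```ini
# hypothetical expanded entry for ./hosts -- adjust the interpreter path to your system
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
```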
 2. Customize kubespray-dind.yaml
@@ -52,33 +53,33 @@ Note that there's coupling between above created node containers
 and `kubespray-dind.yaml` settings, in particular regarding selected `node_distro`
 (as set in `group_vars/all/all.yaml`), and docker settings.
-~~~
+```shell
 $EDITOR contrib/dind/kubespray-dind.yaml
-~~~
+```
 3. Prepare the inventory and run the playbook
-~~~
+```shell
 INVENTORY_DIR=inventory/local-dind
 mkdir -p ${INVENTORY_DIR}
 rm -f ${INVENTORY_DIR}/hosts.ini
 CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
 ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
-~~~
+```
 NOTE: You could also test other distros without editing files by
 passing `--extra-vars` as per below commandline,
 replacing `DISTRO` by either `debian`, `ubuntu`, `centos`, `fedora`:
-~~~
+```shell
 cd contrib/dind
 ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO
 cd ../..
 CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
 ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
-~~~
+```
 ## Resulting deployment
@@ -89,7 +90,7 @@ from the host where you ran kubespray playbooks.
 Running from an Ubuntu Xenial host:
-~~~
+```shell
 $ uname -a
 Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24
 15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
@@ -149,14 +150,14 @@ kube-system weave-net-xr46t 2/2 Running 0
 $ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
 {"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
-~~~
+```
 ## Using ./run-test-distros.sh
 You can use `./run-test-distros.sh` to run a set of tests via DIND,
 and excerpt from this script, to get an idea:
-~~~
+```shell
 # The SPEC file(s) must have two arrays as e.g.
 # DISTROS=(debian centos)
 # EXTRAS=(
@@ -169,7 +170,7 @@ and excerpt from this script, to get an idea:
 #
 # Each $EXTRAS element will be whitespace split, and passed as --extra-vars
 # to main kubespray ansible-playbook run.
-~~~
+```
 See e.g. `test-some_distros-most_CNIs.env` and
 `test-some_distros-kube_router_combo.env` in particular for a richer


@@ -5,7 +5,7 @@ deployment on VMs.
 This playbook does not create Virtual Machines, nor does it run Kubespray itself.
-### User creation
+## User creation
 If you want to create a user for running Kubespray deployment, you should specify
 both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.


@@ -9,8 +9,6 @@ Ubuntu Trusty |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico
 RHEL 7.2 |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-calico-rhel72/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-flannel-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-flannel-rhel72/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-weave-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-weave-rhel72/)|
 CentOS 7 |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-calico-centos7/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-flannel-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-flannel-centos7/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-weave-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-weave-centos7/)|
 ## Test environment variables
 ### Common