Improve markdownlint for contrib/network-storage (#7079)

This fixes markdownlint failures under contrib/network-storage and
contrib/vault.
Kenichi Omichi 2020-12-23 00:00:26 -08:00 committed by GitHub
parent 1347bb2e4b
commit 5b5726bdd4
4 changed files with 29 additions and 25 deletions


@@ -67,7 +67,7 @@ markdownlint:
- npm install -g markdownlint-cli@0.22.0
script:
# TODO: Remove "grep -v" part to enable markdownlint for all md files
-  - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform | grep -v contrib/vault | grep -v contrib/network-storage) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
+  - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
ci-matrix:
stage: unit-tests
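
To reproduce this check locally before pushing, the same pinned linter the CI job installs can be run from the repository root; a minimal sketch, assuming Node.js and npm are available:

```shell
# Same pinned version the CI job installs
npm install -g markdownlint-cli@0.22.0

# Same invocation as the updated CI script above
markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
```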


@@ -8,19 +8,19 @@ In the same directory of this ReadMe file you should find a file named `inventor
Change that file to reflect your local setup (adding more machines or removing them and setting the appropriate IP addresses), and save it to `inventory/sample/k8s_gfs_inventory`. Make sure that the settings in `inventory/sample/group_vars/all.yml` make sense for your deployment. Then change to the kubespray root folder and execute (assuming the machines all run Ubuntu):
-```
+```shell
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./cluster.yml
```
This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
-```
+```shell
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
```
If your machines are not using Ubuntu, you need to change the `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines are using one OS and your GlusterFS a different one, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:
-```
+```shell
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
@@ -30,7 +30,7 @@ k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_us
The first step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables would look like:
-```
+```ini
cluster_name = "cluster1"
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
@@ -39,7 +39,7 @@ number_of_k8s_nodes = "0"
public_key_path = "~/.ssh/my-desired-key.pub"
image = "Ubuntu 16.04"
ssh_user = "ubuntu"
-flavor_k8s_node = "node-flavor-id-in-your-openstack"
+flavor_k8s_node = "node-flavor-id-in-your-openstack"
flavor_k8s_master = "master-flavor-id-in-your-openstack"
network_name = "k8s-network"
floatingip_pool = "net_external"
@@ -54,7 +54,7 @@ ssh_user_gfs = "ubuntu"
As explained in the general terraform/openstack guide, you need to source your OpenStack credentials file, add your ssh-key to the ssh-agent and set up environment variables for terraform:
-```
+```shell
$ source ~/.stackrc
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/my-desired-key
@@ -67,7 +67,7 @@ $ echo Setting up Terraform creds && \
Then, from the kubespray directory (the root of the Git checkout), issue the following terraform command to create the VMs for the cluster:
-```
+```shell
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
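
To preview what will be created before applying, the same state and variable flags can be passed to `terraform plan`; a sketch, assuming a Terraform release contemporary with this guide (which still accepts the legacy `-state` flag):

```shell
terraform plan -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
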
@@ -75,18 +75,18 @@ This will create both your Kubernetes and Gluster VMs. Make sure that the ansibl
Then, provision your Kubernetes (kubespray) cluster with the following ansible call:
-```
+```shell
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
```
Finally, provision the glusterfs nodes and add the Persistent Volume setup for GlusterFS in Kubernetes through the following ansible call:
-```
+```shell
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
If you need to destroy the cluster, you can run:
-```
+```shell
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
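
To double-check what that state file currently tracks before destroying anything, older Terraform releases (the ones this guide targets) accept a state file path directly; a hedged sketch:

```shell
terraform show contrib/terraform/openstack/terraform.tfstate
```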


@@ -1,22 +1,26 @@
# Deploy Heketi/Glusterfs into Kubespray/Kubernetes
This playbook aims to automate [this](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md) tutorial. It deploys Heketi/GlusterFS into Kubernetes and sets up a StorageClass.
## Important notice
-> Due to resource limits on the current project maintainers and general lack of contributions we are considering placing Heketi into a near-maintenance mode.[...]
-https://github.com/heketi/heketi#important-notice
+> Due to resource limits on the current project maintainers and general lack of contributions we are considering placing Heketi into a [near-maintenance mode](https://github.com/heketi/heketi#important-notice)
## Client Setup
Heketi provides a CLI that gives users a means to administer the deployment and configuration of GlusterFS in Kubernetes. [Download and install the heketi-cli](https://github.com/heketi/heketi/releases) on your client machine.
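
Once Heketi has been deployed by the playbook below, a quick way to confirm the client can reach it is to query the service; a sketch in which the server address is a hypothetical placeholder for your deployment's endpoint:

```shell
# Point the CLI at your Heketi endpoint (placeholder address, replace with yours)
export HEKETI_CLI_SERVER=http://<heketi-service-address>:8080

# List known clusters to verify connectivity
heketi-cli cluster list
```
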
## Install
Copy `inventory.yml.sample` to `inventory/sample/k8s_heketi_inventory.yml` and change it according to your setup.
-```
+```shell
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi.yml
```
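
When the playbook finishes, one way to confirm that the StorageClass mentioned above was created (assuming `kubectl` is configured for the new cluster):

```shell
kubectl get storageclass
```
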
## Tear down
-```
+```shell
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
```


@@ -1,9 +1,9 @@
-# /!\ The vault role have been retired from the main playbook.
-# This role probably requires a LOT of changes in order to work again
Hashicorp Vault Role
====================
+The vault role have been retired from the main playbook.
+This role probably requires a LOT of changes in order to work again
Overview
--------
@@ -22,7 +22,7 @@ role can generate certs for itself as well. Eventually, this may be improved
to allow alternate backends (such as Consul), but currently the tasks are
hardcoded to only create a Vault role for Etcd.
-2. Cluster
+1. Cluster
This step is where the long-term Vault cluster is started and configured. Its
first task is to stop any temporary instances of Vault, to free the port for
@@ -81,18 +81,18 @@ generated elsewhere, you'll need to copy the certificate and key to the hosts in
Additional Notes:
-- ``groups.vault|first`` is considered the source of truth for Vault variables
-- ``vault_leader_url`` is used as pointer for the current running Vault
-- Each service should have its own role and credentials. Currently those
+* ``groups.vault|first`` is considered the source of truth for Vault variables
+* ``vault_leader_url`` is used as pointer for the current running Vault
+* Each service should have its own role and credentials. Currently those
credentials are saved to ``/etc/vault/roles/<role>/``. The service will
need to read in those credentials, if they want to interact with Vault.
Potential Work
--------------
-- Change the Vault role to not run certain tasks when ``root_token`` and
+* Change the Vault role to not run certain tasks when ``root_token`` and
``unseal_keys`` are not present. Alternatively, allow user input for these
values when missing.
-- Add the ability to start temp Vault with Host or Docker
-- Add a dynamic way to change out the backend role creation during Bootstrap,
+* Add the ability to start temp Vault with Host or Docker
+* Add a dynamic way to change out the backend role creation during Bootstrap,
so other services can be used (such as Consul)