
RBD Volume Provisioner for Kubernetes 1.5+

rbd-provisioner is an out-of-tree dynamic provisioner for Kubernetes 1.5+. You can use it to quickly and easily deploy Ceph RBD storage that works almost anywhere.

It works just like the in-tree dynamic provisioner. For more information on how dynamic provisioning works, see the docs or this blog post.

Development

Compile the provisioner

make

Make the container image and push to the registry

make push

Test instructions

  • Start Kubernetes local cluster

See Kubernetes.

  • Create a Ceph admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
  • Create a Ceph pool and a user secret
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/secret
kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
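
The grep/awk pipeline in the secret steps above extracts just the key field from the `ceph auth` output, with `xargs echo -n` stripping the trailing newline. You can sanity-check the pipeline without a Ceph cluster by feeding it a line in the same shape (the key below is a made-up placeholder, not a real Ceph key):

```shell
# Simulate one line of 'ceph auth get client.admin' output and run it
# through the same extraction pipeline; only the bare key should remain,
# written to /tmp/secret with no trailing newline.
printf 'key = AQBexampleKey0123456789==\n' \
  | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
cat /tmp/secret
```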
  • Start RBD provisioner

The following example uses rbd-provisioner-1 as the identity for the instance and assumes kubeconfig is at /root/.kube. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.

docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1

Alternatively, deploy it in kubernetes, see deployment.
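
For the in-cluster alternative, a minimal Deployment manifest might look like the sketch below. This is not the exact manifest shipped in the deployment directory: the namespace, image tag, and the `rbd-provisioner` ServiceAccount (which would need RBAC access to PVs, PVCs, StorageClasses, secrets, and events) are assumptions to adapt to your cluster:

```shell
# Write a hedged sketch of an rbd-provisioner Deployment. Names, namespace,
# and serviceAccountName are illustrative and assume matching RBAC exists.
cat <<'EOF' > rbd-provisioner-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner   # assumed; needs RBAC for PVs/PVCs/classes
      containers:
        - name: rbd-provisioner
          image: quay.io/external_storage/rbd-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
EOF
```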

  • Create an RBD Storage Class

Replace the Ceph monitor's IP in examples/class.yaml with your own, then create the storage class:

kubectl create -f examples/class.yaml
  • Create a claim
kubectl create -f examples/claim.yaml
  • Create a Pod using the claim
kubectl create -f examples/test-pod.yaml
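
If examples/class.yaml is not at hand, a storage class for this provisioner looks roughly like the sketch below. The monitor address is a placeholder to replace with your own; the pool and secret names match the earlier `ceph` and `kubectl` steps, but treat the whole manifest as illustrative rather than the repository's exact example:

```shell
# Hedged sketch of an RBD storage class for the ceph.com/rbd provisioner.
# The monitor address is a placeholder; pool/user/secret names follow the
# values used in the secret-creation steps above.
cat <<'EOF' > class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.1.11:6789        # placeholder: your Ceph monitor IP:port
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
EOF
```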

Acknowledgements

  • This provisioner is extracted from Kubernetes core with some modifications for this project.