
CephFS Volume Provisioner for Kubernetes 1.5+


Using Ceph volume client

Development

Compile the provisioner

make

Make the container image and push to the registry

make push

Test instruction

  • Start a local Kubernetes cluster

See the Kubernetes documentation.

  • Create a Ceph admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
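The extraction pipeline above can be checked in isolation: it pulls the `key = ` line out of the `ceph auth get` output, keeps only the key value, and writes it with no trailing newline. The sample below uses a placeholder key, not a real credential:

```shell
# Sample text in the shape `ceph auth get client.admin` prints;
# the key value is a placeholder for illustration only.
sample='[client.admin]
  key = AQBTestKeyOnly==
  caps mds = "allow *"'

# Same pipeline as the real command: keep the key line, take the
# third whitespace-separated field, write it without a newline.
printf '%s\n' "$sample" | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
cat /tmp/secret
```

The `xargs echo -n` step matters: kubectl stores the secret file verbatim, and a trailing newline in the key would break authentication against the Ceph cluster.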
  • Start CephFS provisioner

The following example uses cephfs-provisioner-1 as the identity for the instance and assumes kubeconfig is at /root/.kube. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.

docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1

Alternatively, deploy it in Kubernetes; see deployment.

  • Create a CephFS Storage Class

Replace the Ceph monitor's IP in the example class with your own, then create the storage class:

kubectl create -f example/class.yaml
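As a rough sketch of what such a class looks like (the parameter names here are assumptions to verify against your copy of example/class.yaml, and the monitor address is a placeholder):

```shell
# Hypothetical StorageClass for the cephfs provisioner; adjust the
# monitor address and secret details for your cluster before applying.
cat > /tmp/class.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.6:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
EOF
```

The adminSecretName/adminSecretNamespace values point at the ceph-secret-admin secret created in the earlier step.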
  • Create a claim
kubectl create -f example/claim.yaml
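A minimal claim against a class named cephfs might look like the following (the claim name and requested size are illustrative, not a copy of example/claim.yaml):

```shell
# Hypothetical PersistentVolumeClaim bound to the cephfs StorageClass.
cat > /tmp/claim.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```

Note the requested 1Gi is not enforced by CephFS (see the known limitations below).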
  • Create a Pod using the claim
kubectl create -f example/test-pod.yaml
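The test pod only needs to mount the claim; a hedged sketch in the spirit of example/test-pod.yaml (the pod name, image, and mount path are assumptions):

```shell
# Hypothetical test pod that mounts the claim created above and writes
# a marker file to prove the CephFS volume is usable.
cat > /tmp/test-pod.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim1
EOF
```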

Known limitations

  • Kernel CephFS doesn't work with SELinux; setting an SELinux label in a Pod's securityContext will not work.
  • Kernel CephFS doesn't support quotas or capacity limits; the capacity requested by a PVC is not enforced or validated.
  • Currently, each Ceph user created by the provisioner has the allow r MDS cap to permit mounting CephFS.

Acknowledgement

Inspired by the CephFS Manila provisioner and a conversation with John Spray.