CephFS Volume Provisioner for Kubernetes 1.5+
Using Ceph volume client
Development
Compile the provisioner
make
Make the container image and push to the registry
make push
Test instructions
- Start Kubernetes local cluster
- Create a Ceph admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
- Start CephFS provisioner
The following example uses cephfs-provisioner-1 as the identity for the instance and assumes the kubeconfig is at /root/.kube. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
Alternatively, deploy it in Kubernetes; see the deployment manifests.
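As a rough illustration, a minimal in-cluster sketch is shown below. The image reference is a placeholder (use the image you built and pushed with make push), and a cephfs-provisioner ServiceAccount with RBAC permissions to manage PersistentVolumes, PersistentVolumeClaims, StorageClasses, secrets and events is also required; refer to the deployment manifests for the complete set of objects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      serviceAccountName: cephfs-provisioner
      containers:
        - name: cephfs-provisioner
          # Placeholder image reference: use the image built with "make push"
          image: cephfs-provisioner:latest
          command:
            - /usr/local/bin/cephfs-provisioner
          args:
            # The identity must stay stable across restarts and be unique per instance
            - -id=cephfs-provisioner-1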
- Create a CephFS Storage Class
Replace the Ceph monitor IP in example/class.yaml with your own and create the storage class:
kubectl create -f example/class.yaml
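For reference, the storage class manifest looks roughly like the sketch below. The monitor address and claim root shown here are placeholders, and the exact parameter set may vary between releases, so treat example/class.yaml as authoritative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  # Replace with your Ceph monitor address(es), comma-separated
  monitors: 172.24.0.6:6789
  # Ceph client used to create volumes and users; matches the admin secret created above
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
  # Root path within CephFS under which volumes are provisioned
  claimRoot: /volumes/kubernetes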
- Create a claim
kubectl create -f example/claim.yaml
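The claim is a standard PersistentVolumeClaim referencing the storage class; the name and requested size below are illustrative.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Not enforced by kernel CephFS (see known limitations below)
      storage: 1Gi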
- Create a Pod using the claim
kubectl create -f example/test-pod.yaml
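The test pod simply mounts the claim and writes a file to it; the image and claim name below are illustrative.
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  restartPolicy: Never
  containers:
    - name: test-pod
      image: busybox
      # Touch a file on the mounted CephFS volume to verify provisioning worked
      command: ["/bin/sh", "-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
      volumeMounts:
        - name: pvc
          mountPath: /mnt
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: claim1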
Known limitations
- Kernel CephFS doesn't work with SELinux; setting an SELinux label in a Pod's securityContext will not work.
- Kernel CephFS doesn't support quotas or capacity limits; the capacity requested by a PVC is not enforced or validated.
- Currently, each Ceph user created by the provisioner has the "allow r" MDS cap to permit CephFS mounts.
Acknowledgement
Inspired by the CephFS Manila provisioner and conversations with John Spray.