
# Local Storage Provisioner

The local storage provisioner is NOT a dynamic storage provisioner as you would expect from a cloud provider. Instead, it simply creates PersistentVolumes for all mounts under the `host_dir` of the specified storage class. These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary. Example:

```yaml
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
  fast-disks:
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
    block_cleaner_command:
      - "/scripts/shred.sh"
      - "2"
    volume_mode: Filesystem
    fs_type: ext4
```

For each key in `local_volume_provisioner_storage_classes` a StorageClass with the same name is created. The subkeys of each storage class are converted to camelCase and added as attributes to the StorageClass. The result of the above example is:

```yaml
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
    fast-disks:
      hostDir: /mnt/fast-disks
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: ext4
```

The default StorageClass is `local-storage` on `/mnt/disks`; the rest of this document uses that path as an example.
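As a quick illustration, a claim against the default class might look like the following sketch (the claim name and requested size are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim    # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi             # must fit within one of the discovered volumes
```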

## Examples to create local storage volumes

### tmpfs method

```bash
for vol in vol1 vol2 vol3; do
  mkdir /mnt/disks/$vol
  mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```

The tmpfs method is not recommended for production because the mount is not persistent and data will be deleted on reboot.

### Mount physical disks

```bash
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```

Physical disks are recommended for production environments because they offer complete isolation in terms of I/O and capacity.

### Mount unpartitioned physical devices

```bash
for disk in /dev/sdc /dev/sdd /dev/sde; do
  ln -s $disk /mnt/disks
done
```

This saves the time of pre-creating filesystems. Note that your storage class must have `volume_mode` set to `"Filesystem"` and `fs_type` defined, as in the sketch below. If either is not set, the disk will be added as a raw block device.
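A storage class entry for such devices could look like this sketch, following the configuration format shown above (the class name and paths are illustrative):

```yaml
local_volume_provisioner_storage_classes:
  raw-ssds:                    # illustrative class name
    host_dir: /mnt/disks       # directory holding the device symlinks
    mount_dir: /mnt/disks
    volume_mode: Filesystem    # required for filesystem PVs
    fs_type: ext4              # required, or the device is exposed as raw block
```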

### File-backed sparsefile method

```bash
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```

If you have a development environment and only one disk, this is the best way to limit the quota of persistent volumes.

### Simple directories

In a development environment, using `mount --bind` also works, but there is no capacity management.
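For example (the source and target paths are illustrative):

```bash
# Bind-mount a plain directory into the discovery directory
mkdir -p /var/lib/vol1 /mnt/disks/vol1
mount --bind /var/lib/vol1 /mnt/disks/vol1
```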

### Block volumeMode PVs

Create a symbolic link under the discovery directory to the block device on the node. To use raw block devices in pods, `volume_mode` should be set to `"Block"`.
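A matching storage class entry might look like this sketch (the class name and path are illustrative):

```yaml
local_volume_provisioner_storage_classes:
  block-devices:               # illustrative class name
    host_dir: /mnt/block-disks # discovery directory containing the symlinks
    mount_dir: /mnt/block-disks
    volume_mode: Block         # expose discovered devices as raw block PVs
```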

## Usage notes

The beta `PV.NodeAffinity` field is used by default. If running against an older Kubernetes version, the `useAlphaAPI` flag must be set in the provisioner's ConfigMap.
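If you do need it, a minimal sketch of how that flag might sit in the ConfigMap data (alongside the `storageClassMap` shown earlier):

```yaml
data:
  useAlphaAPI: "true"   # only needed on K8s versions without beta PV.NodeAffinity
```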

The volume provisioner cannot calculate volume sizes correctly, so you should delete the DaemonSet pod on the relevant host after creating volumes. The pod will be recreated and read the size correctly.
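A sketch of that workflow, assuming the default DaemonSet name (`local-volume-provisioner`) and the `kube-system` namespace:

```bash
# List provisioner pods with their nodes, then delete the one on the relevant host
kubectl -n kube-system get pods -o wide | grep local-volume-provisioner
kubectl -n kube-system delete pod <pod-on-that-node>   # the DaemonSet recreates it
```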

Make sure any mounts persist across reboots via `/etc/fstab` or systemd mount units (for CoreOS/Container Linux and Flatcar). Pods with persistent volume claims will not be able to start if the mounts become unavailable.
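For example, to persist the physical-disk mount from above via `/etc/fstab` (the UUID is a placeholder):

```bash
# Look up the filesystem UUID, then add an fstab entry so the mount survives reboots
blkid /dev/vdb1
echo "UUID=<uuid> /mnt/disks/ssd1 ext4 defaults 0 2" >> /etc/fstab
```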

## Further reading

Refer to the upstream docs: <https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume>