Cluster information:
Kubernetes version: 1.22
Problem
When using a Local Block Volume, we may mistakenly assign a device that is already in use. For example, suppose we define a local block volume as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: support-es1-1-1-block-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 3500Gi
  local:
    path: /dev/nvme2n1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 10.28.0.172
  persistentVolumeReclaimPolicy: Delete
  volumeMode: Block
Suppose someone has already used the device /dev/nvme2n1: they created a filesystem on it and mounted it at a path such as /data. If a Pod then uses a PVC that is bound to this PV, the Pod will write directly to the block device, corrupting the existing filesystem.
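For concreteness, here is a minimal sketch of the consuming side. The PVC and Pod names are hypothetical; the PVC must also set volumeMode: Block to bind to the PV above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: support-es1-block-claim    # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  storageClassName: ""
  volumeName: support-es1-1-1-block-0
  resources:
    requests:
      storage: 3500Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-writer    # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    # Writing anywhere on the raw device clobbers the existing filesystem.
    command: ["sh", "-c", "dd if=/dev/zero of=/dev/xvda bs=1M count=10"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: support-es1-block-claim

Because the container receives the raw device via volumeDevices, nothing in this path prevents it from overwriting the filesystem the administrator created on /dev/nvme2n1.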
Solution
I propose adding a check here. In the local volume setup code, the current implementation is:
// SetUpDevice prepares the volume to the node by the plugin specific way.
func (m *localVolumeMapper) SetUpDevice() (string, error) {
	return "", nil
}
We should add a mount-point check in this function, similar to what we already do for filesystem volumes in SetUpAt:
// SetUpAt bind mounts the directory to the volume path and sets up volume ownership
func (m *localVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs) error {
	...
	notMnt, err := mount.IsNotMountPoint(m.mounter, dir)
	klog.V(4).Infof("LocalVolume mount setup: PodDir(%s) VolDir(%s) Mounted(%t) Error(%v), ReadOnly(%t)", dir, m.globalPath, !notMnt, err, m.readOnly)
	if err != nil && !os.IsNotExist(err) {
		klog.Errorf("cannot validate mount point: %s %v", dir, err)
		return err
	}
	if !notMnt {
		return nil
	}
	...
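Concretely, the check could scan the node's mount table and refuse a device that already backs a mount point. Below is a minimal sketch, assuming m.mounter and m.globalPath on the mapper resolve to the node mounter and the device path (as they do for the mounter in SetUpAt via the embedded localVolume); the helper name deviceIsMounted is hypothetical, not existing plugin code:

// deviceIsMounted reports whether devicePath already backs a mount point
// on the node, by scanning the mount table via mounter.List().
// (Hypothetical helper, not part of the current plugin.)
func deviceIsMounted(mounter mount.Interface, devicePath string) (bool, error) {
	mountPoints, err := mounter.List()
	if err != nil {
		return false, fmt.Errorf("cannot list mount points: %v", err)
	}
	for _, mp := range mountPoints {
		if mp.Device == devicePath {
			return true, nil
		}
	}
	return false, nil
}

// SetUpDevice prepares the volume to the node by the plugin specific way.
func (m *localVolumeMapper) SetUpDevice() (string, error) {
	// Refuse to map a block device that is already mounted somewhere,
	// since raw writes to it would corrupt the existing filesystem.
	mounted, err := deviceIsMounted(m.mounter, m.globalPath)
	if err != nil {
		return "", err
	}
	if mounted {
		return "", fmt.Errorf("local block volume %s is already mounted on this node", m.globalPath)
	}
	return "", nil
}

Unlike IsNotMountPoint in SetUpAt, which inspects a single directory, this scans the whole mount table, because the hazard is the device being mounted anywhere on the node, not just at the volume's own path.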