Environment:
Kubernetes master: 192.168.174.110
Kubernetes work-node-01: 192.168.174.125
Kubernetes work-node-02: 192.168.174.126
Kubernetes version: v1.31.3
Installation method: kubeadm init
Host OS: Ubuntu 22.04 LTS
CNI and version: calico v3.29.0
CRI and version: containerd://2.0.0
My custom storageclass.yaml file contents
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
My custom pv.yaml file contents
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-local-1gi
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /download/test-volume/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node-01
          - k8s-node-02
My custom pvc.yaml file contents
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-local-1gi-claim
  namespace: test-k8s
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
My custom deployment-5.yaml file contents
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-volume-test-05
  name: deployment-volume-test-05
  namespace: test-k8s
spec:
  selector:
    matchLabels:
      test-app: pod-volume-test-05
  template:
    metadata:
      labels:
        test-app: pod-volume-test-05
    spec:
      containers:
      - image: busybox:latest
        name: test-busybox-05
        command: ["/bin/sh", "-c", "sleep 12h"]
        volumeMounts:
        - name: example-volume
          mountPath: /data
          readOnly: false
      - image: nginx:latest
        name: test-nginx-05
        ports:
        - containerPort: 80
        volumeMounts:
        - name: example-volume
          mountPath: /data
          readOnly: false
      volumes:
      - name: example-volume
        persistentVolumeClaim:
          claimName: example-pvc-local-1gi-claim
When I manually delete the PVC with kubectl delete, my understanding is that the PV should automatically be reclaimed and all data in the /download/test-volume/ directory (on the node's local filesystem) should be cleared. Instead, kubectl describe persistentvolume/example-pv-local-1gi shows the PV status as Failed. Details:
kubectl get storageclass,pv,pvc -n test-k8s
kubectl describe persistentvolume/example-pv-local-1gi
error getting deleter volume plugin for volume "example-pv-local-1gi": no volume plugin matched
Does a local volume simply not support the "Reclaim Policy: Delete" setting?
If it is supported, what else do I need to configure to make it work?
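For reference, my current assumption (based on the "no volume plugin matched" error) is that a statically provisioned local PV has no deleter plugin, so I would have to switch to Retain and clean up the node's directory myself. A sketch of the StorageClass I would try instead; this is my guess at a workaround, not something I have confirmed:

```yaml
# Assumption: kubernetes.io/no-provisioner cannot delete volumes, so the
# reclaim policy is changed to Retain; data under /download/test-volume/
# would then have to be removed manually on the node after deleting the PV.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```

Is this the expected approach for local volumes, or is there a way to get automatic deletion?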