Plugin for pod "app-5..": could not find v1.PersistentVolume "pvc-fb..."

I am having issues connecting my app to its persistent volume on my local macOS Minikube.

Cluster information:

Kubernetes 1.19.4:
kubectl version -o json

{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.4",
    "gitCommit": "d360454c9bcd1634cf4cc52d1867af5491dc9c5f",
    "gitTreeState": "clean",
    "buildDate": "2020-11-12T01:09:16Z",
    "goVersion": "go1.15.4",
    "compiler": "gc",
    "platform": "darwin/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.4",
    "gitCommit": "d360454c9bcd1634cf4cc52d1867af5491dc9c5f",
    "gitTreeState": "clean",
    "buildDate": "2020-11-11T13:09:17Z",
    "goVersion": "go1.15.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

Installation method:
Host OS: macOS 10.15.7

I checked one of the pods first:

kubectl describe pod app-5b5c586456-7n4wh -n smt-local 
Name:           app-5b5c586456-7n4wh
Namespace:      smt-local
Priority:       0
Node:           <none>
Labels:         app=web
                pod-template-hash=5b5c586456
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/app-5b5c586456
Containers:
  laravel:
    Image:      smart48/smt-laravel:latest
    Port:       9000/TCP
    Host Port:  0/TCP
    Limits:
      cpu:  500m
    Requests:
      cpu:        250m
    Environment:  <none>
    Mounts:
      /data/smtapp from code-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cp4nw (ro)
  nginx:
    Image:        smart48/smt-nginx:latest
    Port:         9376/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /data/nginx/config from nginx-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cp4nw (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  code-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  code-pv-claim
    ReadOnly:   false
  nginx-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  code-pv-claim
    ReadOnly:   false
  default-token-cp4nw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cp4nw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  47s (x19 over 22m)  default-scheduler  running "VolumeBinding" filter plugin for pod "app-5b5c586456-7n4wh": could not find v1.PersistentVolume "pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e"

and saw that the persistent volume could not be found for this deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: smt-local
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: laravel
          image: smart48/smt-laravel:latest
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          volumeMounts:
          - name: code-storage
            mountPath: /data/smtapp
          # examples
          # - name: orientdb-config
          #   mountPath: /data/orientdb/config
          # - name: orientdb-databases
          #   mountPath: /data/orientdb/databases 
          # - name: orientdb-backup
          #   mountPath: /data/orientdb/backup
        - name: nginx
          image: smart48/smt-nginx:latest
          ports:
            - containerPort: 9376
          volumeMounts:
          # example
          - name: nginx-config
            mountPath: /data/nginx/config
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code-pv-claim
        - name: nginx-config
          persistentVolumeClaim:
            claimName: code-pv-claim
        # examples
        # - name: orientdb-config
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim
        # - name: orientdb-databases
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim
        # - name: orientdb-backup
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim

I then checked the persistent volumes:

➜  smt-deploy git:(main) ✗ kubectl get pvc -n smt-local
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
code-pv-claim    Lost     pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e   0                         standard       21h
mysql-pv-claim   Lost     pvc-2c47dbca-f236-4962-a67c-b50f7a2e9cef   0                         standard       22h
redis-pv-claim   Bound    pvc-f7329ae0-1e03-4e16-8e1e-7ebc3de9b591   5Gi        RWO            standard       29m

And I saw that two of them, which I had set up again just this morning after removing them all yesterday, show up with status Lost.
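
To see what a Lost claim is still pointing at, I believe describing the claim and listing the cluster-scoped volumes would show the stale binding (a quick check, not something I ran at the time):

# show which PersistentVolume the claim thinks it is bound to
kubectl describe pvc code-pv-claim -n smt-local

# list the PersistentVolumes that actually exist (PVs are cluster-scoped, so no namespace flag)
kubectl get pv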

My pvc.yaml is

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pv-claim
  namespace: smt-local
  labels:
    type: code
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

and my persistent volume pv.yaml is

apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
  namespace: smt-local
spec:
  accessModes:
      - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    # minikube is configured to persist files stored under /data/ and a few 
    # other directories such as /var/lib/minikube in the _vm_
    path: /data
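
One thing I am wondering while looking at these two manifests: the claim does not set a storageClassName, so it seems to fall under Minikube's default standard StorageClass and gets dynamically provisioned instead of binding to my hand-made code-pv (and, as far as I understand, the namespace field on a PersistentVolume is ignored because PVs are cluster-scoped). If I wanted the claim to bind statically to code-pv, I think something like this untested sketch would do it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
spec:
  storageClassName: manual       # any non-default class name, must match the claim
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pv-claim
  namespace: smt-local
spec:
  storageClassName: manual       # opt out of dynamic provisioning via the default class
  volumeName: code-pv            # pin the claim to this exact volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi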

You can also see all the files here: https://github.com/smart48/smt-deploy/tree/main/local

The reason we see

kubectl get pvc -n smt-local          
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
code-pv-claim    Lost     pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e   0                         standard       21h
mysql-pv-claim   Lost     pvc-2c47dbca-f236-4962-a67c-b50f7a2e9cef   0                         standard       22h
redis-pv-claim   Bound    pvc-f7329ae0-1e03-4e16-8e1e-7ebc3de9b591   5Gi        RWO            standard       50m

two claims with status Lost here is that I removed the volumes completely yesterday. But today I ran the manifests again to recreate them, so why does the binding simply not happen?

What am I missing here? How can I make the application load the volume properly on my Minikube?

This is all rather painful with Kubernetes, I must say. I decided to rename the Persistent Volume Claims for the code and the MySQL database with a line like name: mysql-pv-claim-v2, based on reading a Humblec blog post. Now I see these two new PVCs getting bound:

kubectl get pvc -n smt-local                    
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
code-pv-claim       Lost     pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e   0                         standard       22h
code-pv-claim-v2    Bound    pvc-b72cf47d-ca11-4248-8fc9-c5c1167e21d4   100Mi      RWO            standard       60s
mysql-pv-claim      Lost     pvc-2c47dbca-f236-4962-a67c-b50f7a2e9cef   0                         standard       22h
mysql-pv-claim-v2   Bound    pvc-cc8d295c-4bc6-4ab4-bfc8-7f62cce487e7   5Gi        RWO            standard       5s
redis-pv-claim      Bound    pvc-f7329ae0-1e03-4e16-8e1e-7ebc3de9b591   5Gi        RWO            standard       60m
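
For reference, the v2 code claim is just the original pvc.yaml with the name bumped, roughly:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pv-claim-v2
  namespace: smt-local
  labels:
    type: code
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi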

But I am still stuck with the old claims that are out there as “Lost”. How can I get rid of these? This:

 kubectl delete pvc pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e -n smt-local 
Error from server (NotFound): persistentvolumeclaims "pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e" not found

Did not work.
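
In hindsight I think that is because pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e is the name of the PersistentVolume the claim was bound to, not the name of the claim itself. Deleting by the claim name (or deleting a stale PersistentVolume directly, if one were still around) should be the way, something like:

# delete the claim by its own name
kubectl delete pvc code-pv-claim -n smt-local

# or remove a leftover PersistentVolume (PVs are not namespaced)
kubectl delete pv pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e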

And secondly, was there no better way to handle these persistent volumes, given that they were not even really in use yet?

I found this SO thread https://stackoverflow.com/questions/53202727/how-to-delete-only-unmounted-pvcs-and-pvs and this suggestion to remove persistent volume claims:

kubectl -n <namespace> get pvc | tail -n +2 | grep -v Bound | \
  awk '{print $1}' | xargs -I{} kubectl -n <namespace> delete pvc {}

so I did a

kubectl -n smt-local get pvc | tail -n +2 | grep -v Bound | \
  awk '{print $1}' | xargs -I{} kubectl -n sms-local delete pvc {}

But then I got this:

kubectl -n smt-local get pvc | tail -n +2 | grep -v Bound | \
  awk '{print $1}' | xargs -I{} kubectl -n sms-local delete pvc {}
Error from server (NotFound): persistentvolumeclaims "code-pv-claim" not found
Error from server (NotFound): persistentvolumeclaims "mysql-pv-claim" not found

Even

kubectl -n smt-local get pvc | tail -n +2 | grep -v Lost | \ 
  awk '{print $1}' | xargs -I{} kubectl -n sms-local delete pvc {}
Error from server (NotFound): persistentvolumeclaims "code-pv-claim-v2" not found
Error from server (NotFound): persistentvolumeclaims "mysql-pv-claim-v2" not found
Error from server (NotFound): persistentvolumeclaims "redis-pv-claim" not found

caused more issues… I thought I had lost the new versions, but none were found for some reason…

Update and Solution

My bad. It did work with proper namespaces:


kubectl -n default get pvc | tail -n +2 | grep -v Bound | \
  awk '{print $1}' | xargs -I{} kubectl -n default  delete pvc {}
persistentvolumeclaim "mysql-pv-claim" deleted
persistentvolumeclaim "redis-pv-claim" deleted
➜  smt-deploy git:(main) ✗ kubectl -n smt-local get pvc | tail -n +2 | grep -v Bound | \
  awk '{print $1}' | xargs -I{} kubectl -n smt-local delete pvc {}
persistentvolumeclaim "code-pv-claim" deleted
persistentvolumeclaim "mysql-pv-claim" deleted

Once done, I could recreate the persistent volume claims with the old names and then remove the version 2 claims. I could do this because I of course had nothing to lose here…
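
Concretely, that last step was roughly this (file names are my guess at how the repo is laid out):

# recreate the claims under their old names
kubectl apply -f pvc.yaml
kubectl apply -f mysql-pvc.yaml   # hypothetical file name for the MySQL claim

# then drop the temporary v2 claims
kubectl delete pvc code-pv-claim-v2 mysql-pv-claim-v2 -n smt-local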

Note
I also now see containers being created :slight_smile:

smt-deploy git:(main) ✗ kubectl get pods -n smt-local 
NAME                     READY   STATUS              RESTARTS   AGE
app-5b5c586456-7n4wh     0/2     ContainerCreating   0          84m
app-5b5c586456-lxvct     0/2     ContainerCreating   0          84m
app-5b5c586456-rvk6j     0/2     ContainerCreating   0          84m
mysql-6586977d97-l442h   0/1     ErrImageNeverPull   0          83m

Well, all except for the MySQL one. But for the app pods it seems to be happening, and we did not have that before as I recall, since all pods were Pending. Now we see:

kubectl get all -n smt-local 
NAME                         READY   STATUS              RESTARTS   AGE
pod/app-5b5c586456-7n4wh     0/2     ContainerCreating   0          87m
pod/app-5b5c586456-lxvct     0/2     ContainerCreating   0          87m
pod/app-5b5c586456-rvk6j     0/2     ContainerCreating   0          87m
pod/mysql-6586977d97-l442h   0/1     ErrImageNeverPull   0          85m

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/app-service   ClusterIP   10.104.171.106   <none>        8080/TCP   80m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app     0/3     3            0           87m
deployment.apps/mysql   0/1     1            0           85m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/app-5b5c586456     3         3         0       87m
replicaset.apps/mysql-6586977d97   1         1         0       85m
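
As for the MySQL pod’s ErrImageNeverPull: as I understand it, that status means the pod uses imagePullPolicy: Never and the image is not present on the node, so I probably need to build or load the image into Minikube first, roughly like this (the image name is hypothetical, I need to check the MySQL deployment for the real one):

# build the image against Minikube's Docker daemon
eval $(minikube docker-env)
docker build -t smart48/smt-mysql:latest .

# or pre-load a local image into Minikube's cache
minikube cache add smart48/smt-mysql:latest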