I am having issues connecting my app to a persistent volume on my local macOS Minikube.
Cluster information:
Kubernetes version: 1.19.4
kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.4",
    "gitCommit": "d360454c9bcd1634cf4cc52d1867af5491dc9c5f",
    "gitTreeState": "clean",
    "buildDate": "2020-11-12T01:09:16Z",
    "goVersion": "go1.15.4",
    "compiler": "gc",
    "platform": "darwin/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.4",
    "gitCommit": "d360454c9bcd1634cf4cc52d1867af5491dc9c5f",
    "gitTreeState": "clean",
    "buildDate": "2020-11-11T13:09:17Z",
    "goVersion": "go1.15.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
Installation method: Minikube
Host OS: macOS 10.15.7
I checked one of the pods first:
kubectl describe pod app-5b5c586456-7n4wh -n smt-local
Name:           app-5b5c586456-7n4wh
Namespace:      smt-local
Priority:       0
Node:           <none>
Labels:         app=web
                pod-template-hash=5b5c586456
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/app-5b5c586456
Containers:
  laravel:
    Image:      smart48/smt-laravel:latest
    Port:       9000/TCP
    Host Port:  0/TCP
    Limits:
      cpu:  500m
    Requests:
      cpu:  250m
    Environment:  <none>
    Mounts:
      /data/smtapp from code-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cp4nw (ro)
  nginx:
    Image:        smart48/smt-nginx:latest
    Port:         9376/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /data/nginx/config from nginx-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cp4nw (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  code-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  code-pv-claim
    ReadOnly:   false
  nginx-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  code-pv-claim
    ReadOnly:   false
  default-token-cp4nw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cp4nw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  47s (x19 over 22m)  default-scheduler  running "VolumeBinding" filter plugin for pod "app-5b5c586456-7n4wh": could not find v1.PersistentVolume "pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e"
and saw that the persistent volume could not be found for this deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: smt-local
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: laravel
          image: smart48/smt-laravel:latest
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          volumeMounts:
            - name: code-storage
              mountPath: /data/smtapp
            # examples
            # - name: orientdb-config
            #   mountPath: /data/orientdb/config
            # - name: orientdb-databases
            #   mountPath: /data/orientdb/databases
            # - name: orientdb-backup
            #   mountPath: /data/orientdb/backup
        - name: nginx
          image: smart48/smt-nginx:latest
          ports:
            - containerPort: 9376
          volumeMounts:
            # example
            - name: nginx-config
              mountPath: /data/nginx/config
      volumes:
        - name: code-storage
          persistentVolumeClaim:
            claimName: code-pv-claim
        - name: nginx-config
          persistentVolumeClaim:
            claimName: code-pv-claim
        # examples
        # - name: orientdb-config
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim
        # - name: orientdb-databases
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim
        # - name: orientdb-backup
        #   persistentVolumeClaim:
        #     claimName: orientdb-pv-claim
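Since the scheduler event names the exact PersistentVolume it can no longer find, a first check (plain kubectl; PVs are cluster-scoped, so no namespace flag is needed) is to list the volumes and compare them against the name from the event:

# list all PersistentVolumes and look for "pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e"
kubectl get pv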
I then checked the persistent volume claims:
➜ smt-deploy git:(main) ✗ kubectl get pvc -n smt-local
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
code-pv-claim    Lost     pvc-fb51a73b-9a33-4ccd-8552-27b887ff145e   0                         standard       21h
mysql-pv-claim   Lost     pvc-2c47dbca-f236-4962-a67c-b50f7a2e9cef   0                         standard       22h
redis-pv-claim   Bound    pvc-f7329ae0-1e03-4e16-8e1e-7ebc3de9b591   5Gi        RWO            standard       29m
Two of these I had set up again just this morning, after removing everything yesterday, yet they show up with status Lost.
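If I understand the Lost status correctly, the claim still references a volume object that no longer exists, so describing it should show the stale binding and any related events (standard kubectl, nothing project-specific):

kubectl describe pvc code-pv-claim -n smt-local
kubectl get events -n smt-local --sort-by=.metadata.creationTimestamp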
My pvc.yaml is:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pv-claim
  namespace: smt-local
  labels:
    type: code
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
and my persistent volume pv.yaml is:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
  namespace: smt-local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    # minikube is configured to persist files stored under /data/ and a few
    # other directories such as /var/lib/minikube in the _vm_
    path: /data
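For reference, this is how I understand a claim can be pinned to a pre-created PV so that the default "standard" provisioner stays out of the way: give both the same storageClassName, and optionally set volumeName on the claim. A minimal untested sketch; the "manual" class name is my own assumption, not something from my repo:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
spec:
  storageClassName: manual # assumption: opt out of the default "standard" class
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 200Mi
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pv-claim
  namespace: smt-local
spec:
  storageClassName: manual # must match the PV's class for binding
  volumeName: code-pv # pin the claim to this exact PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi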
All of this can also be seen at https://github.com/smart48/smt-deploy/tree/main/local by the way.
The two claims show as Lost because yesterday I removed the volumes completely, while this morning I ran the manifests again to recreate them. So why do the claims not simply rebind to the newly created volumes?
What am I missing here? How can I make the application mount the volume properly on my Minikube cluster?