Mount a whole NFS share as a PV/PVC

Cluster information:

Kubernetes version: 1.29.5 (k3s)
Cloud being used: Bare Metal
Installation method: Automated install script (https://get.k3s.io)
Host OS: 2x Armbian 24
CNI and version: Flannel as bundled in k3s
CRI and version: Containerd (bundled) 1.17.5

Hello there!

I have a two-node cluster and I want to migrate more of my old Docker Compose deployments into k3s. One of them is Jellyfin. Naturally, Jellyfin needs access to my media; in the same namespace, I also plan to add TubeArchivist, which needs deterministic paths for things to work out (the Jellyfin and TA paths must match).

So I tried to create a PV and PVC that I can reuse across all the deployments in that namespace, mounting the entire NFS share as a volume so the paths are the same in every deployment that needs them. I will later refine that into using subPath, but for now I just want to mount the entire share.

For this purpose, I made this example deployment:

apiVersion: v1
kind: Namespace
metadata:
  name: jellyfin
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bunker-generic-pv
  labels:
    store: bunker
spec:
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Ti
  nfs:
    server: 100.64.0.11
    path: /mnt/vol1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bunker-generic-pvc
  namespace: jellyfin
spec:
  selector:
    matchLabels:
      store: bunker
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-mounter
  namespace: jellyfin
  labels:
    app: example-mounter
spec:
  selector:
    matchLabels:
      app: example-mounter
  template:
    metadata:
      labels:
        app: example-mounter
    spec:
      volumes:
        - name: bunker-generic-vol
          persistentVolumeClaim:
            claimName: bunker-generic-pvc
      containers:
        - name: app
          image: alpine:latest
          command: ["/bin/sh", "-c", "sleep infinity"]
          volumeMounts:
            - mountPath: /mnt/vol1
              name: bunker-generic-vol

But when deployed, I see this in my events:

jellyfin      0s (x19 over 4m17s)     Normal    ExternalProvisioning   PersistentVolumeClaim/bunker-generic-pvc   Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

I am a little confused about what I did wrong; there should be no storage class required, it’s plain NFS configured directly through .spec on the PV, and the PVC is just meant to “link” against that. So, I assume I must have gotten something quite wrong :slight_smile:

Can you help me get this correct?

Thank you and kind regards,
Ingwie

The solution was to set the PVC’s .spec.storageClassName to an empty string, literally. With the field left out, k3s’ default StorageClass (local-path) gets applied to the claim and Kubernetes waits for dynamic provisioning; an explicit empty string disables that, so the claim binds statically to an existing PV that has no storage class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bunker-generic-pvc
  namespace: jellyfin
spec:
  storageClassName: ""
  selector:
    matchLabels:
      store: bunker
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Ti

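For reference, the matching PV works here because it has no storageClassName of its own. If you want to make the static binding explicit, it should (as far as I understand, this is optional and not part of the required fix) also be fine to set the empty string on the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: bunker-generic-pv
  labels:
    store: bunker
spec:
  storageClassName: ""   # explicit "no class", matches the PVC above
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Ti
  nfs:
    server: 100.64.0.11
    path: /mnt/vol1
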
I can’t seem to “get rid” of the storage capacity/request fields, even though the values mean very little here and could apparently even be 0, since Kubernetes doesn’t really check capacity for externally provisioned storage like NFS … at least, according to ChatGPT. I had to ask it; I didn’t find anyone who could answer me this in several attempts. ^^;

To save those who run into this too, the above deployment with the included “fix” will properly mount the entire NFS share as intended.

subPath works too, I am quite sure, so this makes mapping shared folders a little more “clean”; a sketch of what that could look like follows below.
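
Something like this Pod is roughly what I mean (a minimal sketch only; the media subdirectory on the share and the /media mount path are hypothetical examples, adjust them to your own layout):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-example
  namespace: jellyfin
spec:
  volumes:
    - name: bunker-generic-vol
      persistentVolumeClaim:
        claimName: bunker-generic-pvc
  containers:
    - name: app
      image: alpine:latest
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /media          # path inside the container
          name: bunker-generic-vol
          subPath: media             # hypothetical subdirectory on the NFS share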