Use NFS for Persistent Volumes on MicroK8s

Maybe this link will help you find some pointers for your questions.


What is the microk8s command in this part?

Hi, sorry I'm not sure about the question. Which part are you having difficulty with?

I mean that throughout this discussion there are many commands beginning with microk8s, such as microk8s kubectl describe pvc my-pvc or microk8s kubectl apply -f - < pvc-nfs.yaml. So what is this command, and how do I install it in my cluster?

Hi, this discussion particularly relates to using NFS with MicroK8s.
If you install MicroK8s, the microk8s command will be available.
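For example, on a machine with snap available, installing MicroK8s is a single command; the follow-up checks below are the usual way to confirm the cluster is up (shown as a sketch, pick whichever channel you need):

sudo snap install microk8s --classic
microk8s status --wait-ready
microk8s kubectl get nodes

After that, any kubectl command from this discussion can be run by prefixing it with microk8s.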

Error in pvc-nfs from the installation guide

[node@master test]$ kubectl describe pvc my-pvc
Name:          my-pvc
Namespace:     default
StorageClass:  nfs-csi
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age                  From                                                       Message
  ----     ------              ----                 ----                                                       -------
  Normal   Provisioning        52s (x7 over 2m55s)  nfs.csi.k8s.io_node5_865e9280-866f-4bb6-a2b8-484cf33adb80  External provisioner is provisioning volume for claim "default/my-pvc"
  Warning  ProvisioningFailed  42s (x7 over 2m45s)  nfs.csi.k8s.io_node5_865e9280-866f-4bb6-a2b8-484cf33adb80  failed to provision volume with StorageClass "nfs-csi": rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o hard,nfsvers=3 10.164.151.1:/srv/nfs/kube /tmp/pvc-5de49579-65e4-456c-b0d9-36fe91f0aebf
Output: /usr/sbin/start-statd: 10: cannot create /run/rpc.statd.lock: Read-only file system
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
  Normal  ExternalProvisioning  12s (x13 over 2m56s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator

But I've tried just the mount command on its own and it works:

[node@master ~]$ sudo mount -t nfs -o hard,nfsvers=3 10.164.151.1:/srv/nfs/kube nfs
[node@master ~]$ mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
10.164.151.1:/srv/nfs/kube on /home/node/nfs type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.164.151.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.164.151.1)
[node@master ~]$ 

Can anyone help me, please?
I've tried this add-on with NFS too and it works normally, but I need to use my own NFS server:

[node@master test]$ kubectl get pvc

NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc    Pending                                                                        nfs-csi        11m
pvc-nfs   Bound     pvc-53fc6f1e-13a2-41ae-ba3a-7d527a049e33   100Mi      RWX            nfs            8s
[node@master test]$ 
[node@master test]$ kubectl describe pvc pvc-nfs
Name:          pvc-nfs
Namespace:     default
StorageClass:  nfs
Status:        Bound
Volume:        pvc-53fc6f1e-13a2-41ae-ba3a-7d527a049e33
Labels:        vol=pvc-nfs
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-server-provisioner
               volume.kubernetes.io/storage-provisioner: cluster.local/nfs-server-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age                From                                                                                                Message
  ----    ------                 ----               ----                                                                                                -------
  Normal  ExternalProvisioning   25s (x2 over 25s)  persistentvolume-controller                                                                         waiting for a volume to be created, either by external provisioner "cluster.local/nfs-server-provisioner" or manually created by system administrator
  Normal  Provisioning           24s                cluster.local/nfs-server-provisioner_nfs-server-provisioner-0_56c197ea-5d92-4dd2-9132-6e02ac77c306  External provisioner is provisioning volume for claim "default/pvc-nfs"
  Normal  ProvisioningSucceeded  23s                cluster.local/nfs-server-provisioner_nfs-server-provisioner-0_56c197ea-5d92-4dd2-9132-6e02ac77c306  Successfully provisioned volume pvc-53fc6f1e-13a2-41ae-ba3a-7d527a049e33
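Going back to the failing nfs-csi claim: the mount output above says rpc.statd could not start (its lock file lives on a read-only filesystem) and suggests keeping locks local with nolock. One way to pass that option is through the StorageClass mountOptions. This is only a sketch, reusing the server and share from the error output; adjust both to your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.164.151.1
  share: /srv/nfs/kube
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nolock
  - nfsvers=3

Another route is to mount with nfsvers=4.1 instead, since NFSv4 handles locking in-protocol and does not depend on rpc.statd.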

Do we have any solution for this? If yes, could you suggest one?
Right now the mounted storage's owner is nobody:nobody (UID=65534) with permission 775,
and inside the pod I cannot change the owner of the storage.
I want to use a non-root user for the processes in the pod, for security.
As a result, processes in the pod don't have permission to write to the NFS storage.
For this case, is there any way to avoid this permission problem?
May I have your opinion on it?

Assume that I cannot change the NFS server side configuration (owner=nobody and permission=775) for security reasons, and that the pod side has to use a non-root user for its processes as well.

Hi, sorry for the delayed response; for some reason I didn't see this mention.

The driver is an upstream project. I think the idea of non-anonymous access has been brought up before, but I don't believe it has been implemented.

e.g.
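One workaround that might be worth trying, given that the export is group-writable (mode 775, group nobody): keep the pod non-root but add GID 65534 as a supplemental group, so writes go through the group permission bit. This is only a sketch, not something confirmed in this thread; the image, claim name and UID are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-writer
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000              # placeholder non-root UID
    supplementalGroups: [65534]  # the "nobody" group that owns the export
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Whether this helps depends on the server's squash settings (with all_squash, every request is mapped to nobody regardless of the pod's IDs).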

Hi

Thanks for this article.

I found it, and the discussion, very helpful in my troubleshooting efforts, specifically the posts regarding dynamic provisioning vs. static NFS PVCs.

I am posting this comment in the hope that it may assist others.
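Since the thread keeps contrasting the two approaches, here is roughly what static provisioning looks like with this driver: a PersistentVolume you create yourself against nfs.csi.k8s.io, plus a claim that binds to it by name. Server, share and sizes below are placeholders (a sketch, not taken from the guide):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-static
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-server.example.com/srv/nfs/static-1   # any cluster-unique ID
    volumeAttributes:
      server: nfs-server.example.com
      share: /srv/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-static
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-nfs-static
  resources:
    requests:
      storage: 10Gi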

With static provisioning, I am getting an error regarding the /var/snap/ directory being read-only:

Warning  FailedMount  3m32s (x448 over 14h)  kubelet  MountVolume.SetUp failed for volume "pvc-1c0b7728-5ebd-4bb0-85ba-0f0877ab9080" : rpc error: code = Internal desc = mkdir /var/snap: read-only file system

I see from the driver source https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/latest/csi-driver-nfs/templates/csi-nfs-node.yaml that the kubeletDir/pods, kubeletDir/plugins/csi-nfsplugin and kubeletDir/plugins_registry paths are referenced, but all of them are accessible only by root.

On the other hand, I did not experience this with dynamic provisioning.

A couple of other comments I can make based on what I have found during troubleshooting:

  • I know this may be NFS-server specific, but I found (man 5 exports on Ubuntu 22.04) that sync and no_subtree_check are both defaults, so in my case I can remove the redundant options from the /etc/exports entry
  • My installation of MicroK8s (--channel=1.28/stable --classic) placed a symbolic link at /var/lib/kubelet pointing to /var/snap/microk8s/common/var/lib/kubelet, making (in my case) the overriding of kubeletDir redundant; see the sketch after this list for what that override looks like.
    Neither of these would be expected to cause a problem, but simplification is possible.
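For context, the kubeletDir override mentioned in the second bullet is the Helm value passed when installing the csi-driver-nfs chart on MicroK8s, along these lines (a sketch of the usual install, not a prescription):

microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
    --namespace kube-system \
    --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet

With the symlink described above in place, leaving kubeletDir at its default of /var/lib/kubelet resolves to the same location.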

Thanks again

Kind Regards
Martin

root@vmi1703919:~# microk8s kubectl logs --selector app=csi-nfs-node -n kube-system -c nfs
I0514 13:34:04.546980       1 server.go:117] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0514 13:34:05.197659       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:05.197683       1 utils.go:110] GRPC request: {}
I0514 13:34:05.199538       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:05.326910       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:05.326956       1 utils.go:110] GRPC request: {}
I0514 13:34:05.326987       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:06.328361       1 utils.go:109] GRPC call: /csi.v1.Node/NodeGetInfo
I0514 13:34:06.328384       1 utils.go:110] GRPC request: {}
I0514 13:34:06.328800       1 utils.go:116] GRPC response: {"node_id":"vmi1703919"}
I0514 13:34:52.604752       1 server.go:117] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0514 13:34:53.385357       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:53.385437       1 utils.go:110] GRPC request: {}
I0514 13:34:53.392769       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:53.443139       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:53.443295       1 utils.go:110] GRPC request: {}
I0514 13:34:53.443447       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:53.622122       1 utils.go:109] GRPC call: /csi.v1.Node/NodeGetInfo
I0514 13:34:53.622176       1 utils.go:110] GRPC request: {}
I0514 13:34:53.622298       1 utils.go:116] GRPC response: {"node_id":"vmi1703920"}
I0514 13:34:28.413032       1 server.go:117] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0514 13:34:28.987993       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:28.988013       1 utils.go:110] GRPC request: {}
I0514 13:34:28.992196       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:29.146458       1 utils.go:109] GRPC call: /csi.v1.Identity/GetPluginInfo
I0514 13:34:29.146568       1 utils.go:110] GRPC request: {}
I0514 13:34:29.146701       1 utils.go:116] GRPC response: {"name":"nfs.csi.k8s.io","vendor_version":"v4.7.0"}
I0514 13:34:29.718305       1 utils.go:109] GRPC call: /csi.v1.Node/NodeGetInfo
I0514 13:34:29.718513       1 utils.go:110] GRPC request: {}
I0514 13:34:29.718658       1 utils.go:116] GRPC response: {"node_id":"vmi1703921"}
root@vmi1703919:~# microk8s kubectl logs --selector app=csi-nfs-controller -n kube-system -c nfs

E0514 13:43:41.934825       1 utils.go:114] GRPC error: rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o hard,nfsvers=4.1 10.0.0.4:/srv/nfs /tmp/pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f
Output: mount.nfs: No route to host
I0514 13:43:49.938104       1 utils.go:109] GRPC call: /csi.v1.Controller/CreateVolume
I0514 13:43:49.938149       1 utils.go:110] GRPC request: {"capacity_range":{"required_bytes":5368709120},"name":"pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f","csi.storage.k8s.io/pvc/name":"my-pvc","csi.storage.k8s.io/pvc/namespace":"default","server":"10.0.0.4","share":"/srv/nfs"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["hard","nfsvers=4.1"]}},"access_mode":{"mode":7}}]}
I0514 13:43:49.938711       1 controllerserver.go:462] internally mounting 10.0.0.4:/srv/nfs at /tmp/pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f
I0514 13:43:49.938789       1 nodeserver.go:132] NodePublishVolume: volumeID(10.0.0.4#srv/nfs#pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f##) source(10.0.0.4:/srv/nfs) targetPath(/tmp/pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f) mountflags([hard nfsvers=4.1])
I0514 13:43:49.938856       1 mount_linux.go:218] Mounting cmd (mount) with arguments (-t nfs -o hard,nfsvers=4.1 10.0.0.4:/srv/nfs /tmp/pvc-2e7970ff-fe9a-4401-81cc-b2a134ebda1f)
root@vmi1703919:~# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=2.43 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.479 ms
64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=0.505 ms
64 bytes from 10.0.0.4: icmp_seq=4 ttl=64 time=0.473 ms
^C
--- 10.0.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3027ms
rtt min/avg/max/mdev = 0.473/0.972/2.434/0.843 ms
root@vmi1703919:~#
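One thing worth checking when ping answers but mount.nfs reports "No route to host": the NFS ports may be filtered even though ICMP is allowed. A quick check from the node, as a sketch (2049 is the NFSv4 port; rpcbind on 111 and mountd also matter for NFSv3):

nc -vz 10.0.0.4 2049     # is the NFS port reachable?
showmount -e 10.0.0.4    # does the server export the share?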

I actually have this exact same problem. Is there a solution to this, @evilnick? Currently, I can't seem to get nfs-csi PVs to work when the running service needs to change the filesystem owner at startup (like RabbitMQ's Cluster Operator, for example).

Sadly, that issue for the driver is still open. If anyone has found a useful workaround, I'm happy to include it in the docs.

So I "got around it", but I don't remember exactly how beyond a bunch of initial trial and error. I believe I had to change the permissions on the PVC directory after it was created by Kubernetes. Setting them on the root directory of the NFS mount path (e.g. /srv/nfs) doesn't look to be enough.
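Concretely, that adjustment would be done on the NFS server against the per-PVC subdirectory the provisioner creates. A sketch with placeholder path and IDs (match them to the pod's runAsUser/runAsGroup):

sudo chown -R 1000:1000 /srv/nfs/<pvc-directory>   # placeholder UID:GID and path
sudo chmod -R 775 /srv/nfs/<pvc-directory>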
