Use NFS for Persistent Volumes on MicroK8s

In this how-to we will explain how to provision NFS mounts as Kubernetes Persistent Volumes on MicroK8s. If you run into difficulties, please see the troubleshooting section at the end!

Set up an NFS server

Caution: This section will show you how to configure a simple NFS server on Ubuntu for the purpose of this tutorial. This is not a production-grade NFS setup.

If you don’t have a suitable NFS server already, you can create one on a local Ubuntu machine with the following commands:

sudo apt-get install nfs-kernel-server

Create a directory to be used for NFS:

sudo mkdir -p /srv/nfs
sudo chown nobody:nogroup /srv/nfs
sudo chmod 0777 /srv/nfs

Next, configure the /etc/exports file, making sure that the IP addresses of all your MicroK8s nodes are able to mount this share. For example, to allow all IP addresses in the 10.0.0.0/24 subnet (the first command backs up any existing exports file):

sudo mv /etc/exports /etc/exports.bak
echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports

Finally, restart the NFS server:

sudo systemctl restart nfs-kernel-server
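
To verify that the share is exported with the options you expect, list the active exports on the server:

sudo exportfs -v

This should show /srv/nfs together with the subnet and options configured above.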

For other operating systems, follow the relevant documentation to set up an NFS server.

Install the CSI driver for NFS

We will use the upstream NFS CSI driver. First, we will deploy the NFS provisioner using the official Helm chart.

Enable the Helm3 addon (if not already enabled) and add the repository for the NFS CSI driver:

microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
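
You can confirm that the repository was added with:

microk8s helm3 repo list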

Then, install the Helm chart under the kube-system namespace with:

microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
    --namespace kube-system \
    --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
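
To double-check that the kubeletDir override was applied (this is required for MicroK8s, whose kubelet lives inside the snap’s data directory), inspect the values of the release:

microk8s helm3 get values csi-driver-nfs --namespace kube-system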

After deploying the Helm chart, wait for the CSI controller and node pods to come up using the following kubectl command …

microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system

… which, once successful, will produce output similar to:

pod/csi-nfs-controller-67bd588cc6-7vvn7 condition met
pod/csi-nfs-node-qw8rg condition met

At this point, you should also be able to list the available CSI drivers in your Kubernetes cluster …

microk8s kubectl get csidrivers

… and see nfs.csi.k8s.io in the list:

NAME             ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
nfs.csi.k8s.io   false            false            false             <unset>         false               Persistent   39m

Create a StorageClass for NFS

Next, we will need to create a Kubernetes StorageClass that uses the nfs.csi.k8s.io CSI driver. Assuming you have configured an NFS share /srv/nfs and the address of your NFS server is 10.0.0.42, create the following file:

# sc-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.42
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1

Note: The last line of the above YAML pins a specific NFS version. This should match the version used by the NFS server; if you are using an existing service, please check which version it uses and adjust accordingly.
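
If you are reusing an existing NFS service and are unsure which protocol versions it offers, you can query the server directly (replace the address with that of your NFS server):

rpcinfo 10.0.0.42

If the version in mountOptions is not offered by the server, mounting volumes will fail with errors such as mount.nfs: Protocol not supported.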

Then apply it on your MicroK8s cluster with:

microk8s kubectl apply -f - < sc-nfs.yaml

Create a new PVC

The final step is to create a new PersistentVolumeClaim using the nfs-csi storage class. This is as simple as specifying storageClassName: nfs-csi in the PVC definition, for example:

# pvc-nfs.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi

Then create the PVC with:

microk8s kubectl apply -f - < pvc-nfs.yaml

If everything has been configured correctly, you should be able to check the PVC…

microk8s kubectl describe pvc my-pvc

… and see that a volume was provisioned successfully:

Name:          my-pvc
Namespace:     default
StorageClass:  nfs-csi
Status:        Bound
Volume:        pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                 Age                    From                                                           Message
  ----     ------                 ----                   ----                                                           -------
  Normal   ExternalProvisioning   2m59s (x2 over 2m59s)  persistentvolume-controller                                    waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
  Normal   Provisioning           2m58s (x2 over 2m59s)  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  External provisioner is provisioning volume for claim "default/my-pvc"
  Normal   ProvisioningSucceeded  2m58s                  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  Successfully provisioned volume pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06

That’s it! You can now use this PVC to run stateful workloads on your MicroK8s cluster.
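
For example, a minimal pod that writes to the volume could look like this (the pod name, image and mount path below are illustrative, not requirements of the driver):

# pod-nfs.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'echo hello > /data/hello.txt && sleep 3600']
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Create it with microk8s kubectl apply -f - < pod-nfs.yaml, and the file hello.txt should then appear under /srv/nfs on the NFS server.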

Common Issues

The NFS CSI controller and node pods are getting stuck in `Pending` state

Make sure that you specify --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet when installing the Helm chart.
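
If you have already installed the chart without this flag, you can correct the release in place with a helm upgrade, for example:

microk8s helm3 upgrade csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
    --namespace kube-system \
    --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet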

I created the nfs-csi storage class, but cannot provision volumes

Double-check that you have specified the NFS server IP address and share path correctly. Also, make sure that your MicroK8s node can mount NFS shares. If you are running a cluster, all MicroK8s nodes should be allowed to mount NFS shares.
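
A quick way to test this from a node, assuming the example server address used in this guide, is to mount the share manually:

sudo apt-get install nfs-common
sudo mount -t nfs 10.0.0.42:/srv/nfs /mnt
sudo umount /mnt

If the mount command fails, fix the NFS server configuration or the node's network access before retrying the PVC.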

Provisioning new volumes fails, but I've done everything else correctly

Check the logs of the `nfs` containers in the controller and node pods, using the following commands:

microk8s kubectl logs --selector app=csi-nfs-controller -n kube-system -c nfs
microk8s kubectl logs --selector app=csi-nfs-node -n kube-system -c nfs

The logs should help with debugging any issues.


You probably wanted something like this:

echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports

At the final step, during kubectl describe, I had a permission denied error. I resolved this by making sure my NFS server squash setting was set to “no mapping” instead of “squash all users to admin”.


Thank you for your information. I have a question about NFS PVs.

Currently the mounted storage’s owner is nobody:nobody (UID=65534) with permission 775, and inside the pod I cannot change the owner of the storage. For security, I would like to use a non-root user for the processes in the pod. As a result, the processes in the pod don’t have permission to write to the NFS storage. Is there any way to avoid this permission problem? May I have your opinion on it?

Assume that I cannot change the NFS server side configuration (owner=nobody and permission=775) for security reasons, and that the pod has to use a non-root user for its processes as well.

Thanks for the question. I’m going to talk to the rest of the MicroK8s team and I hope we can get back to you with some suggestions (which we can then add to the docs, so thanks!)

Hey @evilnick. I have a question regarding the use of CSI vs a static nfs PV for this tutorial.

I understand the NFS CSI driver offers dynamic PV provisioning, but if that’s not needed for our use case, is it still recommended/supported to use a static NFS PV (as per the Kubernetes docs) in MicroK8s? We have tried to use a static NFS PV with MicroK8s and it works as long as nfs-common is installed on our Ubuntu host.

There are members on our team who think that using CSI is the only recommended/supported way of using NFS in microk8s because of this tutorial, and there’s no reasoning for the use of the CSI in this doc (which would be super helpful for the uninitiated like myself).

Thank you!

IMHO, though the static NFS PV still works, the use of CSI in the long term trumps it. Node migration/upgrade will certainly benefit from the use of CSI.

However, as long as Kubernetes supports static NFS PVs, MicroK8s will have them too; it is a certified Kubernetes distro.
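
For reference, a minimal static NFS PersistentVolume (as described in the upstream Kubernetes docs) looks roughly like this; the server address and path are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.42
    path: /srv/nfs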

Thanks for the response @balchua1, really appreciate it. We have a somewhat unique use case where we’re shipping an appliance and allow customers to add data by adding the connection details to external NFS shares at runtime. Only a single pod will mount the volume to read some data. So I think a CSI is overkill for this, but maybe I don’t totally understand what the advantages are.

Would it be possible to add more details about CSI in this tutorial? Or maybe offer 2 options like 1. Mount NFS using a PV/PVC 2. Use a CSI in production because you need X,Y and Z

Thanks!

Consider adding another troubleshooting item:


The StorageClass config in this document uses nfsvers=4.1 in the mountOptions. If you already have NFS storage configured and skip step one, this version should match your existing NFS storage system, which may use an older protocol version.

You can use the command rpcinfo <your_nfs_server_ip> to see what NFS versions your provider offers.

The StorageClass config must use a version your NFS provider offers, otherwise PersistentVolumeClaims will fail with the error mount.nfs: Protocol not supported.


I experienced this today when trying to connect my TrueNAS NFS share with this storage class config, not realizing that my version of TrueNAS only supports NFS versions 2 and 3, but not 4. I had to change the config from nfsvers=4.1 to nfsvers=3 for the PVC to connect, mount, and provision successfully.
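
For anyone hitting the same issue, only the mountOptions section of the StorageClass needs to change, e.g.:

mountOptions:
  - hard
  - nfsvers=3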

Hello @evilnick

I know this is a stupid question, but where is this installed in “2. Install the CSI driver for NFS”? I am assuming on one of the nodes (the master? one of the workers?).

I have a 4 node cluster on Raspberry Pi 4’s and it is a Master w/3 Worker Nodes.

Thanks for your time and the great article,
Michael


Hi - did you get it working yet? The driver is just deployed to pods, so it shouldn’t matter which node you run the commands on.

Hello evilnick,

I appreciate your response, I am going to try it this weekend. I will let ya know the results.

Thank you again,


How am I supposed to know which IP my just-installed NFS server has?

@Diego_Morais setting up a production grade NFS server is outside the scope of this page. If you have followed step one in these instructions and set up a local server to test, it will be the address of the machine you ran that step on.

Thank you for the article.
I have tried to get this working in minikube version v1.30.1 (using the vbox and docker drivers) but am running into a few issues.

The first issue when the controller is deploying is:
Message: MountVolume.SetUp failed for volume “pods-mount-dir” : hostPath type check failed: /var/snap/microk8s/common/var/lib/kubelet/pods is not a directory
I checked older versions back to 1.20.1 but none of them have this directory.

I can change kubeletDir=/var/lib/kubelet for the chart installation, which then allows the pods to deploy correctly.

I can create the storage class.
But when I create the pvc I see this message:
Warning ProvisioningFailed 4s (x6 over 36s) nfs.csi.k8s.io_minikube_a8709991-6ee2-4464-b952-56e18580da4f failed to provision volume with StorageClass “nfs-csi”: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/pvc-7f6da8d9-0bb5-413d-b6e9-5adc4a4c2224/pvc-7f6da8d9-0bb5-413d-b6e9-5adc4a4c2224: read-only file system

Thanks for any thoughts on this.

I’m a little confused about which specific version of MicroK8s you are using.
The dir /var/snap/microk8s/common/var/lib/kubelet should definitely exist.

Using MicroK8s v1.27.2 revision 5372
The storage controller is deploying correctly in MicroK8s.
The Storage class is created OK.

But the PVC still shows:
Warning ProvisioningFailed 30s (x8 over 2m37s) nfs.csi.k8s.io_xxxxxx-XPS-15-7590_2f40d2e2-337b-4e2e-b951-acbd13989e55 failed to provision volume with StorageClass “nfs-csi”: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/pvc-ccc72e87-a353-4c9d-9b71-eca565b215ca/pvc-ccc72e87-a353-4c9d-9b71-eca565b215ca: read-only file system

Might be an issue with the nfs share rather than microk8s. Could you look at the logs:

microk8s kubectl logs --selector app=csi-nfs-controller -n kube-system -c nfs
microk8s kubectl logs --selector app=csi-nfs-node -n kube-system -c nfs

Thanks for having a look at this.
Double-checked the permissions on the NFS server and it is now working.


Hi @evilnick

Two basic questions, being almost new to NFS security. I know this is probably more on the NFS server side (not the point of the article), but I’d like a brief assessment of the settings you used for the NFS filesystem permissions (nobody:nogroup, 0777) and for the export (rw,sync,no_subtree_check). Are those needed to make the CSI driver work properly? Are they meant as a balance between usability and security?

Thank you very much,