Use NFS for Persistent Volumes

In this how-to we will explain how to provision NFS mounts as Kubernetes Persistent Volumes on MicroK8s.

1. Setup an NFS server

Caution: This section will show you how to configure a simple NFS server on Ubuntu for the purpose of this tutorial. This is not a production-grade NFS setup.

If you don’t have a suitable NFS server already, you can simply create one on a local machine with the following commands on Ubuntu:

sudo apt-get install nfs-kernel-server

Create a directory to be used for NFS:

sudo mkdir -p /srv/nfs
sudo chown nobody:nogroup /srv/nfs
sudo chmod 0777 /srv/nfs

Edit the /etc/exports file. Make sure that the IP addresses of all your MicroK8s nodes are able to mount this share. For example, to allow all IP addresses in the 10.0.0.0/24 subnet (replace with your own subnet):

sudo mv /etc/exports /etc/exports.bak
echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports
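The general format of an exports entry is path host(options). As a sketch, some common host specifications are shown below; the addresses are placeholders and should be replaced with your own:

# /etc/exports
/srv/nfs 10.0.0.5(rw,sync,no_subtree_check)      # a single client
/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)   # a whole subnet
/srv/nfs *(rw,sync,no_subtree_check)             # any host (test setups only)

After editing the file, sudo exportfs -ra re-exports the shares without restarting the server.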

Finally, restart the NFS server:

sudo systemctl restart nfs-kernel-server

For other operating systems, refer to their specific documentation for setting up an NFS server.

2. Install the CSI driver for NFS

We will use the upstream NFS CSI driver. First, we will deploy the NFS provisioner using the official Helm chart.

Enable the Helm3 addon (if not already enabled) and add the repository for the NFS CSI driver:

microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update

Then, install the Helm chart under the kube-system namespace with:

microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
    --namespace kube-system \
    --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet

After deploying the Helm chart, wait for the CSI controller and node pods to come up using the following kubectl command …

microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system

… which, once successful, will produce output similar to:

pod/csi-nfs-controller-67bd588cc6-7vvn7 condition met
pod/csi-nfs-node-qw8rg condition met

At this point, you should also be able to list the available CSI drivers in your Kubernetes cluster …

microk8s kubectl get csidrivers

… and see in the list:

NAME             ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
nfs.csi.k8s.io   false            false            false             <unset>         false               Persistent   39m

3. Create a StorageClass for NFS

Next, we will need to create a Kubernetes StorageClass that uses the CSI driver. Assuming you have configured an NFS share /srv/nfs and the address of your NFS server is 10.0.0.1 (replace with your server's address), create the following file:

# sc-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.1
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1

Note: The last line of the above YAML indicates a specific version of NFS. This should match the version of the NFS server being used - if you are using an existing service please check which version it uses and adjust accordingly.

Then apply it on your MicroK8s cluster with:

microk8s kubectl apply -f sc-nfs.yaml

4. Create a new PVC

The final step is to create a new PersistentVolumeClaim using the nfs-csi storage class. This is as simple as specifying storageClassName: nfs-csi in the PVC definition, for example:

# pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi

Then create the PVC with:

microk8s kubectl apply -f pvc-nfs.yaml

If everything has been configured correctly, you should be able to check the PVC…

microk8s kubectl describe pvc my-pvc

… and see that a volume was provisioned successfully:

Name:          my-pvc
Namespace:     default
StorageClass:  nfs-csi
Status:        Bound
Volume:        pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                 Age                    From                                                           Message
  ----     ------                 ----                   ----                                                           -------
  Normal   ExternalProvisioning   2m59s (x2 over 2m59s)  persistentvolume-controller                                    waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
  Normal   Provisioning           2m58s (x2 over 2m59s)  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  External provisioner is provisioning volume for claim "default/my-pvc"
  Normal   ProvisioningSucceeded  2m58s                  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  Successfully provisioned volume pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06

That’s it! You can now use this PVC to run stateful workloads on your MicroK8s cluster.
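As a quick check, you can mount the claim in a minimal pod; the pod name, image, and mount path below are illustrative:

# pod-nfs-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Anything the container writes under /data lands on the NFS share.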

Common Issues

The NFS CSI controller and node pods are getting stuck in `Pending` state

Make sure that you specify --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet when installing the Helm chart.

I created the nfs-csi storage class, but cannot provision volumes

Double-check that you have specified the NFS server IP address and share path correctly. Also, make sure that your MicroK8s node can mount NFS shares. If you are running a cluster, all MicroK8s nodes should be allowed to mount NFS shares.

Provisioning new volumes fails, but I've done everything else correctly

Check the logs of the nfs containers in the controller and node pods, using the following commands:

microk8s kubectl logs --selector app=csi-nfs-controller -n kube-system -c nfs
microk8s kubectl logs --selector app=csi-nfs-node -n kube-system -c nfs

The logs should help with debugging any issues.

You probably wanted something like this:

echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports

At final step during kubectl describe, I had a permission denied error. I resolved this by making sure my NFS server squash setting was set to “no mapping” instead of “squash all users to admin.”


Thank you for your information. I have a question about NFS PVs.

The mounted storage's owner is nobody:nobody (UID=65534) with permission 775, and in the pod I cannot change the owner of the storage. I want to use a non-root user for the processes in the pod for security, but as a result those processes don't have permission to write to the NFS storage. Is there any way to avoid this permission problem? May I have your opinion on it?

Assume that I cannot change the NFS server side configuration (owner=nobody and permission=775) for security reasons, and that the pod has to use a non-root user for its processes as well.

Thanks for the question. I’m going to talk to the rest of the MicroK8s team and I hope we can get back to you with some suggestions (which we can then add to the docs, so thanks!)

Hey @evilnick. I have a question regarding the use of CSI vs a static nfs PV for this tutorial.

I understand the NFS CSI driver offers dynamic PV provisioning, but if that's not needed for our use case, is it still recommended / supported to use a static NFS PV (as per the Kubernetes docs) in MicroK8s? We have tried to use a static NFS PV with MicroK8s and it works as long as nfs-common is installed on our Ubuntu host.

There are members on our team who think that using CSI is the only recommended/supported way of using NFS in microk8s because of this tutorial, and there’s no reasoning for the use of the CSI in this doc (which would be super helpful for the uninitiated like myself).

Thank you!

IMHO, though the static NFS PV still works, the use of CSI trumps it in the long term. Node migration/upgrade will certainly benefit from the use of CSI.

However, as long as kubernetes supports the static nfs pv, MicroK8s will always have it. It is a certified Kubernetes distro.

Thanks for the response @balchua1, really appreciate it. We have a somewhat unique use case where we’re shipping an appliance and allow customers to add data by adding the connection details to external NFS shares at runtime. Only a single pod will mount the volume to read some data. So I think a CSI is overkill for this, but maybe I don’t totally understand what the advantages are.

Would it be possible to add more details about CSI in this tutorial? Or maybe offer 2 options like 1. Mount NFS using a PV/PVC 2. Use a CSI in production because you need X,Y and Z
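For reference, a static NFS PersistentVolume (option 1 above) looks roughly like this, following the upstream Kubernetes docs; the server address and names are placeholders:

# pv-nfs-static.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.1
    path: /srv/nfs

A PVC can then bind to it by setting volumeName: static-nfs-pv, with no CSI driver involved; the node only needs the NFS client utilities (nfs-common on Ubuntu).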


Consider adding another troubleshooting item:

The StorageClass config in this document uses nfsvers=4.1 in the mountOptions. If you already have NFS storage configured and skip step one, this version should match your existing NFS storage system, which may use an older protocol version.

You can use the command rpcinfo <your_nfs_server_ip> to see what NFS versions your provider offers.

The StorageClass config must use a version your NFS provider offers, otherwise PersistentVolumeClaims will fail with the error mount.nfs: Protocol not supported.

I experienced this today when trying to connect my TrueNAS NFS with this storage class config, not realizing that my version of TrueNAS only supports NFS versions 2 and 3, but not 4. I had to change the config from nfsvers=4.1 to nfsvers=3 for the PVC to connect, mount, and provision successfully.
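Building on the report above, a StorageClass for an NFSv3-only server would keep everything else the same and adjust only the mount options:

mountOptions:
  - hard
  - nfsvers=3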

Hello @evilnick

I know this is a stupid question, however where is this installed on “2. Install the CSI driver for NFS”? I am assuming on one of the nodes (Master?) (One of the Workers?)

I have a 4 node cluster on Raspberry Pi 4’s and it is a Master w/3 Worker Nodes.

Thanks for your time and the great article,


Hi - did you get it working yet? The driver is just deployed to a pod, it shouldn’t matter which node

Hello evilnick,

I appreciate your response, I am going to try it this weekend. I will let ya know the results.

Thank you again,
