In this how-to we will explain how to provision NFS mounts as Kubernetes Persistent Volumes on MicroK8s. If you run into difficulties, please see the troubleshooting section at the end!
Set up an NFS server
Caution: This section will show you how to configure a simple NFS server on Ubuntu for the purpose of this tutorial. This is not a production-grade NFS setup.
If you don’t have a suitable NFS server already, you can create one on a local Ubuntu machine. First, install the NFS server package:
sudo apt-get install nfs-kernel-server
Create a directory to be used for NFS:
sudo mkdir -p /srv/nfs
sudo chown nobody:nogroup /srv/nfs
sudo chmod 0777 /srv/nfs
Edit the /etc/exports file, making sure that the IP addresses of all your MicroK8s nodes are able to mount this share. For example, to allow all IP addresses in the 10.0.0.0/24 subnet:
sudo mv /etc/exports /etc/exports.bak
echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports
Finally, restart the NFS server:
sudo systemctl restart nfs-kernel-server
For other operating systems, refer to the NFS server documentation for your platform.
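Before moving on, it can be useful to confirm that the share is actually exported. The commands below are a quick check from any machine with the NFS client utilities installed (the server address is the example used throughout this guide; substitute your own):

```shell
# Install the NFS client utilities (Ubuntu/Debian)
sudo apt-get install -y nfs-common

# List the exports offered by the NFS server
# (replace 10.0.0.42 with your server's address)
showmount -e 10.0.0.42
```

If the export list shows /srv/nfs with your allowed subnet, the server side is ready.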
Install the CSI driver for NFS
We will use the upstream NFS CSI driver. First, we will deploy the NFS provisioner using the official Helm chart.
Enable the Helm3 addon (if not already enabled) and add the repository for the NFS CSI driver:
microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
Then, install the Helm chart under the kube-system namespace with:
microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
--namespace kube-system \
--set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
After deploying the Helm chart, wait for the CSI controller and node pods to come up using the following kubectl command …
microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system
… which, once successful, will produce output similar to:
pod/csi-nfs-controller-67bd588cc6-7vvn7 condition met
pod/csi-nfs-node-qw8rg condition met
At this point, you should also be able to list the available CSI drivers in your Kubernetes cluster …
microk8s kubectl get csidrivers
… and see nfs.csi.k8s.io in the list:
NAME             ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
nfs.csi.k8s.io   false            false            false             <unset>         false               Persistent   39m
Create a StorageClass for NFS
Next, we will need to create a Kubernetes StorageClass that uses the nfs.csi.k8s.io CSI driver. Assuming you have configured an NFS share /srv/nfs and the address of your NFS server is 10.0.0.42, create the following file:
# sc-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.42
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
Note: The last line of the above YAML indicates a specific version of NFS. This should match the version of the NFS server being used - if you are using an existing service please check which version it uses and adjust accordingly.
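If you are unsure which protocol versions your NFS server supports, on a Linux server running nfs-kernel-server you can inspect the kernel's list of enabled versions (this assumes the standard nfsd setup from the first section):

```shell
# Show the NFS protocol versions enabled on the server.
# Output like "-2 +3 +4 +4.1 +4.2" means v3 and v4.x are served
# while v2 is disabled; match nfsvers= in the StorageClass accordingly.
sudo cat /proc/fs/nfsd/versions
```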
Then apply it on your MicroK8s cluster with:
microk8s kubectl apply -f sc-nfs.yaml
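You can then confirm that the storage class has been registered (the nfs-csi name matches the manifest above):

```shell
# List the new storage class; it should show nfs.csi.k8s.io as its provisioner
microk8s kubectl get storageclass nfs-csi
```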
Create a new PVC
The final step is to create a new PersistentVolumeClaim using the nfs-csi storage class. This is as simple as specifying storageClassName: nfs-csi in the PVC definition, for example:
# pvc-nfs.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
Then create the PVC with:
microk8s kubectl apply -f pvc-nfs.yaml
If everything has been configured correctly, you should be able to check the PVC…
microk8s kubectl describe pvc my-pvc
… and see that a volume was provisioned successfully:
Name:          my-pvc
Namespace:     default
StorageClass:  nfs-csi
Status:        Bound
Volume:        pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age                    From                                                           Message
  ----    ------                 ----                   ----                                                           -------
  Normal  ExternalProvisioning   2m59s (x2 over 2m59s)  persistentvolume-controller                                    waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
  Normal  Provisioning           2m58s (x2 over 2m59s)  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  External provisioner is provisioning volume for claim "default/my-pvc"
  Normal  ProvisioningSucceeded  2m58s                  nfs.csi.k8s.io_andromeda_61e4b876-324d-4f52-a5c3-f26047fbbc97  Successfully provisioned volume pvc-5676d353-4d46-49a2-b7ff-bdd4603d2c06
That’s it! You can now use this PVC to run stateful workloads on your MicroK8s cluster.
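As a sketch of what such a workload might look like, the following launches a minimal test pod that mounts the claim (the pod name, container name, and image are illustrative, not part of this guide's setup):

```shell
# Launch a test pod that mounts the my-pvc claim at /data
# (nfs-test-pod and the busybox image are illustrative choices)
microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: test
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
EOF
```

Once the pod is Running, the file it writes should appear on the NFS server, in the subdirectory that the CSI driver provisioned for this volume under /srv/nfs.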
Common Issues
The NFS CSI controller and node pods are getting stuck in `Pending` state
Make sure that you specify --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
when installing the Helm chart.
I created the nfs-csi storage class, but cannot provision volumes
Double-check that you have specified the NFS server IP address and share path correctly. Also, make sure that your MicroK8s node can mount NFS shares. If you are running a cluster, all MicroK8s nodes should be allowed to mount NFS shares.
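A quick way to rule out connectivity or export problems is to mount the share manually from a MicroK8s node (this assumes the example server and share from earlier, and that nfs-common is installed on the node):

```shell
# Try mounting the NFS share manually on a MicroK8s node,
# using the same NFS version as the StorageClass mountOptions
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nfsvers=4.1 10.0.0.42:/srv/nfs /mnt/nfs-test

# If the mount succeeds, the CSI driver should be able to mount it too;
# clean up afterwards
sudo umount /mnt/nfs-test
```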
Provisioning new volumes fails, but I've done everything else correctly
Check the logs of the nfs containers in the controller and node pods, using the following commands:
microk8s kubectl logs --selector app=csi-nfs-controller -n kube-system -c nfs
microk8s kubectl logs --selector app=csi-nfs-node -n kube-system -c nfs
The logs should help with debugging any issues.