Good day. I’m working with the nfs-server-provisioner (via its Helm chart, https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner), and have encountered an interesting issue. It seems that it only “works” if I set the “nfs” StorageClass as the default sc in my cluster, i.e.:
helm install stable/nfs-server-provisioner --name k8s-nfs-server --set=storageClass.defaultClass=true
If I attempt a default installation, or enable persistence, the provisioner never appears to be configured correctly, e.g.:
helm install stable/nfs-server-provisioner --name k8s-nfs-server --set=storageClass.defaultClass=true,persistence.enabled=true,persistence.size=1Gi
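For reference, here's roughly how I've been inspecting the stuck state (the PVC/pod names below are guesses based on the chart's StatefulSet naming convention; adjust for your release name):

```shell
# List claims -- with persistence.enabled=true the chart's StatefulSet creates
# its own PVC, which (if I understand correctly) must bind against some *other*
# existing StorageClass, not the "nfs" class the chart itself provides.
kubectl get pvc

# Inspect the provisioner's own claim for scheduling/binding events.
kubectl describe pvc data-k8s-nfs-server-nfs-server-provisioner-0

# Inspect the provisioner pod for the reason it is stuck in Pending.
kubectl describe pod k8s-nfs-server-nfs-server-provisioner-0
```

In my case the events just show the claim waiting for a volume to be provisioned or bound.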
In this case, the associated pod and PersistentVolumeClaim remain in a “Pending” state indefinitely, similar to this issue:
Is there a straightforward/obvious solution to this issue? In short, I’d like to have a few pods able to act as in-cluster NFS servers, with the storage they use persisting across reboots of the entire cluster. It seems that something’s amiss with respect to setting up persistence for this provisioner.
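For context, the end state I'm after is simply being able to create claims like the following against the chart's “nfs” StorageClass (names here are hypothetical; “nfs” is the chart's default StorageClass name), and have the backing data survive a full cluster reboot:

```yaml
# Hypothetical claim to be dynamically provisioned by nfs-server-provisioner.
# ReadWriteMany is the main reason for wanting NFS here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-nfs-claim
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```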
I attempted to reach out to the maintainers via the recommended Slack channel (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs), but can’t seem to find the users (@smarterclayton, @childsb) in the Slack directory at all.
Linked: Discussion from Kubernetes.io:
Linked: Slack Discussion:
Linked: Helm chart issue:
Linked: Core issue: https://github.com/kubernetes-incubator/external-storage/issues/1205