In-cluster NFS servers

Good day. I’m working with the nfs-server-provisioner (via its Helm chart, https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner), and have encountered an interesting issue. It seems that it only “works” if I set the “nfs” StorageClass as the default StorageClass in my cluster, i.e.:
helm install stable/nfs-server-provisioner --name k8s-nfs-server --set=storageClass.defaultClass=true
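For what it’s worth, here’s the sort of claim I’d expect to be able to make against the provisioner without the default-class flag, since the class can be named explicitly. This is a minimal sketch: “nfs” is the chart’s default class name, and the claim name and size here are made up.

# Minimal test claim against the provisioner's class; ReadWriteMany is the
# main reason to front storage with NFS in the first place.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim           # hypothetical name
spec:
  storageClassName: nfs          # the chart's default class name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF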

If I attempt a default installation (i.e., without making the class the default), or enable persistence, the provisioner never appears to come up correctly, e.g.:
helm install stable/nfs-server-provisioner --name k8s-nfs-server --set=storageClass.defaultClass=true,persistence.enabled=true,persistence.size=1Gi

In this case, the associated pod and PersistentVolumeClaim are eternally in a “pending” state, similar to this issue:
https://github.com/helm/charts/issues/12626
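For anyone reproducing this, the claim’s events usually say why binding stalls; this is plain kubectl, with the generated claim name substituted from the first command:

# List claims, then inspect the pending one (the name is whatever the
# chart's StatefulSet generated, e.g. data-<release>-...-0).
kubectl get pvc
kubectl describe pvc <claim-name>

My reading, which is an assumption on my part and not confirmed by the maintainers: with persistence.enabled=true, the server’s own data claim needs some other provisioner (or a pre-created PersistentVolume) to bind against, and if “nfs” is the cluster default, that claim can end up pointing back at the NFS class itself, which is a chicken-and-egg.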

Is there a straightforward/obvious solution to this issue? In short, I’d like to have a few pods act as in-cluster NFS servers, and for the storage they use to persist across reboots of the entire cluster. It seems that something’s amiss with respect to setting up persistence for this provisioner.
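For completeness, the two workarounds I’m experimenting with are sketched below. Both rest on assumptions: persistence.storageClass and the “-” (no-class) convention are how I read the chart’s values.yaml, “standard” is a stand-in for whatever class your cluster already provisions, and the PV name and hostPath are made up.

# Option A (assumes the chart exposes persistence.storageClass): back the
# NFS server's own data claim with a class the cluster can already provision.
helm install stable/nfs-server-provisioner --name k8s-nfs-server \
  --set=storageClass.defaultClass=true,persistence.enabled=true,persistence.size=1Gi,persistence.storageClass=standard

# Option B (single-node/dev only): pre-create a classless hostPath PV and
# request no class ("-" should render storageClassName: "" per the usual
# stable-chart convention), so the data claim binds to this PV.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-server-data          # hypothetical name
spec:
  capacity:
    storage: 1Gi                 # must cover persistence.size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /srv/nfs-server-data   # made-up host directory
EOF
helm install stable/nfs-server-provisioner --name k8s-nfs-server \
  --set=storageClass.defaultClass=true,persistence.enabled=true,persistence.size=1Gi,persistence.storageClass=-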

I attempted to reach out to the maintainers via the recommended Slack channel (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs), but can’t find the listed users (@smarterclayton, @childsb) in the Slack directory at all.

Linked: Discussion from Kubernetes.io: https://discuss.kubernetes.io/t/in-cluster-nfs-servers/7553
Linked: Slack Discussion: https://kubernetes.slack.com/archives/C09QZFCE5/p1565790548212000
Linked: Helm chart issue: https://github.com/helm/charts/issues/16323
Linked: Core issue: https://github.com/kubernetes-incubator/external-storage/issues/1205

Thank you.

Bump. Haven’t heard anything back from the Helm chart or project maintainers.

The project appears to be effectively dead, at least with respect to the features needed here.