Problem with dynamic provisioning

Hello,
I installed Kubernetes with kubeadm on my own cluster of 8 VMs: 2 HAProxy load balancers (with keepalived), 3 masters, 3 slaves, and a local registry. I installed Helm, and everything seems to work well:

NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-c6zdj                 1/1     Running   2          9d    10.244.4.6     slave2    <none>           <none>
kube-system   coredns-66bff467f8-wl8s4                 1/1     Running   2          9d    10.244.4.7     slave2    <none>           <none>
kube-system   etcd-master1                             1/1     Running   10         9d    192.168.0.81   master1   <none>           <none>
kube-system   etcd-master2                             1/1     Running   5          9d    192.168.0.82   master2   <none>           <none>
kube-system   etcd-master3                             1/1     Running   3          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-apiserver-master1                   1/1     Running   15         9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-apiserver-master2                   1/1     Running   8          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-apiserver-master3                   1/1     Running   6          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-controller-manager-master1          1/1     Running   9          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-controller-manager-master2          1/1     Running   7          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-controller-manager-master3          1/1     Running   3          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-flannel-ds-amd64-692cr              1/1     Running   3          9d    192.168.0.84   slave1    <none>           <none>
kube-system   kube-flannel-ds-amd64-crgrx              1/1     Running   3          9d    192.168.0.86   slave3    <none>           <none>
kube-system   kube-flannel-ds-amd64-g7ctn              1/1     Running   4          9d    192.168.0.85   slave2    <none>           <none>
kube-system   kube-flannel-ds-amd64-j6vwg              1/1     Running   4          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-flannel-ds-amd64-rtktw              1/1     Running   7          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-z6kn7              1/1     Running   6          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-proxy-7lgsb                         1/1     Running   3          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-proxy-9mxzh                         1/1     Running   3          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-proxy-b8xv9                         1/1     Running   2          9d    192.168.0.84   slave1    <none>           <none>
kube-system   kube-proxy-dl8fr                         1/1     Running   2          9d    192.168.0.83   master3   <none>           <none>
kube-system   kube-proxy-jmhqc                         1/1     Running   2          9d    192.168.0.85   slave2    <none>           <none>
kube-system   kube-proxy-vs4q4                         1/1     Running   2          9d    192.168.0.86   slave3    <none>           <none>
kube-system   kube-scheduler-master1                   1/1     Running   9          9d    192.168.0.81   master1   <none>           <none>
kube-system   kube-scheduler-master2                   1/1     Running   5          9d    192.168.0.82   master2   <none>           <none>
kube-system   kube-scheduler-master3                   1/1     Running   4          9d    192.168.0.83   master3   <none>           <none>
kube-system   tiller-deploy-5c4cfb859c-bglsb           1/1     Running   1          9d    10.244.5.3     slave3    <none>           <none>
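
For completeness: the API server is reached through the HAProxy/keepalived pair, so kubeadm was pointed at the load-balancer VIP rather than at a single master. A minimal sketch of such a kubeadm configuration (the VIP 192.168.0.80 below is a placeholder for illustration, not taken from my actual files):

# kubeadm-config.yaml (sketch only; the VIP address is a placeholder)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.1
# keepalived VIP fronting the three masters through HAProxy
controlPlaneEndpoint: "192.168.0.80:6443"
networking:
  # flannel's default pod network
  podSubnet: "10.244.0.0/16"

The cluster is then bootstrapped with kubeadm init --config kubeadm-config.yaml --upload-certs on the first master, followed by kubeadm join ... --control-plane on the other two.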

The next step was to add persistent storage. I followed this very precise tutorial: https://blog.exxactcorp.com/deploying-dynamic-nfs-provisioning-in-kubernetes/. My external NFS server is working and reachable from any node:

● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2020-04-24 16:52:00 CEST; 16h ago
 Main PID: 7058 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4701)
   Memory: 0B
   CGroup: /system.slice/nfs-server.service
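
For reference, this is the kind of check I mean by "reachable from any node" (it assumes nfs-common is installed on the nodes, which the kubelet also needs in order to mount NFS volumes):

# list the exports offered by the NFS server
showmount -e 192.168.0.87
# mount the export manually, write a test file, clean up
sudo mount -t nfs 192.168.0.87:/nfsVol /mnt
sudo touch /mnt/test-$(hostname) && ls -l /mnt
sudo umount /mnt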

The nfs-client-provisioner pod seems to be OK:

Name:         nfs-client-provisioner-9dfb69cdb-r29vg
Namespace:    default
Priority:     0
Node:         slave1/192.168.0.84
Start Time:   Fri, 24 Apr 2020 17:54:29 +0200
Labels:       app=nfs-client-provisioner
              pod-template-hash=9dfb69cdb
Annotations:  <none>
Status:       Running
IP:           10.244.3.6
IPs:
  IP:           10.244.3.6
Controlled By:  ReplicaSet/nfs-client-provisioner-9dfb69cdb
Containers:
  nfs-client-provisioner:
    Container ID:   docker://c3d88415320c067ea9d1a288f9a2cf02092ce475dafdbaaadc0d101c6346f956
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 24 Apr 2020 17:54:33 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  registry/nfsVol
      NFS_SERVER:        192.168.0.87
      NFS_PATH:          /nfsVol
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-5s49f (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.87
    Path:      /nfsVol
    ReadOnly:  false
  nfs-client-provisioner-token-5s49f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-5s49f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
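
For reference, the Deployment behind this pod follows the tutorial's deployment.yaml; this is a sketch reconstructed from the describe output above (the actual file may differ in details):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # must match the provisioner field of the StorageClass below
            - name: PROVISIONER_NAME
              value: registry/nfsVol
            - name: NFS_SERVER
              value: 192.168.0.87
            - name: NFS_PATH
              value: /nfsVol
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.87
            path: /nfsVol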

The resources from “rbac.yaml” have been created; the cluster role binding is in place:

NAME                                                   ROLE                                                                                AGE
run-nfs-client-provisioner                             ClusterRole/nfs-client-provisioner-runner                                          17h
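
For context, the rbac.yaml from the tutorial creates the service account the provisioner runs under, a cluster role with the permissions dynamic provisioning needs, and the binding shown above. A sketch of its core (the tutorial's file also adds a leader-election Role/RoleBinding):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io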

Unfortunately, dynamic provisioning fails. Even though the tutorial explains that the provisioner will create the persistent volume automatically, I created the storage class and the persistent volume claim, and the claim remains Pending. I tried creating the storage class both as default and as non-default, with the same result.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: registry/nfsVol
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

The resulting objects:

NAME                         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/pvc1   Pending                                      managed-nfs-storage   8h
NAME                                                        PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/managed-nfs-storage (default)   registry/nfsVol   Delete          Immediate           false                  8h
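
In case it helps with diagnosing, these are the places where a stalled provisioning usually shows its reason (the label selector matches the app=nfs-client-provisioner label shown above):

# events on the claim: "waiting for a volume to be created" vs. an actual error
kubectl describe pvc pvc1
# the provisioner logs every provisioning attempt and failure
kubectl logs -l app=nfs-client-provisioner
# sanity check: this must print exactly the PROVISIONER_NAME env value
kubectl get storageclass managed-nfs-storage -o jsonpath='{.provisioner}'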

I probably missed something important.
Thanks for any hint.
Henri

Cluster information:

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 18.04
CNI and version: flannel:v0.12.0-amd64 (?)
CRI and version: (?)


Update: I solved my problem using this guide.