NFS dynamic provisioning problems

I am following this guide and doing fine so far, but when I create a PVC it does not show up in my NFS share, even though the PVC itself is created. I have tested the nodes, and they do have RW permissions to the NFS share.

So when I try to test the provisioning, I get this error:

```
MountVolume.SetUp failed for volume "pvc-427e53bf-70bb-11e9-8990-525400a513ae" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae --scope -- mount -t nfs 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae
Output: Running scope as unit: run-r68af7a0af3c3404eb50d1e9baf90632d.scope
mount.nfs: mounting 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae failed, reason given by server: No such file or directory
```
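The "failed, reason given by server: No such file or directory" part makes me think the provisioner never created the per-PVC subdirectory under the export, so I have been poking at the PVC events, the provisioner's own logs, and the share itself, roughly like this (assuming everything is in the default namespace, as in the manifests below):

```
# Events on the claim show which PV it bound to and any provisioning errors
kubectl describe pvc pvc3

# The provisioner logs show whether it actually tried to create the subdirectory
kubectl logs deploy/nfs-client-provisioner

# From a node: confirm the export is visible and list what is really in the share
showmount -e 11.0.0.75
ls /var/nfsshare
```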

These are my yamls:


```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```
```
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
```
```
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 11.0.0.75
            - name: NFS_PATH
              value: /var/nfsshare
      volumes:
        - name: nfs-client-root
          nfs:
            server: 11.0.0.75
            path: /var/nfsshare
```
```
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: host-volume
      persistentVolumeClaim:
        claimName: pvc3
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: host-volume
          mountPath: /mydata
```
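For completeness, this is roughly how I exercise the claim end to end (the file names are just what I call the manifests locally):

```
# Create the claim and the test pod, then watch the claim bind
kubectl apply -f pvc3.yaml
kubectl apply -f busybox.yaml
kubectl get pvc pvc3 --watch

# Write through the mount, then look for the file under the share on the NFS server
kubectl exec busybox -- sh -c 'echo hello > /mydata/test.txt'
ls /var/nfsshare/default-pvc3-*/
```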

EDIT: Cleaning up
EDIT: Case closed. I had a conflicting provisioner.

Anyone know how to format posts on this forum?

It uses markdown, so you can put the output in a code fence:

```
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
  namespace: default
```

On further digging, I found that Kubernetes thinks it has made a PV, but there is nothing at the specified path on the NFS server.
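Roughly what I compared (the PV name is the one from the mount error above):

```
# Path the dynamically created PV points at
kubectl get pv pvc-427e53bf-70bb-11e9-8990-525400a513ae -o jsonpath='{.spec.nfs.path}{"\n"}'

# On the NFS server: that directory simply is not there
ls -ld /var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae
```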

Case closed! I found I had an old provisioner giving out conflicting information. After I deleted it, it all works :slight_smile:
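In case anyone else runs into this: two provisioner deployments registered under the same provisioner name (example.com/nfs here) can both try to serve the same claims, so it is worth checking what is actually running and which provisioner each StorageClass points at. Something like:

```
# Find every provisioner deployment, in any namespace
kubectl get deploy --all-namespaces | grep -i provision

# The PROVISIONER column should match exactly one running deployment's PROVISIONER_NAME
kubectl get storageclass
```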
