Unable to attach or mount volumes: unmounted volumes - Container stuck in ContainerCreating state

I have a 3-node cluster, all Ubuntu 22.04, that was up and running fine until I started playing with storage.
I created another Ubuntu box and installed nfs-kernel-server on it, then installed nfs-common on all 3 nodes.
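
For reference, the server-side export looks roughly like this (the export options are an assumption; only the path /export/volumes is known for sure):

# /etc/exports on the storage box (options assumed, adjust to your setup)
/export/volumes *(rw,sync,no_subtree_check)

# re-export and confirm the export list is visible from a node
sudo exportfs -ra
showmount -e c1-storage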

I tested the mount from all 3 nodes with

sudo mount -t nfs4 c1-storage:/export/volumes /mnt/

and it works fine. I then created a PersistentVolume as well as a PersistentVolumeClaim pointing to it. When I issue

kubectl get PersistentVolume pv-nfs-data

I get the following, which seems OK:

NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pv-nfs-data   10Gi       RWX            Retain           Bound    default/pvc-nfs-data                           133m

When I issue

kubectl get PersistentVolumeClaim pvc-nfs-data

I get:

NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-data   Bound    pv-nfs-data   10Gi       RWX                           129m
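
Both objects report Bound. To see exactly which NFS server and path the bound PV carries (shown in the Source section of the output):

kubectl describe pv pv-nfs-data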

However, I then deploy a Pod with the following spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nfs-deployment
spec:  
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: webcontent
        persistentVolumeClaim:
          claimName: pvc-nfs-data
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: webcontent
          mountPath: "/usr/share/nginx/html/web-app"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nfs-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
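
For completeness, I apply the manifest the usual way (the filename here is just a placeholder):

kubectl apply -f nginx-nfs.yaml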

The Pod is stuck in ContainerCreating. When I issue

kubectl get pods

I get:

NAME                                   READY   STATUS              RESTARTS   AGE
nginx-nfs-deployment-b69b64f9b-ppvrl   0/1     ContainerCreating   0          34m

When I describe the Pod, I see a mount timeout: "timed out waiting for the condition".

 kubectl describe pod nginx-nfs-deployment-b69b64f9b-ppvrl
Name:             nginx-nfs-deployment-b69b64f9b-ppvrl
Namespace:        default
Priority:         0
Service Account:  default
Node:             kubernetes/192.168.1.31
Start Time:       Wed, 22 Mar 2023 17:11:29 +0000
Labels:           app=nginx
                  pod-template-hash=b69b64f9b
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/nginx-nfs-deployment-b69b64f9b
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html/web-app from webcontent (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8wzr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  webcontent:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs-data
    ReadOnly:   false
  kube-api-access-n8wzr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    35m                  default-scheduler  Successfully assigned default/nginx-nfs-deployment-b69b64f9b-ppvrl to kubernetes
  Warning  FailedMount  3m36s (x8 over 30m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[webcontent], unattached volumes=[kube-api-access-n8wzr webcontent]: timed out waiting for the condition
  Warning  FailedMount  79s (x7 over 33m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[webcontent], unattached volumes=[webcontent kube-api-access-n8wzr]: timed out waiting for the condition
  Warning  FailedMount  52s (x10 over 32m)   kubelet            MountVolume.SetUp failed for volume "pv-nfs-data" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 172.16.94.5:/export/volumes/pod /var/lib/kubelet/pods/574230b8-ce87-46df-8e0c-a7c243d7e228/volumes/kubernetes.io~nfs/pv-nfs-data
Output: mount.nfs: Connection timed out
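
Since the kubelet just shells out to mount, the failing call can be reproduced by hand on the worker node (the /mnt target is arbitrary; server, path, and fs type are taken from the event above):

sudo mount -t nfs 172.16.94.5:/export/volumes/pod /mnt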

So why does the mount work fine on the command line, and the storage shows as Bound in Kubernetes, but the Pod is stuck? Note that firewalls are disabled on all boxes.
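
For what it's worth, the kubelet's side of the failed mount attempts can also be inspected on the node (assuming a standard systemd-managed kubelet):

journalctl -u kubelet --no-pager | grep -i nfs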

Cluster information:

Kubernetes version: 1.26
Cloud being used: bare-metal
Installation method:
Host OS: Ubuntu 22.04
CNI and version: most recent
CRI and version: most recent


In the error message when I describe the Pod, I see

Mounting arguments: -t nfs 172.16.94.5:/export/volumes/pod /var/lib/kubelet/pods/3365154c…

The IP of the NFS file server is 192.168.1.36, not 172.16.94.5. Should I install something somewhere to make Pods use the real IP? I'm not sure, as this is the first time I'm using storage.
Where does this 172.16.94.5 come from, and why does the Pod use it, given that the YAML for the PV specifies:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.36
    path: "/export/volumes/pod"

Also, to check whether access is valid from Pods, I created a Praqma/Network-Multitool Pod and ran telnet against port 2049; the Pod was able to connect.
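
That check looked roughly like this (the Pod name is arbitrary; a successful telnet connection to port 2049 is what indicates the NFS port is reachable):

kubectl run multitool --image=praqma/network-multitool --restart=Never
kubectl exec -it multitool -- telnet 192.168.1.36 2049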

Hello @eliassal,
Have you been able to find a solution for this problem or identify the issue?
I’m facing the same problem.
Thank you