I am using Google Cloud and have set up a GKE cluster where I installed an NFS server. I created a PersistentVolume (PV) for the NFS server and then defined a PersistentVolumeClaim (PVC) that binds to it; the PV points at the NFS server by IP address. Finally, I deployed a pod from client-deployment.yaml that mounts the NFS server's exported /data volume.
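For context, I applied the manifests in roughly this order (file names as listed below):

kubectl apply -f nfs-server.yaml
kubectl apply -f nfs-pv.yaml
kubectl apply -f nfs-pvc.yaml
kubectl apply -f client-deployment.yaml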
We are experiencing an issue where the NFS server is running correctly, but the client pod defined in client-deployment.yaml fails to start. Below are the details of each configuration file along with the error logs; the NFS server's own logs are attached separately.
Error Logs
When applying client-deployment.yaml, the pod fails to start and the kubelet reports the following events:
Warning FailedMount 3m13s (x9 over 21m) kubelet MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: mount -t nfs 34.118.233.16:/data /var/lib/kubelet/pods/6e602811-88b7-40d8-8e66-1e028dd0e919/volumes/kubernetes.io~nfs/nfs-volume
Output: Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 34.118.233.16:/data /var/lib/kubelet/pods/6e602811-88b7-40d8-8e66-1e028dd0e919/volumes/kubernetes.io~nfs/nfs-volume]
Output: mount.nfs: Connection timed out
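Since the failure is a connection timeout rather than a permission error, a quick reachability test of the NFS port from inside the cluster may help narrow things down. A minimal sketch, assuming an image that ships nc (nicolaka/netshoot is one option; the pod name is arbitrary):

kubectl run nfs-debug --rm -it --image=nicolaka/netshoot -- nc -zv -w 5 34.118.233.16 2049

If this also times out, the problem is network reachability to the server (pod IP vs. Service IP, firewall rules, or the server not listening on 2049) rather than anything in the client pod's mount configuration.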
Configuration Files
nfs-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs4-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      initContainers:
        - name: init-modules
          image: busybox
          command: ["/bin/sh", "-c", "modprobe nfs && modprobe nfsd && modprobe rpcsec_gss_krb5"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: modules
              mountPath: /lib/modules
              readOnly: true
      volumes:
        - name: data
          gcePersistentDisk:
            pdName: gce-nfs-disk
            fsType: ext4
        - name: modules
          hostPath:
            path: /lib/modules
      containers:
        - name: server
          image: erichough/nfs-server
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /data
              name: data
            - mountPath: /lib/modules
              name: modules
          env:
            - name: NFS_DISABLE_VERSION_3
              value: "yes"
            - name: NFS_LOG_LEVEL
              value: DEBUG
            - name: NFS_SERVER_THREAD_COUNT
              value: "6"
            - name: NFS_EXPORT_0
              value: /data *(rw,sync,fsid=0,crossmnt,no_subtree_check,no_root_squash)
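Note that there is no Service manifest among my files; the PV below addresses the server by IP directly, and if that IP is the pod IP it will change whenever the Deployment reschedules. For reference, a stable ClusterIP Service in front of the server would look roughly like this (a sketch; the Service name is my assumption, 2049 is the standard NFSv4 port, and the selector reuses the label from the Deployment above):

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049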
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 34.118.233.16
    path: /data
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage-class
nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-storage-class
  volumeName: nfs-pv
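The PV/PVC binding itself can be double-checked with the resource names above (standard kubectl; the STATUS column should show Bound for both):

kubectl get pv nfs-pv
kubectl get pvc my-pvc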
client-deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: nfs-test-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /mnt/nfs
          name: nfs-volume
  volumes:
    - name: nfs-volume
      nfs:
        server: 34.118.226.147 # Replace with the IP address of your NFS server
        path: /data
        readOnly: false
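Note that this pod mounts the NFS export inline via an nfs: volume rather than going through the PVC defined above. For comparison, mounting through the claim instead would look roughly like this (a sketch reusing the claim name from nfs-pvc.yaml; only the volumes section changes):

  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: my-pvc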
Please help me investigate why the NFS share fails to mount in the client pod, and suggest any possible solutions or workarounds. Thank you!