I’m seeing a situation where multiple volume types, on multiple nodes, are never able to mount a volume into a pod, so the pod stays in the ContainerCreating state indefinitely. Has anyone else seen this error? This is on kube 1.9.6.
1m 3m 2 nginx-27vmt.15814608484a7826 Pod Warning FailedMount kubelet, 172.20.34.232 Unable to mount volumes for pod "nginx-27vmt_jay-nfs(df6076e4-2b51-11e9-82f8-fa163e297169)": timeout expired waiting for volumes to attach/mount for pod "jay-nfs"/"nginx-27vmt". list of unattached/unmounted volumes=[my-pv-storage]
@jeefy and I have seen that, strictly with NFS, on Container Linux based hosts. We did not find the root cause (haven’t had time to dive deep), but a workaround was to mount a share once on the host directly; after that, all further mounts worked just fine. The quick solution for us was to add a systemd mount unit, along the lines of the sketch below.
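For reference, a minimal sketch of the kind of systemd mount unit we added; the NFS server address and paths here are placeholders, not our real values. Note that a systemd mount unit has to be named after its mount point, e.g. /etc/systemd/system/mnt-nfs-warmup.mount for Where=/mnt/nfs-warmup:

[Unit]
Description=Mount an NFS share once on the host (workaround for kubelet NFS mounts hanging)
After=network-online.target
Wants=network-online.target

[Mount]
# placeholder export; point this at any share on your NFS server
What=nfs-server.example.com:/exports/share
Where=/mnt/nfs-warmup
Type=nfs
Options=defaults

[Install]
WantedBy=multi-user.target

After enabling it with systemctl enable --now mnt-nfs-warmup.mount, further NFS mounts requested by kubelet worked for us.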
Cinder logs on OpenStack indicate that the volume is attached as “/dev/vdi” on minion-1, but the service is running on minion-2.
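In case it helps, a quick way to check where Cinder thinks the volume is attached (the volume ID is a placeholder):

openstack volume show <volume-id>

and look at the “attachments” field, which lists the server and device; in our case it showed minion-1 and /dev/vdi even though the pod was on minion-2.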
To recover the system we had to clear the Cinder DB cache and restart.
We kind of lost the logs since we reinstalled the server.
Any help would be appreciated.
The exact error message we are getting is:
Feb 8 11:30:08 k8s-minion-1 kubelet: E0208 11:30:08.962255 21297 pod_workers.go:186] Error syncing pod 46a0d40f-2b94-11e9-8db8-fa163e315068 ("eis-scu-75fc888b5d-lstnb_edison-core(46a0d40f-2b94-11e9-8db8-fa163e315068)"), skipping: timeout expired waiting for volumes to attach or mount for pod "edison-core"/"eis-scu-75fc888b5d-lstnb". list of unmounted volumes=[certs]. list of unattached volumes=[edison-core-vol certs default-token-qjdc5]
I was facing the same while trying out NFS, but for me it was my own mistake. The IP address of my NFS server changed during testing, and I was providing the old, incorrect IP address of the nfs-server to the PersistentVolume, roughly like this:
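This is roughly what the PersistentVolume looked like; the names, size, and address below are placeholders, the important part is spec.nfs.server, which has to be the NFS server’s current IP:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # this was pointing at the server's old IP, so the mount timed out
    # and the pod sat in ContainerCreating
    server: 10.0.0.5
    path: /exports/data

Once I corrected the server IP here, the mount went through and the pod started normally.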