Multi-Attach error for volume "pvc…": Volume is already exclusively attached to one node and can't be attached to another

Kubernetes version: 1.22.2

Cloud Provider: vSphere version 6.7

Architecture:

3 Masters
15 Workers

What happened: One of the pods went down for some “unknown” reason, and when we tried to bring it back up, it could not attach its existing PVC. This happened only to this specific pod; all the other pods had no problems at all.

What did you expect to happen: Pods should automatically reattach their existing PVCs when rescheduled.

Validation (see the commands below for how the attachment can be inspected):

1. The connection to vSphere was validated, and we confirmed that the PVC exists.
2. The pod was restarted (StatefulSet, 1/1 replicas) to see if it would come back up and reattach the PVC, but without success.
3. The control-plane services (kube-controller-manager, kube-apiserver, etc.) were restarted.
4. Finally, all workers and masters were rebooted, also without success; each time the pod was launched it hit the same error: "Multi-Attach error for volume pvc… Volume is already exclusively attached to one node and can't be attached to another".
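For reference, one way to check which node Kubernetes still considers the volume attached to is to look at the PV, its VolumeAttachment object, and the pod events (the names in angle brackets are placeholders, not the real names from this cluster):

$ kubectl get pv | grep <pvc-name>               # find the PV bound to the claim
$ kubectl get volumeattachment | grep <pv-name>  # shows which node the PV is attached to
$ kubectl describe pod <pod-name>                # the Multi-Attach warning shows up under Events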

When I delete a pod and try to recreate it, I get this warning: "Multi-Attach error for volume pvc… The volume is already exclusively attached to a node and cannot be attached to another".

Anything else we need to know: I have a cluster with 3 masters and 15 workers.

Temporary resolution: Delete the existing PVC and launch the pod again so that a new PVC is created. Since the volume holds data, deleting the existing PVC is not an acceptable solution.
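As a rough sketch, and assuming the PVC comes from the StatefulSet's volumeClaimTemplate, that temporary workaround looks like this (placeholder names; note again that the data on the old volume is lost):

$ kubectl -n <namespace> delete pod <pod-name>
$ kubectl -n <namespace> delete pvc <pvc-name>
# the StatefulSet controller recreates the pod, and a fresh PVC is provisioned from the volumeClaimTemplate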



I have the same problem. When I shut down a host, the pod that was running on it gets rescheduled, but at first the PV is not mounted; it only recovers after about 6 minutes.
How can I skip the 6-minute wait?

k8s: v1.22.7-k3s1
Longhorn: v1.2.4
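The roughly 6-minute delay matches the attach/detach controller's force-detach timeout: it waits about 6 minutes for a clean unmount from the unreachable node before force-detaching the volume. One way to watch this happen, assuming standard CSI VolumeAttachment objects, is:

$ kubectl get volumeattachment -w   # the stale attachment is removed once the force-detach kicks in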

Simply scale your deployment down and then back up:

Scale down:
[local@jump Nilesh]$ oc -n <namespace> scale deployment <deployment-name> --replicas=0

Scale up:
[local@jump Nilesh]$ oc -n <namespace> scale deployment <deployment-name> --replicas=1

Pod status:
mysql-57 1/1 Running 8m38s
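Since the original poster is running a StatefulSet rather than a Deployment, the kubectl equivalent would be along these lines (placeholder names):

$ kubectl -n <namespace> scale statefulset <statefulset-name> --replicas=0
$ kubectl -n <namespace> scale statefulset <statefulset-name> --replicas=1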

Did the “unknown reason” by any chance happen to be a node shutdown? There is a fix in Kubernetes 1.25 for non-graceful node shutdown (link).
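For context, the non-graceful node shutdown handling still requires an administrator to mark the dead node as out of service; the taint looks roughly like this (node name is a placeholder):

$ kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

Once the taint is applied, pods stuck on the shut-down node are deleted and their volumes are detached, so they can attach to the new node instead of waiting out the force-detach timeout.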