Hello, I have a question about the new(ish) topology-aware volume scheduling feature.
This is what I (think I) understand: when a pod is configured with restrictions on where it can be deployed, the PVC is now aware of those restrictions, and the volume is only provisioned (with matching topology constraints) once the pod is actually scheduled.
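For context, my understanding is that this behavior is opted into via volumeBindingMode: WaitForFirstConsumer on the StorageClass. A minimal sketch of what I mean (the class name is made up, not from a real cluster):

```yaml
# Hypothetical StorageClass illustrating delayed, topology-aware binding.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-topology-aware   # placeholder name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# Delay binding/provisioning until a pod using the PVC is scheduled,
# so the volume is created in the zone the scheduler actually picked.
volumeBindingMode: WaitForFirstConsumer
```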
This is what I’m not sure of: let’s say I have a multi-AZ cluster running, and I deploy a StatefulSet with an EBS-backed persistent volume claim template. One of the pods (and its associated PV) gets deployed to a node in AZ A. Later, the pod dies for whatever reason. Since the EBS volume was provisioned in AZ A, will the scheduler be smart enough to restrict the replacement pod to AZ A, since that’s where its PV lives? Basically, I just don’t want the scheduler to put the replacement pod on a node in a different AZ, where it would be unable to mount its PV.
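If it helps frame the question: my assumption is that the dynamically provisioned PV ends up with a node affinity pinning it to its zone, something like the excerpt below (the zone value is illustrative, and I believe 1.14 still uses the beta failure-domain label):

```yaml
# Hypothetical excerpt of a dynamically provisioned PV's spec.
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a   # made-up zone for illustration
```

So really I’m asking whether the scheduler honors that nodeAffinity when placing the replacement pod.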
Thanks!
Cluster information:
Kubernetes version: 1.14.8
Cloud being used: AWS, Azure
Installation method: Both AKS/EKS and Kops
Host OS: Linux