AWS: EBS & Zones


Cluster information:

Kubernetes version: 1.26
Cloud being used: (put bare-metal if not on a public cloud)
Installation method: EKS
Host OS:
CNI and version: Calico
CRI and version: Crio


EBS volumes are zonal and cannot be attached across availability zones. That means if a Pod in zone A crashes, it may be recreated on a node in zone B, where it cannot bind to its PVC and start. The EBS storage driver does not handle this on its own (only the Karpenter add-on does). For example, to schedule two Redis Pods on two different nodes in zone A and keep them off the same node, you could label two nodes with “zone=a” (increase the number of replicas to 2, add the zone label as a node selector, and add pod anti-affinity):

spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis   # label assumed here so the anti-affinity selector below can match
    spec:
      nodeSelector:
        zone: a
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: redis
            topologyKey: kubernetes.io/hostname
Which is similar to what’s mentioned here:

…but what if you have a StatefulSet with three replicas, one in each AZ, each bound to a PV in its respective zone? How do we make sure each replica stays in the zone it belongs to if its node crashes? Would this be where topology constraints come into the picture?
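For reference, a zone spread can be expressed with topologySpreadConstraints; a minimal sketch (the app: redis label is an assumption, and this spreads Pods rather than pinning a specific replica to a specific zone):

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: redis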

Thanks in advance for any advice!

Looks like the EBS CSI driver handles this automatically if the volume binding mode is “WaitForFirstConsumer”.

As per the following:

“The EBS CSI Driver supports the WaitForFirstConsumer volume binding mode in Kubernetes. When using WaitForFirstConsumer binding mode the volume will automatically be created in the appropriate Availability Zone and with the appropriate topology. The WaitForFirstConsumer binding mode is recommended whenever possible for dynamic provisioning”
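A StorageClass for the EBS CSI driver using that binding mode might look like this (a minimal sketch; the name ebs-sc and the gp3 type are assumptions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3

With WaitForFirstConsumer, volume provisioning is delayed until a Pod using the PVC is scheduled, so the volume is created in the same AZ as the node that Pod lands on, and the resulting PV’s node affinity keeps the Pod in that zone on rescheduling.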