Cluster information:
Kubernetes version: 1.21
Cloud being used: AWS
Installation method: Terraform/EKS
Host OS: Amazon Linux 2
Hi, this is my first post and I hope I'm not violating any written or unwritten community rules.
There is a case on my hands right now. I have a deployment in my AWS EKS 1.21 cluster with 3 replicas. Under high traffic, the app can scale up to 15-20 pods. My problem is that sometimes, when traffic is low, all pods end up scheduled on the same node, and when the cluster autoscaler resizes the cluster and terminates that node, all replicas are terminated and recreated at the same time. I want to prevent this, but I don't want to use node affinity with a fixed number either, because I don't control the node count. What I'm trying to achieve is to ensure that no more than, say, 33% of a deployment's total replicas can be scheduled on the same node. I did my research, but I couldn't find any component that works with percentages.
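For context, the closest thing I found so far is a topology spread constraint, roughly like the sketch below (names and labels are illustrative, not my real manifest). As far as I can tell, `maxSkew` only accepts an absolute pod count, not a percentage of replicas, which is exactly what doesn't fit my case:

```yaml
# Illustrative sketch only: topology spread constraint on an EKS 1.21 deployment.
# maxSkew takes a fixed pod count, so it doesn't scale with the replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                          # absolute number, no percentage support
          topologyKey: kubernetes.io/hostname # spread across nodes
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: my-app:latest                # placeholder image
```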
Is there a way to accomplish this?
Any help/recommendation will be highly appreciated. Thank you very much.