Issue during cluster autoscaling when nodes are out of capacity

If the Kubernetes cluster is autoscaled because the nodes run out of capacity when a new deployment is triggered, all the pods end up assigned to the same node, even though the deployment has anti-affinity set with the preferred scheduling pattern.
In a production environment where deployments do not happen very frequently, the pods will stay on the same node for a very long period of time.
This can cause a complete outage of the application if that particular node goes down.
We don't want to set the anti-affinity pattern to requiredDuringSchedulingIgnoredDuringExecution because it may scale the cluster unnecessarily when the pods are horizontally scaled.
Is there a way to avoid scheduling the pods on the same node, or to automatically rearrange the pods when capacity becomes available on other nodes?
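
For reference, the deployment's anti-affinity is set up roughly like this (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" is only a soft hint: if the freshly autoscaled node is
          # the only one with free capacity, all the pods still land on it.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
```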

You mentioned the deployments don't happen frequently. A deployment rollout is normally when you would see the pods redistributed. As far as I'm aware, there's no built-in mechanism that does this for you automatically.

Something you could do is create a CronJob with an inline bash script and a service account that does this for you on a schedule. If you need a template to work from, I just pushed my boilerplate to this repo here.
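
As a rough sketch of that idea (this is not the boilerplate from the repo; the names, namespace, schedule, and the target deployment `my-app` are all placeholders), you could give the CronJob just enough RBAC to run `kubectl rollout restart`, which re-triggers scheduling so the preferred anti-affinity can spread the pods across nodes that now have capacity:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restarter        # placeholder name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]           # rollout restart patches the pod template
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restarter
subjects:
- kind: ServiceAccount
  name: deployment-restarter
  namespace: default
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: deployment-restarter
  namespace: default
spec:
  schedule: "0 3 * * 0"             # weekly, Sunday 03:00; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-restarter
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest     # any image with kubectl works
            command:
            - /bin/sh
            - -c
            - kubectl rollout restart deployment/my-app   # placeholder target
```

Keep in mind this only redistributes pods as far as the preferred anti-affinity allows; it nudges the spread rather than guaranteeing one pod per node.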