How does preferredDuringScheduling pod anti-affinity work over time?

I’m using preferredDuringSchedulingIgnoredDuringExecution pod anti-affinity to spread the 2 pods of a Deployment across 2 AZs, and it works as expected.
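For reference, the relevant part of the manifest looks roughly like this (the name, labels, and image are placeholders, but the anti-affinity term is the one I’m describing):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: prefer a zone that has no pod with app=my-app,
          # but still schedule somewhere if no such zone is available.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: my-app
        image: my-app:1.0     # placeholder image
```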

I’m curious how this behaves over time, though. When I deploy a new version, the new pods obviously can’t satisfy the preference while they’re being scheduled, since the old pods still occupy both AZs (I’m assuming the default rolling-update strategy; see the sketch after these questions). Could the scheduler theoretically put both new pods in one AZ, then terminate the old versions, leaving my Deployment concentrated in a single AZ?

Or will it do the smart thing and keep balancing as it schedules the replacements, so the Deployment is still spread across both AZs after the old pods terminate?
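I haven’t customized the rollout strategy, so I’m assuming the defaults apply; with 2 replicas, the default 25% surge/unavailable values effectively round to this (a sketch of the effective behaviour, not something I’ve set explicitly):

```yaml
# Effective rolling-update behaviour for a 2-replica Deployment:
# maxSurge 25% rounds up to 1, maxUnavailable 25% rounds down to 0.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # one new pod is created before any old pod is removed
    maxUnavailable: 0  # old pods keep running until a new pod is Ready
```

So each new pod is scheduled while the old pods still occupy both zones, which is why the anti-affinity preference can’t be satisfied mid-rollout.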

And what if I lost an AZ (or even just enough worker nodes in it), causing the replacement pods to be scheduled in the remaining AZ? Since IgnoredDuringExecution means the rule is only evaluated at scheduling time, those pods would never be rebalanced once the AZ came back, would they?