Extra pod spins up with my new single replica deployment

Weird behavior I’m noticing: whenever I apply a new or updated deployment config, one additional pod spins up about 3 seconds after the original and then scales back down.
I’ve noticed this after upgrading from 1.15.1x.

Cluster information:

Kubernetes version: 1.16.13
Cloud being used: Azure
Installation method: AKS

That is the intended (default) behavior of a deployment. You can control it with maxSurge and maxUnavailable.
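
If the brief surge pod is a problem for a single-replica deployment, it can be suppressed by setting maxSurge to 0 (maxUnavailable must then be at least 1, so the old pod is terminated before its replacement starts). A minimal sketch, with hypothetical names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0              # never run an extra pod during a rollout
      maxUnavailable: 1        # must be >= 1 when maxSurge is 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.19    # hypothetical image
```

The trade-off is a short window with zero running pods during each rollout; keeping the default surge avoids that downtime at the cost of the temporary second pod.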


Thanks, that was my suspicion. The example at that link definitely confirms the behavior:
“For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.”
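
For anyone else hitting this: the defaults are maxSurge: 25% and maxUnavailable: 25%, and maxSurge is rounded up, so with replicas: 1 a rollout briefly runs a second pod. The explicit equivalent of the default strategy, as I understand it, would be:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%         # rounded up, so 1 extra pod even with a single replica
    maxUnavailable: 25%   # rounded down, so 0 pods may be unavailable
```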