Automatic rolling update and rollback

I would appreciate it if someone could clarify Kubernetes' default behaviour with regard to the following:

Kubernetes performs a rollout by sequentially increasing the number of updated pods, as explained here. In the event of a failure (detected using e.g. the readiness or liveness probes), Kubernetes does not yet terminate the healthy pods, but rather retries upgrading the deployment. I’ve noticed that these retries continue indefinitely, i.e. Kubernetes tries to upgrade the deployment every x seconds, fails, and then retries again. Is this the expected behaviour? How can you set e.g. a timeout period for an upgrade?
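For reference, the retry behaviour described above is the default: a Deployment keeps trying to make progress forever. What `.spec.progressDeadlineSeconds` (default 600) gives you is not an automatic rollback but a failure signal: once the deadline passes without progress, the Deployment's `Progressing` condition flips to `False` with reason `ProgressDeadlineExceeded`. A minimal sketch (names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  # Mark the rollout as failed after 120s without progress
  # (Kubernetes still does not roll back on its own).
  progressDeadlineSeconds: 120
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: example/foo:2.0   # hypothetical image
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
```

With this set, `kubectl rollout status deployment/foo` exits with a non-zero code once the deadline is exceeded, which a CI pipeline can use to trigger `kubectl rollout undo deployment/foo`.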

Secondly, I am interested in the following behaviour. Given a deployment foo that uses a config map bar (i.e. the deployment references the config map), I’d like to update the config map and then upgrade the deployment. However, in the event of a failure, I’d like to be able to scale the currently healthy deployment using the previous values of the config map. Is this somehow possible, or does Kubernetes, when scaling a pod, always look up its values in the current config map, which might not be identical to the ones the other pods are using?
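Environment variables are resolved from the named config map only when a container starts, so scaling an existing deployment after editing `bar` in place would give new pods the new values. A common workaround (a convention, not a built-in feature) is to treat config maps as immutable and put a version in the name, so the old data stays available for scaling and rollback:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bar-v2      # versioned name; bar-v1 is left untouched in the cluster
immutable: true     # optional; stable since Kubernetes 1.21
data:
  LOG_LEVEL: debug
```

The deployment then references `bar-v2` explicitly; rolling back the deployment restores the reference to `bar-v1`, and scaling either revision reads exactly the data that revision was deployed with.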

Does Kubernetes support automatic rollouts of deployments upon changes to a config map they depend on? For example, given a config map foo and a deployment bar, assuming bar specifies environment variables using the config map foo, can Kubernetes automatically roll out the deployment every time the config map foo changes?
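Out of the box it does not: Kubernetes does not watch config maps and trigger rollouts, and in a setup like the following sketch (names and image hypothetical), editing foo leaves the running pods untouched because the environment is injected only at container start:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bar
  template:
    metadata:
      labels:
        app: bar
    spec:
      containers:
      - name: bar
        image: example/bar:1.0   # hypothetical image
        envFrom:
        - configMapRef:
            name: foo            # values are read once, when the container starts
```

Third-party controllers (e.g. stakater/Reloader) exist to add this behaviour by watching config maps and triggering rollouts for you.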


@dsafaric the best way to handle the updating of pods would be to use an annotation/label on the pod template and bump it each time you need a new update to stick.
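A sketch of that pattern, with a hypothetical annotation key: put a checksum (or version string) of the config map into the pod template's annotations, so any config change also changes the template and therefore triggers a normal rolling update:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Hypothetical annotation; any change to the pod template
        # (including this value) triggers a new rollout.
        checksum/config: "9f86d081"   # e.g. a hash of the config map contents
```

You can bump it manually with `kubectl patch deployment bar -p '{"spec":{"template":{"metadata":{"annotations":{"checksum/config":"<new-hash>"}}}}}'`, or render the hash at deploy time, as Helm charts commonly do.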