Kubernetes version: 1.20.x
Cloud being used: (put bare-metal if not on a public cloud) bare-metal
Installation method: Custom (Platform9 Managed Kubernetes)
Host OS: Ubuntu 20.04/Centos 7.8
CNI and version: Calico
CRI and version: Docker 19.x
I’ve hit a scenario where all my master nodes go down at once and the kubelet is restarted on all the worker nodes (this is mostly due to how the Platform9 Kubernetes stack works).
I have noticed that when the control plane comes back up, some pods are recreated (usually those that are part of a Deployment). However, this isn’t always the case; sometimes none of the pods are recreated.
Although everything eventually returns to a consistent state, is the cluster’s behaviour in such scenarios documented anywhere?