I was trying to upgrade one of my kubeadm HA clusters using this official doc as a reference. The upgrade was a success, but I'm quite confused about how the whole process accommodates the version skew policy.
Say I’m upgrading a v1.22.8 kubeadm HA cluster (3 control plane nodes + 2 workers) to v1.23.5, here’s something I don’t quite understand:
- After `kubeadm upgrade apply v1.23.5` at the first control plane node, all `kube-proxy` pods would be upgraded to v1.23.5 immediately. However, all `kubelet` instances are still v1.22.8. Doesn't this violate the policy (quote: `kube-proxy` must be the same minor version as `kubelet` on the node)?
- Still after `kubeadm upgrade apply v1.23.5` at the first control plane node, there's a chance the new v1.23.5 `kube-controller-manager` would connect to the old v1.22.8 `kube-apiserver`. This violates the policy too (quote: `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with)?
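To make the skew concrete, here's a minimal shell sketch that compares minor versions the way the policy does. The version strings are hard-coded to the ones from my upgrade; in a real cluster you'd read them from `kubectl get nodes` and the kube-proxy image tag.

```shell
# Extract the minor version ("23" from "v1.23.5") and compare.
minor() { echo "$1" | cut -d. -f2; }

kubeproxy_ver="v1.23.5"   # kube-proxy right after `kubeadm upgrade apply`
kubelet_ver="v1.22.8"     # kubelet, not yet upgraded on this node

if [ "$(minor "$kubeproxy_ver")" = "$(minor "$kubelet_ver")" ]; then
  echo "same minor: OK"
else
  echo "minor skew: $(minor "$kubeproxy_ver") vs $(minor "$kubelet_ver")"
fi
```

For the versions above this prints `minor skew: 23 vs 22`, which is exactly the window I'm asking about.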
For the first problem, I think it's inevitable since `kube-proxy` is deployed as a DaemonSet? So I guess it's OK to have `kube-proxy` one minor version newer than `kubelet`?
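For reference, this is how I confirmed that kube-proxy is a single cluster-wide DaemonSet whose image rolls everywhere at once (guarded so it degrades gracefully when kubectl or the cluster isn't reachable):

```shell
# Read the kube-proxy image tag from the DaemonSet spec; fall back to
# "unknown" when kubectl (or the cluster) isn't available.
img="$(kubectl -n kube-system get daemonset kube-proxy \
        -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null \
      || echo unknown)"
echo "kube-proxy image: $img"
```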
For the second problem, I can't find any kubeadm flags to upgrade control plane components individually, so that I could upgrade all `kube-apiserver` instances first and then all `kube-controller-manager` instances later. But since there are multiple `kube-controller-manager` instances in the cluster, at least one of them can work properly (and hopefully that one gets elected leader too), so is that OK too?
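One thing that helped me reason about this: you can see which `kube-controller-manager` replica currently holds the leader lease. On clusters where leader election is backed by a Lease object (which I believe is the default for these versions), it lives in `kube-system`:

```shell
# Leader election for kube-controller-manager is recorded in a Lease in
# kube-system; holderIdentity names the replica currently leading.
# Falls back to "unknown" without a reachable cluster.
leader="$(kubectl -n kube-system get lease kube-controller-manager \
            -o jsonpath='{.spec.holderIdentity}' 2>/dev/null \
          || echo unknown)"
echo "current kube-controller-manager leader: $leader"
```

Watching this during the upgrade would show whether the elected leader is an already-upgraded replica or an old one.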