I was trying to upgrade one of my kubeadm HA clusters using this official doc as a reference. The upgrade was a success, but I'm quite confused about how the whole process accommodates the version skew policy.
Say I'm upgrading a v1.22.8 kubeadm HA cluster (3 control plane nodes + 2 workers) to v1.23.5. Here are two things I don't quite understand:
- After `kubeadm upgrade apply v1.23.5` on the first control plane node, all `kube-proxy` pods are upgraded to v1.23.5 immediately. However, all `kubelet` instances are still v1.22.8. Doesn't this violate the policy (quote: `kube-proxy` must be the same minor version as `kubelet` on the node)?
- Also after `kubeadm upgrade apply v1.23.5` on the first control plane node, there's a chance the new v1.23.5 `kube-controller-manager` connects to an old v1.22.8 `kube-apiserver`. Doesn't this violate the policy too (quote: `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with)? See the check after this list.
For the first problem, I think it's inevitable since `kube-proxy` is deployed as a DaemonSet? So I guess it's OK to have `kube-proxy` one minor version newer than `kubelet`?
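For reference, this is roughly how I'd compare each node's kubelet version with the kube-proxy image running on it (a sketch, assuming the `k8s-app=kube-proxy` label that kubeadm's DaemonSet uses by default):

```bash
# kube-proxy pods (DaemonSet) with the node they run on and their image tag
kubectl -n kube-system get pods -l k8s-app=kube-proxy \
  -o custom-columns='POD:.metadata.name,NODE:.spec.nodeName,IMAGE:.spec.containers[0].image'

# kubelet version reported by each node
kubectl get nodes \
  -o custom-columns='NODE:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion'
```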
For the second problem, I can't find any kubeadm flags to upgrade control plane components individually, so that I could upgrade all `kube-apiserver` instances first and then all `kube-scheduler` and `kube-controller-manager` instances later. But since there are multiple `kube-scheduler` and `kube-controller-manager` instances in the cluster, at least one of them should still work properly (and hopefully that one gets elected as leader too), so is that OK too?