I was trying to upgrade one of my kubeadm HA clusters using this official doc as a reference. The upgrade was a success, but I’m quite confused about how the whole process accommodates the version skew policy.
Say I’m upgrading a v1.22.8 kubeadm HA cluster (3 control plane nodes + 2 workers) to v1.23.5. Here are two things I don’t quite understand:
- After `kubeadm upgrade apply v1.23.5` at the first control plane node, all `kube-proxy` pods are upgraded to v1.23.5 immediately. However, all `kubelet` instances are still v1.22.8 (see the version checks after this list). Doesn’t this violate the policy (quote: `kube-proxy` must be the same minor version as `kubelet` on the node)?
- Still after `kubeadm upgrade apply v1.23.5` at the first control plane node, there’s a chance the new v1.23.5 `kube-controller-manager` would connect to the old v1.22.8 `kube-apiserver`. This violates the policy too (quote: `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with)?
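For context, this is roughly how I looked at the skew right after the first `kubeadm upgrade apply` (output lines are just examples from my setup; the image registry may differ on other clusters):

```sh
# kube-proxy image is already on the new version cluster-wide,
# because the DaemonSet pod template was bumped
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# e.g. k8s.gcr.io/kube-proxy:v1.23.5

# kubelets are still on the old version until each node itself is upgraded
kubectl get nodes \
  -o custom-columns='NODE:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion'

# kube-apiserver version per control plane node (kubeadm static pods)
kubectl -n kube-system get pods -l component=kube-apiserver \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```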
For the first problem, I think it’s inevitable since kube-proxy is deployed as a DaemonSet (see the check below). So I guess it’s OK to have kube-proxy one minor version newer than kubelet?
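To confirm the DaemonSet behaviour, this is what I checked; with the default rolling update strategy every node gets the new kube-proxy as soon as the pod template changes, regardless of that node’s kubelet version (the strategy output below is an example, defaults may vary):

```sh
# kube-proxy is a plain DaemonSet with a RollingUpdate strategy
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.updateStrategy}{"\n"}'
# e.g. {"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}

# watch the new image roll out across all nodes
kubectl -n kube-system rollout status daemonset/kube-proxy
```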
For the second problem, I can’t find any kubeadm flags to upgrade control plane components individually, so that I could upgrade all kube-apiserver instances first and all kube-scheduler and kube-controller-manager instances later. But since there are multiple kube-scheduler and kube-controller-manager instances in the cluster, at least one of them can work properly (and hopefully it gets elected too), so that’s OK too?
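As a sanity check on that last point, the leader election state is visible in the cluster, so while the control plane nodes are on mixed versions you can at least see which controller-manager/scheduler instance is currently active (assuming the kubeadm default setup where the leases live in `kube-system`):

```sh
# which instance currently holds the leader lock
kubectl -n kube-system get lease kube-controller-manager \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
kubectl -n kube-system get lease kube-scheduler \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
```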