Version skew policy violations when upgrading a kubeadm HA cluster?

I was trying to upgrade one of my kubeadm HA clusters using the official upgrade doc as a reference. The upgrade was a success, but I'm quite confused about how the whole process accommodates the version skew policy.

Say I'm upgrading a v1.22.8 kubeadm HA cluster (3 control plane nodes + 2 workers) to v1.23.5. Here are two things I don't quite understand:

  1. After kubeadm upgrade apply v1.23.5 on the first control plane node, all kube-proxy pods are upgraded to v1.23.5 immediately, while all kubelet instances are still at v1.22.8. Doesn't this violate the policy (quote: kube-proxy must be the same minor version as kubelet on the node)?
  2. Also after kubeadm upgrade apply v1.23.5 on the first control plane node, there's a chance the new v1.23.5 kube-controller-manager connects to an old v1.22.8 kube-apiserver on one of the other control plane nodes. Doesn't this violate the policy too (quote: kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with)?
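For context, the sequence I followed is roughly the one from the official doc (a sketch; the package versions and node names here are just placeholders for my setup):

```shell
# On the first control plane node: upgrade kubeadm, then the control plane.
apt-get update && apt-get install -y kubeadm=1.23.5-00
kubeadm upgrade plan
kubeadm upgrade apply v1.23.5   # upgrades static pods AND the kube-proxy DaemonSet

# On each remaining control plane node:
apt-get install -y kubeadm=1.23.5-00
kubeadm upgrade node

# On every node (control plane and workers), last of all:
kubectl drain <node-name> --ignore-daemonsets
apt-get install -y kubelet=1.23.5-00 kubectl=1.23.5-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node-name>
```

So between the first `kubeadm upgrade apply` and the kubelet upgrades at the end, the skews described above exist for some window of time.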

For the first problem, I think it's inevitable, since kube-proxy is deployed as a DaemonSet and gets upgraded cluster-wide in one step? So I guess it's OK to have kube-proxy one minor version newer than kubelet?
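This skew is easy to observe during the upgrade window. A sketch of the commands I used to compare versions (standard kubectl; nothing here is specific to my cluster):

```shell
# Kubelet version per node (VERSION column).
kubectl get nodes

# Image currently used by the kube-proxy DaemonSet.
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

Right after `kubeadm upgrade apply`, the DaemonSet image is already v1.23.5 while the nodes still report v1.22.8 kubelets.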

For the second problem, I can't find any kubeadm flags to upgrade control plane components individually, which would let me upgrade all kube-apiserver instances first and then all kube-scheduler and kube-controller-manager instances later. But since there are multiple kube-scheduler and kube-controller-manager instances in the cluster, at least one of them can work properly (and hopefully it gets elected leader too), so is that OK as well?
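If it helps, the currently elected leaders can be checked via the leader-election Lease objects in kube-system (a sketch; in v1.22+/v1.23 these components use Leases for leader election):

```shell
# Which kube-controller-manager instance currently holds the lock.
kubectl -n kube-system get lease kube-controller-manager \
  -o jsonpath='{.spec.holderIdentity}'

# Same for kube-scheduler.
kubectl -n kube-system get lease kube-scheduler \
  -o jsonpath='{.spec.holderIdentity}'
```

During my upgrade, watching these showed which node's (old or new) controller-manager and scheduler were actually active at any given moment.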