Cluster information:
Kubernetes version: 1.15.7
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: CentOS 7.7.1908
CNI and version: Flannel v0.12.0
CRI and version: Docker 18.09.3-3
I am stuck on a problem and have no idea how to solve it.
I am trying to upgrade my cluster from 1.15.7 to 1.16.12, using kubeadm to do it.
On the first node, I am running:
[root@k8s-1 ~]# yum install -y kubeadm-1.16.12-0 --disableexcludes=kubernetes
[root@k8s-1 ~]# kubectl drain k8s-1 --ignore-daemonsets --delete-local-data
[root@k8s-1 ~]# kubeadm upgrade plan
[root@k8s-1 ~]# kubeadm upgrade apply v1.16.12
So far so good.
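For reference, this is roughly how I can check that the control plane on the first node is actually running the new version (just standard kubeadm/kubectl commands, nothing specific to my setup):
[root@k8s-1 ~]# kubeadm version -o short
[root@k8s-1 ~]# kubectl version --short
[root@k8s-1 ~]# grep 'image:' /etc/kubernetes/manifests/kube-apiserver.yaml
kubectl version --short reports the API server version, and the grep shows which image the kube-apiserver static pod on k8s-1 is using.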
Then, when I try to upgrade my second node, I run:
[root@k8s-2 ~]# yum install -y kubeadm-1.16.12-0 --disableexcludes=kubernetes
[root@k8s-2 ~]# kubectl drain k8s-2 --ignore-daemonsets --delete-local-data
[root@k8s-2 ~]# kubeadm upgrade node
The command gets stuck (on k8s-2):
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-1 hash: 7ed271c19d0e04f700b113e5a87ccf0b
Static pod: kube-apiserver-k8s-1 hash: 7ed271c19d0e04f700b113e5a87ccf0b
Static pod: kube-apiserver-k8s-1 hash: 7ed271c19d0e04f700b113e5a87ccf0b
(... the same line just keeps repeating ...)
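In case it helps, these are the kinds of checks I can run on k8s-2 while it is stuck (this is only a sketch; the kubeadm-config ConfigMap and /etc/kubernetes/manifests are the standard kubeadm locations, not output from my cluster):
[root@k8s-2 ~]# ls /etc/kubernetes/manifests/
[root@k8s-2 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml
[root@k8s-2 ~]# kubectl -n kube-system get pods -o wide | grep k8s-2
The first command shows which static pod manifests exist on k8s-2 itself, and the ConfigMap is the one kubeadm reads during the upgrade (it should contain the ClusterConfiguration and, in this version, a ClusterStatus with the control-plane endpoints).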
I have already tried upgrading the kubeadm package and running kubeadm upgrade apply v1.16.12
on the second node (k8s-2), but it still looks at k8s-1 (see the lines marked with => HERE below):
[root@k8s-2]# yum install -y kubeadm-1.16.12-0 --disableexcludes=kubernetes
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.15.7-0 will be updated
---> Package kubeadm.x86_64 0:1.16.12-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Updating:
kubeadm x86_64 1.16.12-0 kubernetes 8.8 M
Transaction Summary
==============================================================================================================================
Upgrade 1 Package
Total download size: 8.8 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
3308ab5750ee65e6ae551612e2943b8bfeae5fb5b73384f073ef3ef11a452960-kubeadm-1.16.12-0.x86_64.rpm | 8.8 MB 00:00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubeadm-1.16.12-0.x86_64 1/2
Cleanup : kubeadm-1.15.7-0.x86_64 2/2
Verifying : kubeadm-1.16.12-0.x86_64 1/2
Verifying : kubeadm-1.15.7-0.x86_64 2/2
Updated:
kubeadm.x86_64 0:1.16.12-0
Complete!
[root@k8s-2]# kubeadm upgrade apply v1.16.12
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.12"
[upgrade/versions] Cluster version: v1.16.12
[upgrade/versions] kubeadm version: v1.16.12
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.12"...
=> HERE Static pod: kube-apiserver-k8s-1 hash: 7ed271c19d0e04f700b113e5a87ccf0b
=> HERE Static pod: kube-controller-manager-k8s-1 hash: 324ebb198e64f77117535f53aa93ee65
=> HERE Static pod: kube-scheduler-k8s-1 hash: bfbdfa7c2350fe451f33bed330d6d47f
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.16.12" is "3.3.15-0", but the current etcd version is "3.3.15". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests246303559"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.12". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s-2]#
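For completeness, after that run I can still compare what is actually deployed with something like this (again, only standard commands and the default manifest path, not an exact transcript):
[root@k8s-2]# grep 'image:' /etc/kubernetes/manifests/*.yaml
[root@k8s-2]# kubectl get nodes -o wide
The grep shows which component versions are in the static pod manifests on k8s-2, and kubectl get nodes shows the kubelet version on each node (the kubelet stays on the old version until the kubelet package itself is upgraded).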
I cannot understand why it is looking at my first node (k8s-1) when I try to upgrade my second node (k8s-2).
Can anyone give me some help, please?
Thank you very much.