Hi,
I deleted the kube-proxy pod from my master node so it would pick up the new kube-proxy ConfigMap values, but after rebooting the node the kube-proxy pod is not coming back up.
How do I start kube-proxy again?
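For context, kube-proxy is managed by a DaemonSet in kube-system, so I expected the controller to recreate the pod on its own. This is a hedged sketch of what I tried conceptually (the label selector and node name are assumptions based on a default kubeadm setup):

```shell
# kube-proxy is a DaemonSet pod, so deleting it should normally be enough
# for the DaemonSet controller to recreate it on the node:
kubectl -n kube-system delete pod -l k8s-app=kube-proxy \
  --field-selector spec.nodeName=master01

# Alternatively, restart the whole DaemonSet so every node picks up the
# new ConfigMap values:
kubectl -n kube-system rollout restart daemonset kube-proxy
```

Neither approach brought the pod back on the master node in my case.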
# crictl ps
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
19376bdbc55eb c4d75af7e098e 17 minutes ago Running calico-node 2 b4fb351577dd4
24253bf076500 6be0dc1302e30 17 minutes ago Running kube-scheduler 4 182f9024a6957
57c5c9dec4ad2 3d174f00aa39e 17 minutes ago Running kube-apiserver 3 e549a2bad6a02
Cluster information:
Kubernetes version: v1.21.3
Cloud being used: bare-metal
Installation method: bare-metal
Host OS: CentOS Stream 8
CNI and version: Calico
CRI and version: containerd
It's not even creating the kube-proxy pod, and without the pod I'm not able to find any log messages for it.
Here are the pods on the master node:
# crictl ps -a
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
d4389b5b3be50 bc2bb319a7038 4 minutes ago Exited kube-controller-manager 238 5ec664415efb0
4dbdeaa1297b9 6be0dc1302e30 3 hours ago Running kube-scheduler 9 c7e5a5e28203b
9d232faaebad8 6be0dc1302e30 5 hours ago Exited kube-scheduler 8 c7e5a5e28203b
b4240b7c173e2 c4d75af7e098e 10 hours ago Running calico-node 3 715fb9fb4a30e
8dfa5a8c0f5c5 5660150975fb8 10 hours ago Exited flexvol-driver 0 715fb9fb4a30e
0a41c75c433e4 5749e8b276f9b 10 hours ago Exited install-cni 0 715fb9fb4a30e
f92bcaea4bc7b 3d174f00aa39e 10 hours ago Running kube-apiserver 4 d825b96b38b02
bfe4b70418e4d 5749e8b276f9b 10 hours ago Exited upgrade-ipam 3 715fb9fb4a30e
19376bdbc55eb c4d75af7e098e 11 hours ago Exited calico-node 2 b4fb351577dd4
57c5c9dec4ad2 3d174f00aa39e 11 hours ago Exited kube-apiserver 3 e549a2bad6a02
Here are the log messages from the /var/log/messages file:
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.680492 1149289 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy\") pod \"388bec62-8cc8-48b0-8a5e-d224714cc76e\" (UID: \"388bec62-8cc8-48b0-8a5e-d224714cc76e\") "
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.680526 1149289 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-proxy-token-xbkgt\" (UniqueName: \"kubernetes.io/secret/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy-token-xbkgt\") pod \"388bec62-8cc8-48b0-8a5e-d224714cc76e\" (UID: \"388bec62-8cc8-48b0-8a5e-d224714cc76e\") "
Aug 2 21:34:00 master01 kubelet[1149289]: W0802 21:34:00.681314 1149289 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/388bec62-8cc8-48b0-8a5e-d224714cc76e/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.681545 1149289 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "388bec62-8cc8-48b0-8a5e-d224714cc76e" (UID: "388bec62-8cc8-48b0-8a5e-d224714cc76e"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.694412 1149289 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy-token-xbkgt" (OuterVolumeSpecName: "kube-proxy-token-xbkgt") pod "388bec62-8cc8-48b0-8a5e-d224714cc76e" (UID: "388bec62-8cc8-48b0-8a5e-d224714cc76e"). InnerVolumeSpecName "kube-proxy-token-xbkgt". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.781611 1149289 reconciler.go:319] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy\") on node \"master01.grangeinsurance.com\" DevicePath \"\""
Aug 2 21:34:00 master01 kubelet[1149289]: I0802 21:34:00.781669 1149289 reconciler.go:319] "Volume detached for volume \"kube-proxy-token-xbkgt\" (UniqueName: \"kubernetes.io/secret/388bec62-8cc8-48b0-8a5e-d224714cc76e-kube-proxy-token-xbkgt\") on node \"master01.grangeinsurance.com\" DevicePath \"\""
Aug 2 21:36:02 master01 systemd[1]: Unmounting /var/lib/kubelet/pods/388bec62-8cc8-48b0-8a5e-d224714cc76e/volumes/kubernetes.io~secret/kube-proxy-token-xbkgt...
Aug 2 21:36:02 master01 umount[1682365]: umount: /var/lib/kubelet/pods/388bec62-8cc8-48b0-8a5e-d224714cc76e/volumes/kubernetes.io~secret/kube-proxy-token-xbkgt: no mount point specified.
Aug 2 21:36:02 master01 systemd[1]: Unmounted /var/lib/kubelet/pods/388bec62-8cc8-48b0-8a5e-d224714cc76e/volumes/kubernetes.io~secret/kube-proxy-token-xbkgt.
Running the kubeadm upgrade gives this message:
# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0803 08:27:13.571595 405794 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:node:master01" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'master01' and this object
[preflight] Running pre-flight checks
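The "forbidden" warning looks like an RBAC issue: the command is authenticating as the node identity (system:node:master01) rather than as an admin. A hedged check, assuming the default kubeadm file locations:

```shell
# On a control-plane node, verify the kube-proxy ConfigMap is readable
# with the admin credentials rather than the kubelet's node identity:
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  -n kube-system get cm kube-proxy -o yaml
```

If that works, the ConfigMap itself is intact and the warning is only about which credentials kubeadm picked up.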
However, Kubernetes shows 7 kube-proxy pods running, even though the master node doesn't have a kube-proxy pod:
# kg ds   (kg is an alias for 'kubectl -n kube-system get ds')
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-node 7 7 7 7 7 kubernetes.io/os=linux 2d
kube-proxy 7 7 7 7 7 kubernetes.io/os=linux 65d
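The DaemonSet counters alone don't say which node is missing its pod. A hedged way to map kube-proxy pods to nodes (label selector assumed from the default kubeadm DaemonSet):

```shell
# List kube-proxy pods together with the node each one runs on, to spot
# which node is missing its copy:
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
```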
Update: the root cause was a ClusterCIDR IP change, which had put kube-controller-manager into CrashLoopBackOff. Once I restored the old IP range in the ClusterCIDR setting, the kube-controller-manager pod started, and then kube-proxy came up.
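For anyone hitting the same thing, this is a sketch of where I would check the ClusterCIDR values, assuming the default kubeadm manifest paths:

```shell
# The cluster CIDR used by kube-controller-manager is set in its static
# pod manifest on the control-plane node:
grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml

# kube-proxy carries its own clusterCIDR in its ConfigMap; the two
# values should agree:
kubectl -n kube-system get cm kube-proxy -o yaml | grep -i clustercidr
```

When the manifest and the ConfigMap disagree, the controller-manager can crash-loop, which in turn blocks DaemonSet pods (like kube-proxy) from being recreated.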