When I use kubeadm init --config kubeadm-config.yaml to initialize the master (control-plane) node, the apiserver remains unavailable afterwards.
How should I troubleshoot this issue? Should I use journalctl -xfu kubelet.service to investigate, or are there other methods?
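For context, this is roughly what I have been running so far. It assumes crictl is installed and pointed at the containerd socket from my config; the container ID is a placeholder:

# follow the kubelet unit log
journalctl -xfu kubelet.service

# check whether anything is listening on the apiserver port
ss -tlnp | grep 6443

# list all containers (including exited ones) and read the apiserver's own log
crictl -r unix:///var/run/containerd/containerd.sock ps -a
crictl -r unix:///var/run/containerd/containerd.sock logs <kube-apiserver-container-id>

# static pod manifests generated by kubeadm
ls /etc/kubernetes/manifests/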
Here is part of the kubelet log:
2月 27 00:18:09 devops1 kubelet[195090]: E0227 00:18:09.698312 195090 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-node.1827ccc9f0e4f030\": dial tcp 172.16.1.19:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-node.1827ccc9f0e4f030 kube-system 9446 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-node,UID:c369da3e35962f7763cdf2bb546daed7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-node_kube-system(c369da3e35962f7763cdf2bb546daed7),Source:EventSource{Component:kubelet,Host:node,},FirstTimestamp:2025-02-26 23:49:08 +0800 CST,LastTimestamp:2025-02-27 00:13:02.385452175 +0800 CST m=+1567.061929467,Count:46,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node,}"
2月 27 00:18:09 devops1 kubelet[195090]: E0227 00:18:09.698397 195090 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-controller-manager-node.1827ce17bef7948f kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-node,UID:c369da3e35962f7763cdf2bb546daed7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-node_kube-system(c369da3e35962f7763cdf2bb546daed7),Source:EventSource{Component:kubelet,Host:node,},FirstTimestamp:2025-02-27 00:13:02.385452175 +0800 CST m=+1567.061929467,LastTimestamp:2025-02-27 00:13:02.385452175 +0800 CST m=+1567.061929467,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node,}"
2月 27 00:18:09 devops1 kubelet[195090]: E0227 00:18:09.698720 195090 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/events/kube-proxy-hlh8h.1827cd1d0cbcb551\": dial tcp 172.16.1.19:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-proxy-hlh8h.1827cd1d0cbcb551 kube-system 9343 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-proxy-hlh8h,UID:3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0,APIVersion:v1,ResourceVersion:8575,FieldPath:spec.containers{kube-proxy},},Reason:Killing,Message:Stopping container kube-proxy,Source:EventSource{Component:kubelet,Host:node,},FirstTimestamp:2025-02-26 23:55:05 +0800 CST,LastTimestamp:2025-02-27 00:13:02.385701813 +0800 CST m=+1567.062179114,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node,}"
2月 27 00:18:11 devops1 kubelet[195090]: E0227 00:18:11.344837 195090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.16.1.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused" interval="7s"
2月 27 00:18:12 devops1 kubelet[195090]: I0227 00:18:12.385022 195090 scope.go:117] "RemoveContainer" containerID="0a6b1ef4d154c8d365c858d6e41b74bf37af10a41d61a74ad0fd5317e169997e"
2月 27 00:18:12 devops1 kubelet[195090]: E0227 00:18:12.385193 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-node_kube-system(3af838e0c48e816ee1104eb5b1c309c7)\"" pod="kube-system/kube-apiserver-node" podUID="3af838e0c48e816ee1104eb5b1c309c7"
2月 27 00:18:12 devops1 kubelet[195090]: E0227 00:18:12.406534 195090 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be984bafe938ac49056fd975a59b55e0e291b154d83457eaf9813e6f6586a599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
2月 27 00:18:12 devops1 kubelet[195090]: E0227 00:18:12.406585 195090 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be984bafe938ac49056fd975a59b55e0e291b154d83457eaf9813e6f6586a599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-hfsq6"
2月 27 00:18:12 devops1 kubelet[195090]: E0227 00:18:12.406602 195090 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be984bafe938ac49056fd975a59b55e0e291b154d83457eaf9813e6f6586a599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-hfsq6"
2月 27 00:18:12 devops1 kubelet[195090]: E0227 00:18:12.406638 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6766b7b6bb-hfsq6_kube-system(f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6766b7b6bb-hfsq6_kube-system(f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be984bafe938ac49056fd975a59b55e0e291b154d83457eaf9813e6f6586a599\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6766b7b6bb-hfsq6" podUID="f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c"
2月 27 00:18:12 devops1 kubelet[195090]: I0227 00:18:12.407549 195090 scope.go:117] "RemoveContainer" containerID="f51dfef23bed4072efe83d32e601f9d39b3a64945041541d0dae3261cdb45e34"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.401892 195090 scope.go:117] "RemoveContainer" containerID="79cc084bf6e4e3ef94cf2c7b8c5ad73a42b2c8e29ec29e6e8859343d13e2c891"
2月 27 00:18:13 devops1 kubelet[195090]: E0227 00:18:13.402001 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-d4sh9_kube-flannel(78f74489-557c-47f2-b2d8-1ba738abe735)\"" pod="kube-flannel/kube-flannel-ds-d4sh9" podUID="78f74489-557c-47f2-b2d8-1ba738abe735"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.444335 195090 status_manager.go:890] "Failed to get status for pod" podUID="30894f0bf21bd627cf91a5937602adc4" pod="kube-system/kube-scheduler-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.444980 195090 status_manager.go:890] "Failed to get status for pod" podUID="a80053ec65b8ebe11fae3a4e662d7321" pod="kube-system/etcd-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.445257 195090 status_manager.go:890] "Failed to get status for pod" podUID="3af838e0c48e816ee1104eb5b1c309c7" pod="kube-system/kube-apiserver-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.445438 195090 status_manager.go:890] "Failed to get status for pod" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0" pod="kube-system/kube-proxy-hlh8h" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hlh8h\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.445667 195090 status_manager.go:890] "Failed to get status for pod" podUID="78f74489-557c-47f2-b2d8-1ba738abe735" pod="kube-flannel/kube-flannel-ds-d4sh9" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-d4sh9\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: I0227 00:18:13.445877 195090 status_manager.go:890] "Failed to get status for pod" podUID="c369da3e35962f7763cdf2bb546daed7" pod="kube-system/kube-controller-manager-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:13 devops1 kubelet[195090]: E0227 00:18:13.789100 195090 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/events/kube-proxy-hlh8h.1827cd1d0cbcb551\": dial tcp 172.16.1.19:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-proxy-hlh8h.1827cd1d0cbcb551 kube-system 9343 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-proxy-hlh8h,UID:3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0,APIVersion:v1,ResourceVersion:8575,FieldPath:spec.containers{kube-proxy},},Reason:Killing,Message:Stopping container kube-proxy,Source:EventSource{Component:kubelet,Host:node,},FirstTimestamp:2025-02-26 23:55:05 +0800 CST,LastTimestamp:2025-02-27 00:13:02.385701813 +0800 CST m=+1567.062179114,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node,}"
2月 27 00:18:13 devops1 kubelet[195090]: W0227 00:18:13.797519 195090 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.16.1.19:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=9414": dial tcp 172.16.1.19:6443: connect: connection refused
2月 27 00:18:13 devops1 kubelet[195090]: E0227 00:18:13.797568 195090 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.16.1.19:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=9414\": dial tcp 172.16.1.19:6443: connect: connection refused" logger="UnhandledError"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.401789 195090 scope.go:117] "RemoveContainer" containerID="a374cf0e117bc0559ed9976153208026f330b4002e0cab476bedb8fa7818930e"
2月 27 00:18:14 devops1 kubelet[195090]: E0227 00:18:14.401884 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-node_kube-system(30894f0bf21bd627cf91a5937602adc4)\"" pod="kube-system/kube-scheduler-node" podUID="30894f0bf21bd627cf91a5937602adc4"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.447164 195090 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553b68012b8f80d3af26b736d632232c324f58dc64364f14c9a264770390d4c3"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.447180 195090 scope.go:117] "RemoveContainer" containerID="f51dfef23bed4072efe83d32e601f9d39b3a64945041541d0dae3261cdb45e34"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.447595 195090 status_manager.go:890] "Failed to get status for pod" podUID="78f74489-557c-47f2-b2d8-1ba738abe735" pod="kube-flannel/kube-flannel-ds-d4sh9" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-d4sh9\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.447834 195090 status_manager.go:890] "Failed to get status for pod" podUID="c369da3e35962f7763cdf2bb546daed7" pod="kube-system/kube-controller-manager-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.448038 195090 status_manager.go:890] "Failed to get status for pod" podUID="30894f0bf21bd627cf91a5937602adc4" pod="kube-system/kube-scheduler-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.448418 195090 status_manager.go:890] "Failed to get status for pod" podUID="a80053ec65b8ebe11fae3a4e662d7321" pod="kube-system/etcd-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.448581 195090 status_manager.go:890] "Failed to get status for pod" podUID="3af838e0c48e816ee1104eb5b1c309c7" pod="kube-system/kube-apiserver-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: I0227 00:18:14.448751 195090 status_manager.go:890] "Failed to get status for pod" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0" pod="kube-system/kube-proxy-hlh8h" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hlh8h\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:14 devops1 kubelet[195090]: E0227 00:18:14.540044 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hlh8h_kube-system(3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0)\"" pod="kube-system/kube-proxy-hlh8h" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.385254 195090 status_manager.go:890] "Failed to get status for pod" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0" pod="kube-system/kube-proxy-hlh8h" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hlh8h\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.385499 195090 status_manager.go:890] "Failed to get status for pod" podUID="78f74489-557c-47f2-b2d8-1ba738abe735" pod="kube-flannel/kube-flannel-ds-d4sh9" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-d4sh9\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.385665 195090 status_manager.go:890] "Failed to get status for pod" podUID="c369da3e35962f7763cdf2bb546daed7" pod="kube-system/kube-controller-manager-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.385792 195090 status_manager.go:890] "Failed to get status for pod" podUID="30894f0bf21bd627cf91a5937602adc4" pod="kube-system/kube-scheduler-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.386003 195090 status_manager.go:890] "Failed to get status for pod" podUID="a80053ec65b8ebe11fae3a4e662d7321" pod="kube-system/etcd-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.386222 195090 status_manager.go:890] "Failed to get status for pod" podUID="3af838e0c48e816ee1104eb5b1c309c7" pod="kube-system/kube-apiserver-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.448759 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:15Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"node\": Patch \"https://172.16.1.19:6443/api/v1/nodes/node/status?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.448899 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.449003 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.449158 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.449352 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.449367 195090 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451035 195090 scope.go:117] "RemoveContainer" containerID="1509ba14cabf4edfe2eb598d5d8a334a364e9ed59c572ad5b4096fefc757f267"
2月 27 00:18:15 devops1 kubelet[195090]: E0227 00:18:15.451139 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hlh8h_kube-system(3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0)\"" pod="kube-system/kube-proxy-hlh8h" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451349 195090 status_manager.go:890] "Failed to get status for pod" podUID="78f74489-557c-47f2-b2d8-1ba738abe735" pod="kube-flannel/kube-flannel-ds-d4sh9" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-d4sh9\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451489 195090 status_manager.go:890] "Failed to get status for pod" podUID="c369da3e35962f7763cdf2bb546daed7" pod="kube-system/kube-controller-manager-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451650 195090 status_manager.go:890] "Failed to get status for pod" podUID="30894f0bf21bd627cf91a5937602adc4" pod="kube-system/kube-scheduler-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451826 195090 status_manager.go:890] "Failed to get status for pod" podUID="a80053ec65b8ebe11fae3a4e662d7321" pod="kube-system/etcd-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.451994 195090 status_manager.go:890] "Failed to get status for pod" podUID="3af838e0c48e816ee1104eb5b1c309c7" pod="kube-system/kube-apiserver-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:15 devops1 kubelet[195090]: I0227 00:18:15.452183 195090 status_manager.go:890] "Failed to get status for pod" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0" pod="kube-system/kube-proxy-hlh8h" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hlh8h\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:16 devops1 kubelet[195090]: W0227 00:18:16.164223 195090 reflector.go:569] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=9404": dial tcp 172.16.1.19:6443: connect: connection refused
2月 27 00:18:16 devops1 kubelet[195090]: E0227 00:18:16.164271 195090 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=9404\": dial tcp 172.16.1.19:6443: connect: connection refused" logger="UnhandledError"
2月 27 00:18:16 devops1 kubelet[195090]: I0227 00:18:16.410128 195090 scope.go:117] "RemoveContainer" containerID="ae19255fd8b119144579cd641ccedaae07426c2c97717dc3fde13bcee3e4d07d"
2月 27 00:18:16 devops1 kubelet[195090]: E0227 00:18:16.410241 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-node_kube-system(c369da3e35962f7763cdf2bb546daed7)\"" pod="kube-system/kube-controller-manager-node" podUID="c369da3e35962f7763cdf2bb546daed7"
2月 27 00:18:17 devops1 kubelet[195090]: E0227 00:18:17.408612 195090 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880cf9b77e636c99110e570176b80f40ba83bb1b097d77acef635b17cc095dc5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
2月 27 00:18:17 devops1 kubelet[195090]: E0227 00:18:17.408660 195090 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880cf9b77e636c99110e570176b80f40ba83bb1b097d77acef635b17cc095dc5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-btztr"
2月 27 00:18:17 devops1 kubelet[195090]: E0227 00:18:17.408678 195090 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880cf9b77e636c99110e570176b80f40ba83bb1b097d77acef635b17cc095dc5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-btztr"
2月 27 00:18:17 devops1 kubelet[195090]: E0227 00:18:17.408714 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6766b7b6bb-btztr_kube-system(f0a6e415-1f2f-40e8-96ea-c7d452ab1f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6766b7b6bb-btztr_kube-system(f0a6e415-1f2f-40e8-96ea-c7d452ab1f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"880cf9b77e636c99110e570176b80f40ba83bb1b097d77acef635b17cc095dc5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6766b7b6bb-btztr" podUID="f0a6e415-1f2f-40e8-96ea-c7d452ab1f59"
2月 27 00:18:18 devops1 kubelet[195090]: E0227 00:18:18.346185 195090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.16.1.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused" interval="7s"
2月 27 00:18:22 devops1 kubelet[195090]: W0227 00:18:22.283762 195090 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://172.16.1.19:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=9409": dial tcp 172.16.1.19:6443: connect: connection refused
2月 27 00:18:22 devops1 kubelet[195090]: E0227 00:18:22.283824 195090 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=9409\": dial tcp 172.16.1.19:6443: connect: connection refused" logger="UnhandledError"
2月 27 00:18:23 devops1 kubelet[195090]: E0227 00:18:23.790239 195090 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/events/kube-proxy-hlh8h.1827cd1d0cbcb551\": dial tcp 172.16.1.19:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-proxy-hlh8h.1827cd1d0cbcb551 kube-system 9343 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-proxy-hlh8h,UID:3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0,APIVersion:v1,ResourceVersion:8575,FieldPath:spec.containers{kube-proxy},},Reason:Killing,Message:Stopping container kube-proxy,Source:EventSource{Component:kubelet,Host:node,},FirstTimestamp:2025-02-26 23:55:05 +0800 CST,LastTimestamp:2025-02-27 00:13:02.385701813 +0800 CST m=+1567.062179114,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node,}"
2月 27 00:18:24 devops1 kubelet[195090]: E0227 00:18:24.405497 195090 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6459d43d23b019ffc655de6f4f9cdcbeb8ec1fef9455aa6b954c1e1f467443\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
2月 27 00:18:24 devops1 kubelet[195090]: E0227 00:18:24.405541 195090 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6459d43d23b019ffc655de6f4f9cdcbeb8ec1fef9455aa6b954c1e1f467443\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-hfsq6"
2月 27 00:18:24 devops1 kubelet[195090]: E0227 00:18:24.405557 195090 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6459d43d23b019ffc655de6f4f9cdcbeb8ec1fef9455aa6b954c1e1f467443\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-hfsq6"
2月 27 00:18:24 devops1 kubelet[195090]: E0227 00:18:24.405589 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6766b7b6bb-hfsq6_kube-system(f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6766b7b6bb-hfsq6_kube-system(f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd6459d43d23b019ffc655de6f4f9cdcbeb8ec1fef9455aa6b954c1e1f467443\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6766b7b6bb-hfsq6" podUID="f77e9fe3-4dd6-46a6-9635-c48f3a2dea2c"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.347033 195090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.16.1.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused" interval="7s"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.384642 195090 status_manager.go:890] "Failed to get status for pod" podUID="3af838e0c48e816ee1104eb5b1c309c7" pod="kube-system/kube-apiserver-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.384867 195090 status_manager.go:890] "Failed to get status for pod" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0" pod="kube-system/kube-proxy-hlh8h" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-proxy-hlh8h\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.385061 195090 status_manager.go:890] "Failed to get status for pod" podUID="78f74489-557c-47f2-b2d8-1ba738abe735" pod="kube-flannel/kube-flannel-ds-d4sh9" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-d4sh9\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.385258 195090 status_manager.go:890] "Failed to get status for pod" podUID="c369da3e35962f7763cdf2bb546daed7" pod="kube-system/kube-controller-manager-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.385462 195090 status_manager.go:890] "Failed to get status for pod" podUID="30894f0bf21bd627cf91a5937602adc4" pod="kube-system/kube-scheduler-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: I0227 00:18:25.385639 195090 status_manager.go:890] "Failed to get status for pod" podUID="a80053ec65b8ebe11fae3a4e662d7321" pod="kube-system/etcd-node" err="Get \"https://172.16.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-node\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.547896 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-26T16:18:25Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"node\": Patch \"https://172.16.1.19:6443/api/v1/nodes/node/status?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.548055 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.548211 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.548336 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.548477 195090 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"node\": Get \"https://172.16.1.19:6443/api/v1/nodes/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused"
2月 27 00:18:25 devops1 kubelet[195090]: E0227 00:18:25.548492 195090 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
2月 27 00:18:26 devops1 kubelet[195090]: I0227 00:18:26.385363 195090 scope.go:117] "RemoveContainer" containerID="0a6b1ef4d154c8d365c858d6e41b74bf37af10a41d61a74ad0fd5317e169997e"
2月 27 00:18:26 devops1 kubelet[195090]: I0227 00:18:26.385432 195090 scope.go:117] "RemoveContainer" containerID="79cc084bf6e4e3ef94cf2c7b8c5ad73a42b2c8e29ec29e6e8859343d13e2c891"
2月 27 00:18:26 devops1 kubelet[195090]: E0227 00:18:26.385464 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-node_kube-system(3af838e0c48e816ee1104eb5b1c309c7)\"" pod="kube-system/kube-apiserver-node" podUID="3af838e0c48e816ee1104eb5b1c309c7"
2月 27 00:18:26 devops1 kubelet[195090]: E0227 00:18:26.385517 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-d4sh9_kube-flannel(78f74489-557c-47f2-b2d8-1ba738abe735)\"" pod="kube-flannel/kube-flannel-ds-d4sh9" podUID="78f74489-557c-47f2-b2d8-1ba738abe735"
2月 27 00:18:28 devops1 kubelet[195090]: I0227 00:18:28.385533 195090 scope.go:117] "RemoveContainer" containerID="ae19255fd8b119144579cd641ccedaae07426c2c97717dc3fde13bcee3e4d07d"
2月 27 00:18:28 devops1 kubelet[195090]: E0227 00:18:28.385646 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-node_kube-system(c369da3e35962f7763cdf2bb546daed7)\"" pod="kube-system/kube-controller-manager-node" podUID="c369da3e35962f7763cdf2bb546daed7"
2月 27 00:18:28 devops1 kubelet[195090]: E0227 00:18:28.405869 195090 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfac1e365d86050f1fb6e71879e9de8c2c443fe749e9272fdce70e0e79c3e91c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
2月 27 00:18:28 devops1 kubelet[195090]: E0227 00:18:28.405915 195090 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfac1e365d86050f1fb6e71879e9de8c2c443fe749e9272fdce70e0e79c3e91c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-btztr"
2月 27 00:18:28 devops1 kubelet[195090]: E0227 00:18:28.405932 195090 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfac1e365d86050f1fb6e71879e9de8c2c443fe749e9272fdce70e0e79c3e91c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6766b7b6bb-btztr"
2月 27 00:18:28 devops1 kubelet[195090]: E0227 00:18:28.405964 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6766b7b6bb-btztr_kube-system(f0a6e415-1f2f-40e8-96ea-c7d452ab1f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6766b7b6bb-btztr_kube-system(f0a6e415-1f2f-40e8-96ea-c7d452ab1f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfac1e365d86050f1fb6e71879e9de8c2c443fe749e9272fdce70e0e79c3e91c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6766b7b6bb-btztr" podUID="f0a6e415-1f2f-40e8-96ea-c7d452ab1f59"
2月 27 00:18:29 devops1 kubelet[195090]: I0227 00:18:29.385037 195090 scope.go:117] "RemoveContainer" containerID="a374cf0e117bc0559ed9976153208026f330b4002e0cab476bedb8fa7818930e"
2月 27 00:18:29 devops1 kubelet[195090]: E0227 00:18:29.385168 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-node_kube-system(30894f0bf21bd627cf91a5937602adc4)\"" pod="kube-system/kube-scheduler-node" podUID="30894f0bf21bd627cf91a5937602adc4"
2月 27 00:18:29 devops1 kubelet[195090]: I0227 00:18:29.402532 195090 scope.go:117] "RemoveContainer" containerID="1509ba14cabf4edfe2eb598d5d8a334a364e9ed59c572ad5b4096fefc757f267"
2月 27 00:18:29 devops1 kubelet[195090]: E0227 00:18:29.402621 195090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-hlh8h_kube-system(3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0)\"" pod="kube-system/kube-proxy-hlh8h" podUID="3e2b8b14-0a30-4b38-9fa0-25221b5e6ad0"
2月 27 00:18:32 devops1 kubelet[195090]: E0227 00:18:32.347374 195090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.16.1.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node?timeout=10s\": dial tcp 172.16.1.19:6443: connect: connection refused" interval="7s"
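From the log above, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and kube-flannel are all in CrashLoopBackOff, and the flannel CNI plugin reports that /run/flannel/subnet.env is missing. These are the additional checks I was planning to run (container IDs are placeholders):

# does the flannel-generated CNI env file exist at all?
ls -l /run/flannel/subnet.env
ls -l /etc/cni/net.d/

# read the crashing containers' own logs via the container runtime
crictl ps -a | grep -E 'kube-apiserver|etcd|flannel'
crictl logs <kube-apiserver-container-id>
crictl logs <etcd-container-id>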
Cluster information:
Kubernetes version: 1.32.2
Cloud being used: (put bare-metal if not on a public cloud)
Installation method: apt
Host OS: Debian 12
CNI and version: Flannel, default manifest from https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
CRI and version: containerd (containerd.io 1.7.25)
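One thing I have not verified yet is the containerd cgroup driver. As far as I understand, a kubeadm-managed kubelet defaults to the systemd cgroup driver, while the containerd.io package ships with SystemdCgroup = false, and that mismatch reportedly makes the control-plane containers crash-loop. This is how I intended to check and, if needed, fix it:

# kubeadm/kubelet expect SystemdCgroup = true for the runc runtime
grep -n 'SystemdCgroup' /etc/containerd/config.toml

# if it is false, enable it and restart containerd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd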
This is the kubeadm init config (kubeadm-config.yaml):
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.1.19
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: node
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.32.2
networking:
  podSubnet: "10.244.0.0/16"
proxy: {}
scheduler: {}
imageRepository: registry.aliyuncs.com/google_containers
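Also, I just noticed that imageRepository appears twice in the ClusterConfiguration (registry.k8s.io and registry.aliyuncs.com/google_containers); I am not sure whether that matters. Before re-running init I planned to let kubeadm check the file itself:

# validate the config file against the kubeadm API schema
kubeadm config validate --config kubeadm-config.yaml

# compare with the defaults kubeadm would otherwise use
kubeadm config print init-defaults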