The connection to the server Master_IP:6443 was refused - did you specify the right host or port?

Hello All,

Has anyone come across this issue after installing Kubernetes 1.30.1?

I'm not sure what I missed here. I have even run kubeadm reset several times.

Has anyone faced this before? Any input is appreciated.
The cluster was initialized with the following command:
kubeadm init --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=Master_IP --cri-socket /run/containerd/containerd.sock
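
For completeness, this is roughly the reset/re-initialize sequence I keep repeating (same flags as above; the CNI/kubeconfig cleanup lines are just what I assume should be cleared between attempts):

kubeadm reset -f --cri-socket /run/containerd/containerd.sock
rm -rf /etc/cni/net.d $HOME/.kube/config    # assumed cleanup between attempts
kubeadm init --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=Master_IP --cri-socket /run/containerd/containerd.sock
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config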

kubectl version
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server Master_IP:6443 was refused - did you specify the right host or port?

ot-master1 $ cat /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
ot-master1 $
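
In case the containerd side matters: as I understand it, the usual way to end up with the config above is to regenerate the defaults, flip only SystemdCgroup, and restart the runtime and kubelet. I'm not 100% sure mine was generated exactly this way, so treat this as a sketch:

containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd kubelet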

kubectl get nodes -v=10
I0601 11:38:36.649526 795967 loader.go:395] Config loaded from file: /root/.kube/config
I0601 11:38:36.650037 795967 round_trippers.go:466] curl -v -XGET -H “Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json” -H “User-Agent: kubectl/v1.30.1 (linux/amd64) kubernetes/6911225” ‘https://Master_IP:6443/api?timeout=32s’
I0601 11:38:36.650302 795967 round_trippers.go:508] HTTP Trace: Dial to tcp:Master_IP:6443 failed: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.650330 795967 round_trippers.go:553] GET https://Master_IP:6443/api?timeout=32s in 0 milliseconds
I0601 11:38:36.650342 795967 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0601 11:38:36.650352 795967 round_trippers.go:577] Response Headers:
E0601 11:38:36.650401 795967 memcache.go:265] couldn’t get current server API group list: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.650414 795967 cached_discovery.go:120] skipped caching discovery info due to Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.650497 795967 round_trippers.go:466] curl -v -XGET -H “Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json” -H “User-Agent: kubectl/v1.30.1 (linux/amd64) kubernetes/6911225” ‘https://Master_IP:6443/api?timeout=32s’
I0601 11:38:36.650676 795967 round_trippers.go:508] HTTP Trace: Dial to tcp:Master_IP:6443 failed: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.650714 795967 round_trippers.go:553] GET https://Master_IP:6443/api?timeout=32s in 0 milliseconds
I0601 11:38:36.650727 795967 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0601 11:38:36.650738 795967 round_trippers.go:577] Response Headers:
E0601 11:38:36.650773 795967 memcache.go:265] couldn’t get current server API group list: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.651887 795967 cached_discovery.go:120] skipped caching discovery info due to Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.651930 795967 shortcut.go:103] Error loading discovery information: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.652026 795967 round_trippers.go:466] curl -v -XGET -H “User-Agent: kubectl/v1.30.1 (linux/amd64) kubernetes/6911225” -H “Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json” ‘https://Master_IP:6443/api?timeout=32s’
I0601 11:38:36.652159 795967 round_trippers.go:508] HTTP Trace: Dial to tcp:Master_IP:6443 failed: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.652178 795967 round_trippers.go:553] GET https://Master_IP:6443/api?timeout=32s in 0 milliseconds
I0601 11:38:36.652188 795967 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0601 11:38:36.652198 795967 round_trippers.go:577] Response Headers:
E0601 11:38:36.652226 795967 memcache.go:265] couldn’t get current server API group list: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.652233 795967 cached_discovery.go:120] skipped caching discovery info due to Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.652289 795967 round_trippers.go:466] curl -v -XGET -H “Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json” -H “User-Agent: kubectl/v1.30.1 (linux/amd64) kubernetes/6911225” ‘https://Master_IP:6443/api?timeout=32s’
I0601 11:38:36.652412 795967 round_trippers.go:508] HTTP Trace: Dial to tcp:Master_IP:6443 failed: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.652447 795967 round_trippers.go:553] GET https://Master_IP:6443/api?timeout=32s in 0 milliseconds
I0601 11:38:36.652459 795967 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0601 11:38:36.652468 795967 round_trippers.go:577] Response Headers:
E0601 11:38:36.652502 795967 memcache.go:265] couldn’t get current server API group list: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.653592 795967 cached_discovery.go:120] skipped caching discovery info due to Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.653691 795967 round_trippers.go:466] curl -v -XGET -H “User-Agent: kubectl/v1.30.1 (linux/amd64) kubernetes/6911225” -H “Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json” ‘https://Master_IP:6443/api?timeout=32s’
I0601 11:38:36.653841 795967 round_trippers.go:508] HTTP Trace: Dial to tcp:Master_IP:6443 failed: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.653872 795967 round_trippers.go:553] GET https://Master_IP:6443/api?timeout=32s in 0 milliseconds
I0601 11:38:36.653894 795967 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0601 11:38:36.653903 795967 round_trippers.go:577] Response Headers:
E0601 11:38:36.653939 795967 memcache.go:265] couldn’t get current server API group list: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.653955 795967 cached_discovery.go:120] skipped caching discovery info due to Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
I0601 11:38:36.653998 795967 helpers.go:264] Connection error: Get https://Master_IP:6443/api?timeout=32s: dial tcp Master_IP:6443: connect: connection refused
The connection to the server Master_IP:6443 was refused - did you specify the right host or port?

While checking the kubelet (I even restarted it several times), it didn't help:
ot-master1 $ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2024-05-29 19:51:15 CDT; 4 days ago
Docs: Kubernetes Documentation | Kubernetes
Main PID: 261048 (kubelet)
Tasks: 17 (limit: 38415)
Memory: 51.5M
CPU: 1h 6min 1.020s
CGroup: /system.slice/kubelet.service
└─261048 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9

Jun 02 22:06:52 ot-master1.internal.local kubelet[261048]: E0602 22:06:52.847762 261048 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-vtthh_kube-system(c53eac75-2bcd-480d-a6b6-cbf635217113)"” pod=“kube-system/kube-proxy-vtthh” po>
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.915602 261048 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?resourceVersion=0&timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.915811 261048 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.915923 261048 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.916140 261048 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.916329 261048 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.916348 261048 kubelet_node_status.go:531] “Unable to update node status” err=“update node status exceeds retry count”
Jun 02 22:06:53 ot-master1.internal.local kubelet[261048]: E0602 22:06:53.951079 261048 controller.go:145] “Failed to ensure lease exists, will retry” err=“Get "https://Master_IP:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused” interval=“7s”
Jun 02 22:06:54 ot-master1.internal.local kubelet[261048]: I0602 22:06:54.846684 261048 scope.go:117] “RemoveContainer” containerID=“5907d016cfb243f11019b26d636b13d3f1f1d4caa3fcfe464021f743d4087383”
Jun 02 22:06:54 ot-master1.internal.local kubelet[261048]: E0602 22:06:54.846915 261048 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-ot-master1.internal.local_kube-system(7e4870baad9d42588bd86d4db89bbc3a)"” pod="kube>

ot-master1 $ crictl ps -a
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
a983facbb5bc1 3861cfcd7c04c 1 second ago Running etcd 3903 8d5e41e38f052 etcd-ot-master1.internal.local
4e9b8fe986eb9 25a1387cdab82 21 seconds ago Exited kube-controller-manager 1144 28d528679ff60 kube-controller-manager-ot-master1.internal.local
e06e9abb29332 747097150317f 2 minutes ago Exited kube-proxy 947 fc5c6f557c735 kube-proxy-vtthh
a19c34b9cb64d 3861cfcd7c04c 3 minutes ago Exited etcd 3902 497039ab0fd10 etcd-ot-master1.internal.local
5907d016cfb24 a52dc94f0a912 4 minutes ago Exited kube-scheduler 3815 17c84d5c165cf kube-scheduler-ot-master1.internal.local
e32430f6860f9 91be940803172 5 minutes ago Exited kube-apiserver 3598 9145c833b443c kube-apiserver-ot-master1.internal.local
ot-master1 $
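
To get rid of the deprecated-endpoint warnings above, my understanding is that crictl can be pointed at containerd explicitly (this writes /etc/crictl.yaml); listing it here mainly so the later output is cleaner:

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
crictl config --set image-endpoint=unix:///run/containerd/containerd.sock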

ot-master1 $ crictl logs e32430f6860f9
WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
I0603 03:02:32.967767 1 options.go:221] external host was not specified, using Master_IP
I0603 03:02:32.968407 1 server.go:148] Version: v1.30.1
I0603 03:02:32.968488 1 server.go:150] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
I0603 03:02:33.629837 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
I0603 03:02:33.632482 1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0603 03:02:33.634263 1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0603 03:02:33.634277 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0603 03:02:33.634470 1 instance.go:299] Using reconciler: lease
I0603 03:02:33.657645 1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0603 03:02:33.657663 1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0603 03:02:33.792328 1 handler.go:286] Adding GroupVersion v1 to ResourceManager
I0603 03:02:33.792720 1 instance.go:696] API group “internal.apiserver.k8s.io” is not enabled, skipping.
I0603 03:02:33.933622 1 instance.go:696] API group “storagemigration.k8s.io” is not enabled, skipping.
I0603 03:02:34.057807 1 instance.go:696] API group “resource.k8s.io” is not enabled, skipping.
I0603 03:02:34.070687 1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0603 03:02:34.070714 1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.070721 1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.071041 1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0603 03:02:34.071053 1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0603 03:02:34.071685 1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
I0603 03:02:34.072164 1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
W0603 03:02:34.072178 1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
W0603 03:02:34.072183 1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
I0603 03:02:34.073187 1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
W0603 03:02:34.073205 1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
I0603 03:02:34.073803 1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
W0603 03:02:34.073819 1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.073824 1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.074233 1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
W0603 03:02:34.074245 1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.074280 1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
I0603 03:02:34.074674 1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
I0603 03:02:34.075775 1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
W0603 03:02:34.075789 1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.075794 1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.076090 1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
W0603 03:02:34.076101 1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.076105 1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.076670 1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
W0603 03:02:34.076683 1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
I0603 03:02:34.077878 1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
W0603 03:02:34.077892 1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.077897 1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.078217 1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
W0603 03:02:34.078230 1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.078234 1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.079820 1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
W0603 03:02:34.079838 1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.079842 1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.080958 1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
I0603 03:02:34.082005 1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
W0603 03:02:34.082021 1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
W0603 03:02:34.082042 1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
I0603 03:02:34.085267 1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
W0603 03:02:34.085284 1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
W0603 03:02:34.085307 1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
I0603 03:02:34.087311 1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
W0603 03:02:34.087330 1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0603 03:02:34.087352 1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0603 03:02:34.087879 1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
W0603 03:02:34.087902 1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0603 03:02:34.097441 1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
W0603 03:02:34.097455 1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0603 03:02:34.374456 1 dynamic_cafile_content.go:157] “Starting controller” name=“request-header::/etc/kubernetes/pki/front-proxy-ca.crt”
I0603 03:02:34.374501 1 dynamic_cafile_content.go:157] “Starting controller” name=“client-ca-bundle::/etc/kubernetes/pki/ca.crt”
I0603 03:02:34.374769 1 dynamic_serving_content.go:132] “Starting controller” name=“serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key”
I0603 03:02:34.375033 1 secure_serving.go:213] Serving securely on [::]:6443
I0603 03:02:34.375088 1 tlsconfig.go:240] “Starting DynamicServingCertificateController”
I0603 03:02:34.375151 1 controller.go:78] Starting OpenAPI AggregationController
I0603 03:02:34.375181 1 dynamic_serving_content.go:132] “Starting controller” name=“aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key”
I0603 03:02:34.375195 1 available_controller.go:423] Starting AvailableConditionController
I0603 03:02:34.375205 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0603 03:02:34.375267 1 apf_controller.go:374] Starting API Priority and Fairness config controller
I0603 03:02:34.375273 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0603 03:02:34.375355 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0603 03:02:34.375411 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0603 03:02:34.375610 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0603 03:02:34.375674 1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
I0603 03:02:34.375995 1 customresource_discovery_controller.go:289] Starting DiscoveryController
I0603 03:02:34.376151 1 controller.go:116] Starting legacy_token_tracking_controller
I0603 03:02:34.376216 1 shared_informer.go:313] Waiting for caches to sync for configmaps
I0603 03:02:34.376287 1 aggregator.go:163] waiting for initial CRD sync…
I0603 03:02:34.376392 1 system_namespaces_controller.go:67] Starting system namespaces controller
I0603 03:02:34.376474 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0603 03:02:34.376555 1 dynamic_cafile_content.go:157] “Starting controller” name=“client-ca-bundle::/etc/kubernetes/pki/ca.crt”
I0603 03:02:34.376678 1 dynamic_cafile_content.go:157] “Starting controller” name=“request-header::/etc/kubernetes/pki/front-proxy-ca.crt”
I0603 03:02:34.376984 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0603 03:02:34.377017 1 controller.go:139] Starting OpenAPI controller
I0603 03:02:34.377123 1 crd_finalizer.go:266] Starting CRDFinalizer
I0603 03:02:34.377636 1 controller.go:87] Starting OpenAPI V3 controller
I0603 03:02:34.377665 1 naming_controller.go:291] Starting NamingConditionController
I0603 03:02:34.377694 1 establishing_controller.go:76] Starting EstablishingController
I0603 03:02:34.377713 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0603 03:02:34.390422 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0603 03:02:34.390440 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
I0603 03:02:34.430041 1 shared_informer.go:320] Caches are synced for node_authorizer
I0603 03:02:34.433278 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0603 03:02:34.433295 1 policy_source.go:224] refreshing policies
I0603 03:02:34.475605 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0603 03:02:34.475631 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0603 03:02:34.475642 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0603 03:02:34.475729 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0603 03:02:34.475861 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0603 03:02:34.476382 1 shared_informer.go:320] Caches are synced for configmaps
I0603 03:02:34.479739 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0603 03:02:34.490818 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0603 03:02:34.490846 1 aggregator.go:165] initial CRD sync complete…
I0603 03:02:34.490852 1 autoregister_controller.go:141] Starting autoregister controller
I0603 03:02:34.490858 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0603 03:02:34.490863 1 cache.go:39] Caches are synced for autoregister controller
I0603 03:02:34.503950 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0603 03:02:35.378607 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0603 03:02:35.585354 1 lease.go:265] Resetting endpoints for master service “kubernetes” to [Master_IP]
I0603 03:02:35.586406 1 controller.go:615] quota admission added evaluator for: endpoints
I0603 03:02:35.589649 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
E0603 03:02:57.793100 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:“context canceled”}: context canceled
E0603 03:02:57.793158 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0603 03:02:57.794763 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:“http: Handler timeout”}: http: Handler timeout
E0603 03:02:57.794797 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0603 03:02:57.796007 1 timeout.go:142] post-timeout activity - time-elapsed: 3.158512ms, GET “/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication” result:
E0603 03:02:57.838464 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
E0603 03:02:57.838541 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
E0603 03:02:57.838581 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
W0603 03:02:57.839111 1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839130 1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839170 1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839173 1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839217 1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839244 1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839268 1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839291 1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839304 1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839331 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839248 1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839371 1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839397 1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
E0603 03:02:57.839425 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
W0603 03:02:57.839519 1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
E0603 03:02:57.839582 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
W0603 03:02:57.839602 1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839636 1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839658 1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839692 1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839746 1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839780 1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839810 1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
E0603 03:02:57.839511 1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
W0603 03:02:57.839840 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839898 1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839931 1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.839969 1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840004 1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840039 1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840199 1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840237 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840251 1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840269 1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840285 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840290 1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
W0603 03:02:57.840300 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: “127.0.0.1:2379”, ServerName: “127.0.0.1:2379”, }. Err: connection error: desc = “transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused”
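
Since the apiserver exits right after it loses etcd on 127.0.0.1:2379, my next step was going to be checking the etcd container itself. The container ID is from the crictl ps output above; the etcdctl flags are my assumption of the kubeadm default cert paths (and assume etcdctl is installed on the host):

crictl logs a983facbb5bc1
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health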

ot-master1 $ kubelet
I0602 22:11:51.959054 1075411 server.go:484] “Kubelet version” kubeletVersion=“v1.30.1”
I0602 22:11:51.959120 1075411 server.go:486] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
I0602 22:11:51.959383 1075411 server.go:647] “Standalone mode, no API client”
I0602 22:11:51.973005 1075411 server.go:535] “No api server defined - no events will be sent to API server”
I0602 22:11:51.973034 1075411 server.go:742] “–cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /”
I0602 22:11:51.973352 1075411 container_manager_linux.go:265] “Container manager verified user specified cgroup-root exists” cgroupRoot=
I0602 22:11:51.973400 1075411 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ot-master1.internal.local","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
I0602 22:11:51.973617 1075411 topology_manager.go:138] “Creating topology manager with none policy”
I0602 22:11:51.973627 1075411 container_manager_linux.go:301] “Creating device plugin manager”
I0602 22:11:51.973674 1075411 state_mem.go:36] “Initialized new in-memory state store”
I0602 22:11:51.973759 1075411 kubelet.go:406] “Kubelet is running in standalone mode, will skip API server sync”
I0602 22:11:51.974718 1075411 kuberuntime_manager.go:261] “Container runtime initialized” containerRuntime=“containerd” version=“1.6.31” apiVersion=“v1”
I0602 22:11:51.974953 1075411 kubelet.go:815] “Not starting ClusterTrustBundle informer because we are in static kubelet mode”
I0602 22:11:51.974968 1075411 volume_host.go:77] “KubeClient is nil. Skip initialization of CSIDriverLister”
W0602 22:11:51.975186 1075411 csi_plugin.go:202] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
W0602 22:11:51.975203 1075411 csi_plugin.go:279] Skipping CSINode initialization, kubelet running in standalone mode
I0602 22:11:51.975500 1075411 server.go:1264] “Started kubelet”
I0602 22:11:51.975887 1075411 kubelet.go:1615] “No API server defined - no node status update will be sent”
I0602 22:11:51.976486 1075411 server.go:195] “Starting to listen read-only” address=“0.0.0.0” port=10255
I0602 22:11:51.976534 1075411 server.go:163] “Starting to listen” address=“0.0.0.0” port=10250
I0602 22:11:51.976608 1075411 ratelimit.go:55] “Setting rate limiting for endpoint” service=“podresources” qps=100 burstTokens=10
I0602 22:11:51.976865 1075411 server.go:227] “Starting to serve the podresources API” endpoint=“unix:/var/lib/kubelet/pod-resources/kubelet.sock”
E0602 22:11:51.976845 1075411 server.go:884] “Failed to start healthz server” err=“listen tcp 127.0.0.1:10248: bind: address already in use”
I0602 22:11:51.976993 1075411 fs_resource_analyzer.go:67] “Starting FS ResourceAnalyzer”
I0602 22:11:51.978852 1075411 volume_manager.go:291] “Starting Kubelet Volume Manager”
I0602 22:11:51.979572 1075411 reconciler.go:26] “Reconciler: start to sync state”
I0602 22:11:51.979824 1075411 desired_state_of_world_populator.go:149] “Desired state populator starts to run”
I0602 22:11:51.981397 1075411 server.go:455] “Adding debug handlers to kubelet server”
E0602 22:11:51.982522 1075411 server.go:180] “Failed to listen and serve” err=“listen tcp 0.0.0.0:10250: bind: address already in use”
ot-master1 $
ot-master1 $
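
I realize the manual kubelet run above only fails because the systemd-managed kubelet (pid 1075113) already owns ports 10248/10250, so the service logs are probably the right place to look instead:

systemctl is-active kubelet
journalctl -u kubelet --no-pager --since "10 minutes ago" | tail -n 50
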
ot-master1 $ ss -tunlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:53451 0.0.0.0:* users:((“avahi-daemon”,pid=775,fd=14))
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:* users:((“avahi-daemon”,pid=775,fd=12))
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:((“systemd-resolve”,pid=661,fd=13))
udp UNCONN 0 0 0.0.0.0:631 0.0.0.0:* users:((“cups-browsed”,pid=896760,fd=7))
udp UNCONN 0 0 [::]:53308 [::]:* users:((“avahi-daemon”,pid=775,fd=15))
udp UNCONN 0 0 [::]:5353 [::]:* users:((“avahi-daemon”,pid=775,fd=13))
tcp LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:((“kubelet”,pid=1075113,fd=16))
tcp LISTEN 0 4096 127.0.0.1:43903 0.0.0.0:* users:((“containerd”,pid=882,fd=15))
tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:((“systemd-resolve”,pid=661,fd=14))
tcp LISTEN 0 128 127.0.0.1:631 0.0.0.0:* users:((“cupsd”,pid=896756,fd=7))
tcp LISTEN 0 4096 127.0.0.1:2381 0.0.0.0:* users:((“etcd”,pid=1076052,fd=14))
tcp LISTEN 0 4096 127.0.0.1:2379 0.0.0.0:* users:((“etcd”,pid=1076052,fd=8))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:((“sshd”,pid=901,fd=3))
tcp LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:((“sshd”,pid=1074015,fd=9))
tcp LISTEN 0 4096 Master_IP:2379 0.0.0.0:* users:((“etcd”,pid=1076052,fd=9))
tcp LISTEN 0 4096 Master_IP:2380 0.0.0.0:* users:((“etcd”,pid=1076052,fd=7))
tcp LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=1075113,fd=21))
tcp LISTEN 0 128 [::1]:631 [::]:* users:(("cupsd",pid=896756,fd=6))
tcp LISTEN 0 128 [::]:22 [::]:* users:((“sshd”,pid=901,fd=4))
tcp LISTEN 0 128 [::1]:6010 [::]:* users:((“sshd”,pid=1074015,fd=7))
ot-master1 $
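
The listener table above also shows nothing bound on 6443 (only etcd on 2379/2380), which matches the crash loop. Something like the following should catch the short window when 6443 does come up:

watch -n 2 "ss -tlnp | grep -E ':6443|:2379'"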

ot-master1 $ journalctl -xeu kubelet
Jun 02 22:26:55 ot-master1.internal.local kubelet[1075113]: I0602 22:26:55.681091 1075113 scope.go:117] “RemoveContainer” containerID=“79e396037920c5f3ea83d1cd242b163ad3f9da8296030be20b870b2b12aba2c9”
Jun 02 22:26:55 ot-master1.internal.local kubelet[1075113]: E0602 22:26:55.681326 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-ot-master1.internal.local_kube-system(7e4870baad9d42588bd86d4db89bbc3a)"” pod="ku>
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: I0602 22:26:59.319447 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“2789fb1c6262f964cfaed606401ed957” pod=“kube-system/kube-controller-manager-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ot-master1.internal.local": dia>
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: I0602 22:26:59.319744 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“7e4870baad9d42588bd86d4db89bbc3a” pod=“kube-system/kube-scheduler-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: I0602 22:26:59.319943 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“3e69149ae1c3de443deef213674d252b” pod=“kube-system/etcd-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/etcd-ot-master1.internal.local": dial tcp Master_IP:6443: connect: conn>
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: I0602 22:26:59.320100 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“011f38fe744a27abdd87dd3117499aae” pod=“kube-system/kube-apiserver-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: I0602 22:26:59.320258 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“c53eac75-2bcd-480d-a6b6-cbf635217113” pod=“kube-system/kube-proxy-vtthh” err=“Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-proxy-vtthh": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:26:59 ot-master1.internal.local kubelet[1075113]: E0602 22:26:59.587538 1075113 kubelet.go:2900] “Container runtime network not ready” networkReady=“NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized”
Jun 02 22:27:01 ot-master1.internal.local kubelet[1075113]: W0602 22:27:01.422695 1075113 reflector.go:547] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get “https://Master_IP:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dot-master1.internal.local&resourceVersion=24423”: dial tcp Master_IP:6443: connect: connection refused
Jun 02 22:27:01 ot-master1.internal.local kubelet[1075113]: E0602 22:27:01.422754 1075113 reflector.go:150] pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get “https://Master_IP:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dot-master1.internal.local&resourceVersion=24423”: dial tcp Master_IP:6443: connect: connection refused
Jun 02 22:27:02 ot-master1.internal.local kubelet[1075113]: E0602 22:27:02.461430 1075113 controller.go:145] “Failed to ensure lease exists, will retry” err=“Get "https://Master_IP:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused” interval=“7s”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.138317 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?resourceVersion=0&timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.138573 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.138804 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.138991 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.139133 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:03 ot-master1.internal.local kubelet[1075113]: E0602 22:27:03.139149 1075113 kubelet_node_status.go:531] “Unable to update node status” err=“update node status exceeds retry count”
Jun 02 22:27:04 ot-master1.internal.local kubelet[1075113]: I0602 22:27:04.318702 1075113 scope.go:117] “RemoveContainer” containerID=“dde249d03ce644d531bb178db30b7e1a84dc4d79f79cf5e0d5832f72b9d03884”
Jun 02 22:27:04 ot-master1.internal.local kubelet[1075113]: E0602 22:27:04.319085 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-ot-master1.internal.local_kube-system(011f38fe744a27abdd87dd3117499aae)"” pod="kub>
Jun 02 22:27:04 ot-master1.internal.local kubelet[1075113]: E0602 22:27:04.589337 1075113 kubelet.go:2900] “Container runtime network not ready” networkReady=“NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized”
Jun 02 22:27:06 ot-master1.internal.local kubelet[1075113]: I0602 22:27:06.336142 1075113 scope.go:117] “RemoveContainer” containerID=“5bae8493a82c4397a0812da2614c9f35604d52ea01a23d507a895566b49afe78”
Jun 02 22:27:06 ot-master1.internal.local kubelet[1075113]: E0602 22:27:06.336358 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-vtthh_kube-system(c53eac75-2bcd-480d-a6b6-cbf635217113)"” pod=“kube-system/kube-proxy-vtthh” p>
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: I0602 22:27:08.340190 1075113 scope.go:117] “RemoveContainer” containerID=“79e396037920c5f3ea83d1cd242b163ad3f9da8296030be20b870b2b12aba2c9”
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: E0602 22:27:08.340850 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-ot-master1.internal.local_kube-system(7e4870baad9d42588bd86d4db89bbc3a)"” pod="ku>
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: I0602 22:27:08.341142 1075113 scope.go:117] “RemoveContainer” containerID=“7aa45d44e92ea5d438b53120bb3ac6eed7734ba2c79cf3ca94fa5e81c38e9588”
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: E0602 22:27:08.341574 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ot-master1.internal.local_kube-system(2789fb1c6262f964cf>
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: W0602 22:27:08.474124 1075113 reflector.go:547] object-“kube-system”/“kube-proxy”: failed to list *v1.ConfigMap: Get “https://Master_IP:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=24430”: dial tcp Master_IP:6443: connect: connection refused
Jun 02 22:27:08 ot-master1.internal.local kubelet[1075113]: E0602 22:27:08.474199 1075113 reflector.go:150] object-“kube-system”/“kube-proxy”: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get “https://Master_IP:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=24430”: dial tcp Master_IP:6443: connect: conn>
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: I0602 22:27:09.318875 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“2789fb1c6262f964cfaed606401ed957” pod=“kube-system/kube-controller-manager-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ot-master1.internal.local": dia>
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: I0602 22:27:09.319015 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“7e4870baad9d42588bd86d4db89bbc3a” pod=“kube-system/kube-scheduler-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: I0602 22:27:09.319107 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“3e69149ae1c3de443deef213674d252b” pod=“kube-system/etcd-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/etcd-ot-master1.internal.local": dial tcp Master_IP:6443: connect: conn>
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: I0602 22:27:09.319196 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“011f38fe744a27abdd87dd3117499aae” pod=“kube-system/kube-apiserver-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: I0602 22:27:09.319281 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“c53eac75-2bcd-480d-a6b6-cbf635217113” pod=“kube-system/kube-proxy-vtthh” err=“Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-proxy-vtthh": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: E0602 22:27:09.462239 1075113 controller.go:145] “Failed to ensure lease exists, will retry” err=“Get "https://Master_IP:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused” interval=“7s”
Jun 02 22:27:09 ot-master1.internal.local kubelet[1075113]: E0602 22:27:09.590723 1075113 kubelet.go:2900] “Container runtime network not ready” networkReady=“NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.216042 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?resourceVersion=0&timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.216688 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.216888 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.217029 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.217225 1075113 kubelet_node_status.go:544] “Error updating node status, will retry” err=“error getting node "ot-master1.internal.local": Get "https://Master_IP:6443/api/v1/nodes/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:13 ot-master1.internal.local kubelet[1075113]: E0602 22:27:13.217247 1075113 kubelet_node_status.go:531] “Unable to update node status” err=“update node status exceeds retry count”
Jun 02 22:27:14 ot-master1.internal.local kubelet[1075113]: E0602 22:27:14.591699 1075113 kubelet.go:2900] “Container runtime network not ready” networkReady=“NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized”
Jun 02 22:27:16 ot-master1.internal.local kubelet[1075113]: I0602 22:27:16.319236 1075113 scope.go:117] “RemoveContainer” containerID=“dde249d03ce644d531bb178db30b7e1a84dc4d79f79cf5e0d5832f72b9d03884”
Jun 02 22:27:16 ot-master1.internal.local kubelet[1075113]: E0602 22:27:16.319606 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-ot-master1.internal.local_kube-system(011f38fe744a27abdd87dd3117499aae)"” pod="kub>
Jun 02 22:27:16 ot-master1.internal.local kubelet[1075113]: E0602 22:27:16.463079 1075113 controller.go:145] “Failed to ensure lease exists, will retry” err=“Get "https://Master_IP:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ot-master1.internal.local?timeout=10s": dial tcp Master_IP:6443: connect: connection refused” interval=“7s”
Jun 02 22:27:17 ot-master1.internal.local kubelet[1075113]: I0602 22:27:17.343777 1075113 scope.go:117] “RemoveContainer” containerID=“5bae8493a82c4397a0812da2614c9f35604d52ea01a23d507a895566b49afe78”
Jun 02 22:27:17 ot-master1.internal.local kubelet[1075113]: E0602 22:27:17.344204 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-vtthh_kube-system(c53eac75-2bcd-480d-a6b6-cbf635217113)"” pod=“kube-system/kube-proxy-vtthh” p>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.321114 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“7e4870baad9d42588bd86d4db89bbc3a” pod=“kube-system/kube-scheduler-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.321808 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“3e69149ae1c3de443deef213674d252b” pod=“kube-system/etcd-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/etcd-ot-master1.internal.local": dial tcp Master_IP:6443: connect: conn>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.322088 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“011f38fe744a27abdd87dd3117499aae” pod=“kube-system/kube-apiserver-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ot-master1.internal.local": dial tcp Master_IP>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.322271 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“c53eac75-2bcd-480d-a6b6-cbf635217113” pod=“kube-system/kube-proxy-vtthh” err=“Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-proxy-vtthh": dial tcp Master_IP:6443: connect: connection refused”
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.322525 1075113 status_manager.go:853] “Failed to get status for pod” podUID=“2789fb1c6262f964cfaed606401ed957” pod=“kube-system/kube-controller-manager-ot-master1.internal.local” err="Get "https://Master_IP:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ot-master1.internal.local": dia>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: I0602 22:27:19.350902 1075113 scope.go:117] “RemoveContainer” containerID=“79e396037920c5f3ea83d1cd242b163ad3f9da8296030be20b870b2b12aba2c9”
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: E0602 22:27:19.351133 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err=“failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-ot-master1.internal.local_kube-system(7e4870baad9d42588bd86d4db89bbc3a)"” pod="ku>
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: W0602 22:27:19.377251 1075113 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get “https://Master_IP:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=24424”: dial tcp Master_IP:6443: connect: connection refused
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: E0602 22:27:19.377307 1075113 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get “https://Master_IP:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=24424”: dial tcp Master_IP:6443: connect: connection refused
Jun 02 22:27:19 ot-master1.internal.local kubelet[1075113]: E0602 22:27:19.593067 1075113 kubelet.go:2900] “Container runtime network not ready” networkReady=“NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized”
Jun 02 22:27:22 ot-master1.internal.local kubelet[1075113]: I0602 22:27:22.319151 1075113 scope.go:117] “RemoveContainer” containerID=“7aa45d44e92ea5d438b53120bb3ac6eed7734ba2c79cf3ca94fa5e81c38e9588”
Jun 02 22:27:22 ot-master1.internal.local kubelet[1075113]: E0602 22:27:22.319465 1075113 pod_workers.go:1298] “Error syncing pod, skipping” err="failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ot-master1.internal.local_kube-system(2789fb1c6262f964cf>

If I perform the steps below on all the workers and the master, I can connect to the API server, but after a few seconds the API becomes unreachable again (see the firewall check sketch after the commands):

iptables -L
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -L
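
Since a plain flush only helps for a few seconds, my guess is that some firewall manager keeps re-applying rules. I haven't confirmed which one, so this is only the check I intend to run next (ufw/firewalld/nftables are just the usual suspects):

systemctl is-active ufw firewalld
ufw status verbose
nft list ruleset | head -n 50
iptables -S | head -n 50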

Thank you for all your support/time!