Kubeadm init fails with [kubelet-check] Initial timeout

I am using kubeadm to install a Kubernetes cluster.

When I run kubeadm init, it fails with [kubelet-check] Initial timeout of 40s passed.

Here are the details:

$ sudo kubeadm init --apiserver-advertise-address=10.112.55.6 --pod-network-cidr=172.16.0.0/16 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --v=6
I1225 17:21:32.199300   12963 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
I1225 17:21:32.435079   12963 version.go:182] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
I1225 17:21:33.162619   12963 checks.go:577] validating Kubernetes and kubeadm version
I1225 17:21:33.162846   12963 checks.go:166] validating if the firewall is enabled and active
I1225 17:21:33.185708   12963 checks.go:201] validating availability of port 6443
I1225 17:21:33.186242   12963 checks.go:201] validating availability of port 10259
I1225 17:21:33.186464   12963 checks.go:201] validating availability of port 10257
I1225 17:21:33.186618   12963 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1225 17:21:33.186835   12963 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1225 17:21:33.186958   12963 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1225 17:21:33.187059   12963 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1225 17:21:33.187164   12963 checks.go:432] validating if the connectivity type is via proxy or direct
I1225 17:21:33.187301   12963 checks.go:471] validating http connectivity to first IP address in the CIDR
I1225 17:21:33.187470   12963 checks.go:471] validating http connectivity to first IP address in the CIDR
I1225 17:21:33.187564   12963 checks.go:102] validating the container runtime
I1225 17:21:33.475186   12963 checks.go:128] validating if the "docker" service is enabled and active
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1225 17:21:33.713959   12963 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1225 17:21:33.714071   12963 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1225 17:21:33.714126   12963 checks.go:649] validating whether swap is enabled or not
I1225 17:21:33.714214   12963 checks.go:376] validating the presence of executable conntrack
I1225 17:21:33.714280   12963 checks.go:376] validating the presence of executable ip
I1225 17:21:33.714336   12963 checks.go:376] validating the presence of executable iptables
I1225 17:21:33.714377   12963 checks.go:376] validating the presence of executable mount
I1225 17:21:33.714424   12963 checks.go:376] validating the presence of executable nsenter
I1225 17:21:33.714465   12963 checks.go:376] validating the presence of executable ebtables
I1225 17:21:33.714504   12963 checks.go:376] validating the presence of executable ethtool
I1225 17:21:33.714546   12963 checks.go:376] validating the presence of executable socat
I1225 17:21:33.714584   12963 checks.go:376] validating the presence of executable tc
I1225 17:21:33.714623   12963 checks.go:376] validating the presence of executable touch
I1225 17:21:33.714665   12963 checks.go:520] running all checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
I1225 17:21:33.987177   12963 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1225 17:21:33.987218   12963 checks.go:618] validating kubelet version
I1225 17:21:34.159634   12963 checks.go:128] validating if the "kubelet" service is enabled and active
I1225 17:21:34.198220   12963 checks.go:201] validating availability of port 10250
I1225 17:21:34.198371   12963 checks.go:201] validating availability of port 2379
I1225 17:21:34.198420   12963 checks.go:201] validating availability of port 2380
I1225 17:21:34.198472   12963 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1225 17:21:34.291094   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
I1225 17:21:34.399945   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
I1225 17:21:34.500275   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
I1225 17:21:34.598838   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
I1225 17:21:34.694850   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
I1225 17:21:34.793574   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
I1225 17:21:34.886867   12963 checks.go:839] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1225 17:21:34.887041   12963 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1225 17:21:36.654618   12963 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 10.112.55.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1225 17:21:37.381231   12963 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1225 17:21:37.693587   12963 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1225 17:21:38.087981   12963 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1225 17:21:38.535640   12963 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [10.112.55.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.112.55.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1225 17:21:40.814660   12963 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1225 17:21:41.064910   12963 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1225 17:21:41.610653   12963 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1225 17:21:41.972640   12963 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1225 17:21:42.483821   12963 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1225 17:21:42.862547   12963 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1225 17:21:43.228775   12963 manifests.go:96] [control-plane] getting StaticPodSpecs
I1225 17:21:43.229478   12963 certs.go:474] validating certificate period for CA certificate
I1225 17:21:43.229734   12963 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1225 17:21:43.229778   12963 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1225 17:21:43.229805   12963 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1225 17:21:43.229829   12963 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1225 17:21:43.229856   12963 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1225 17:21:43.229887   12963 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1225 17:21:43.266156   12963 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1225 17:21:43.266285   12963 manifests.go:96] [control-plane] getting StaticPodSpecs
I1225 17:21:43.266934   12963 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1225 17:21:43.266970   12963 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1225 17:21:43.266990   12963 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1225 17:21:43.267009   12963 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1225 17:21:43.267027   12963 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1225 17:21:43.267045   12963 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1225 17:21:43.267064   12963 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1225 17:21:43.267083   12963 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1225 17:21:43.273446   12963 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1225 17:21:43.273524   12963 manifests.go:96] [control-plane] getting StaticPodSpecs
I1225 17:21:43.274170   12963 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1225 17:21:43.275516   12963 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1225 17:21:43.280540   12963 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1225 17:21:43.280656   12963 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I1225 17:21:43.283314   12963 loader.go:379] Config loaded from file:  /etc/kubernetes/admin.conf 

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1225 17:21:43.286511   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:43.787523   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:44.287257   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:44.787238   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:45.287847   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:45.787793   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:46.287736   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:21:46.787863   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
.......
I1225 17:22:20.787816   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:21.287825   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:21.787458   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:22.287273   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:22.787383   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds

[kubelet-check] Initial timeout of 40s passed.
I1225 17:22:23.287900   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:23.787358   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:24.287797   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:24.787828   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:25.288330   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:22:25.788017   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
.......
I1225 17:25:40.287320   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:40.787783   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:41.287783   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:41.787836   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:42.287748   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:42.787906   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:43.287794   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds
I1225 17:25:43.288576   12963 round_trippers.go:445] GET https://10.112.55.6:6443/healthz?timeout=10s  in 0 milliseconds

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:151
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:204
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1374
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:151
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:204
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1374

And this is the kubelet status (from systemctl status kubelet):

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-12-25 17:21:43 CST; 8min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 13178 (kubelet)
    Tasks: 14 (limit: 4915)
   CGroup: /system.slice/kubelet.service
           └─13178 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.463303   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.563495   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.663710   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.764033   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.864314   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:27 master-node kubelet[13178]: E1225 17:30:27.964531   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:28 master-node kubelet[13178]: E1225 17:30:28.064708   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:28 master-node kubelet[13178]: E1225 17:30:28.082499   13178 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.112.55.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-node?timeout=10s": dial tcp 10.112.55.6:6443: connect: connection refused
Dec 25 17:30:28 master-node kubelet[13178]: E1225 17:30:28.164942   13178 kubelet.go:2240] node "master-node" not found
Dec 25 17:30:28 master-node kubelet[13178]: E1225 17:30:28.265594   13178 kubelet.go:2240] node "master-node" not found

All of this shows that the kubelet can't connect to https://10.112.55.6:6443, but 10.112.55.6 is this host's own IP, so I'm very confused.

Can anyone help? Thanks very much.

Also, I have already turned off swap and the firewall:

$ free -m
#               total        used        free      shared  buff/cache   available
# Mem:           5819         829        1549           5        3440        4711
# Swap:             0           0           0
$ sudo ufw status
# Status: inactive
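
For completeness, these are the extra checks I plan to run to see whether the API server ever starts listening (standard ss/docker/curl commands, nothing kubeadm-specific):

$ sudo ss -tlnp | grep 6443                      # is anything listening on the API server port?
$ sudo docker ps -a | grep kube | grep -v pause  # did any control-plane containers start?
$ curl -k https://10.112.55.6:6443/healthz       # probe the health endpoint directly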

Cluster information:

Kubernetes version:

$ kubelet --version
# Kubernetes v1.20.1

Cloud being used: bare-metal
Installation method: kubeadm
Host OS:

$ uname -a
# Linux master-node 5.4.0-59-generic #65~18.04.1-Ubuntu SMP Mon Dec 14 15:59:40 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
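
One more thing I noticed: the preflight output warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. I have not verified that this is the cause of the timeout, but the commonly suggested change from the Kubernetes docs is to switch Docker to the systemd driver and retry (sketch below; note that this overwrites any existing /etc/docker/daemon.json):

$ sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ sudo systemctl restart docker
$ sudo kubeadm reset -f    # clean up the failed init before re-running kubeadm init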

Are there any useful logs from the kube-apiserver container, or from any of the other kube component containers?

There are no containers at all.
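
For reference, that is with the listing command suggested in kubeadm's error output above:

$ sudo docker ps -a | grep kube | grep -v pause
# (no output)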

I am having exactly the same issue. Is there any solution for it?

Hi, I just hit the same issue while deploying a v1.24.0 cluster. Is there a resolution for this?