Kubeadm init error in [wait-control-plane] phase

Hello, I'm setting up my first Kubernetes control-plane (master) node. While running "kubeadm init --pod-network-cidr 192.168.0.0/16 --image-repository=registry.aliyuncs.com/google_containers", the init failed in the [wait-control-plane] phase with the error shown below. My master node's IP is 192.168.31.103, and I wanted the cluster in my test environment to use the 192.168.0.0/16 range as the pod network. I suspect some default setting conflicts with the kubeadm initialization, as the output below shows. I'm new to Kubernetes, so any suggestions on how to fix this would be appreciated. Thanks.
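For context, my host LAN is 192.168.31.0/24, so the pod CIDR I passed (192.168.0.0/16) overlaps the node's own network. Below is a variant I am thinking about trying; it is only a sketch, and 10.244.0.0/16 is just an example range that does not overlap my LAN, not something I have verified yet:

```bash
# Hypothetical re-run with a pod CIDR that does not overlap the host network 192.168.31.0/24.
# 10.244.0.0/16 is an example value; --apiserver-advertise-address pins the API server to the node IP.
kubeadm reset -f
kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.31.103 \
  --image-repository=registry.aliyuncs.com/google_containers
```

The full output from the failed run follows.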

```
[root@k8s manifests]# kubeadm init --pod-network-cidr 192.168.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.31.103]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s localhost] and IPs [192.168.31.103 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s localhost] and IPs [192.168.31.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```
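Based on the hints in that output, here is what I plan to check next. The first two commands are copied from the kubeadm message; the cgroup-driver checks assume Docker is the container runtime, which I have not confirmed on this box:

```bash
# Kubelet status and logs, as suggested by kubeadm
systemctl status kubelet
journalctl -xeu kubelet

# Assuming Docker as the CRI: compare the Docker and kubelet cgroup drivers,
# since a mismatch (cgroupfs vs systemd) is a common reason the kubelet fails to start
docker info 2>/dev/null | grep -i "cgroup driver"
grep -i cgroup /var/lib/kubelet/config.yaml

# List any control-plane containers that crashed on startup
docker ps -a | grep kube | grep -v pause
```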

Cluster information:

Kubernetes version: v1.22.0
Cloud being used: bare metal (VM running on macOS with VMware Fusion)
Installation method: following a knowledge-base article from the internet
Host OS: SUSE 15 SP3
CNI and version: unknown
CRI and version: unknown


So how did you solve it?

How did you fix it? I am facing the same issue.