Waiting for the API server to be healthy couldn't initialize a Kubernetes cluster

I am setting up kubeadm on RHEL 8 VMs on-prem using Ansible playbooks. However, when I run `kubeadm init`, I get the following error:

[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

Error logs from the kubelet:

I0820 01:25:31.562757 3179613 kubelet.go:402] "Kubelet is running in standalone mode, will skip"
I0820 01:25:31.564391 3179613 kubelet.go:1618] "No API server defined - no node status update"
I0820 01:25:31.563669 3179613 volume_host.go:77] "KubeClient is nil. Skip initialization of CSID"
E0820 01:25:31.639179 3179613 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down]"
I0820 01:25:31.574185 3179613 factory.go:219] Registration of the crio container factory failed:
E0820 01:25:31.571963 3179613 kubelet.go:1462] "Image garbage collection failed once. Stats init"

Cluster information:

Kubernetes version: 1.29.7
Cloud being used: on-prem VMs
Installation method: Ansible playbooks
Host OS: RHEL 8
CNI and version: Flannel
CRI and version: containerd 1.6.32

Kubeadm output logs

I0820 01:20:15.455571 3178467 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0820 01:20:15.455766 3178467 interface.go:432] Looking for default routes with IPv4 addresses
I0820 01:20:15.455773 3178467 interface.go:437] Default route transits interface "ens192"
I0820 01:20:15.455893 3178467 interface.go:209] Interface ens192 is up
I0820 01:20:15.455930 3178467 interface.go:257] Interface "ens192" has 1 addresses :[10.179.193.75/24].
I0820 01:20:15.455938 3178467 interface.go:224] Checking addr  10.179.193.75/24.
I0820 01:20:15.455944 3178467 interface.go:231] IP found 10.179.193.75
I0820 01:20:15.455957 3178467 interface.go:263] Found valid IPv4 address 10.179.193.75 for interface "ens192".
I0820 01:20:15.455963 3178467 interface.go:443] Found active IP 10.179.193.75 
I0820 01:20:15.455987 3178467 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0820 01:20:15.466329 3178467 checks.go:563] validating Kubernetes and kubeadm version
I0820 01:20:15.466365 3178467 checks.go:168] validating if the firewall is enabled and active
I0820 01:20:15.489790 3178467 checks.go:203] validating availability of port 6443
I0820 01:20:15.490423 3178467 checks.go:203] validating availability of port 10259
I0820 01:20:15.490522 3178467 checks.go:203] validating availability of port 10257
I0820 01:20:15.490630 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0820 01:20:15.490675 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0820 01:20:15.490698 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0820 01:20:15.490714 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0820 01:20:15.490733 3178467 checks.go:430] validating if the connectivity type is via proxy or direct
I0820 01:20:15.490838 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490881 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490909 3178467 checks.go:104] validating the container runtime
I0820 01:20:15.553517 3178467 checks.go:639] validating whether swap is enabled or not
I0820 01:20:15.553828 3178467 checks.go:370] validating the presence of executable crictl
I0820 01:20:15.553902 3178467 checks.go:370] validating the presence of executable conntrack
I0820 01:20:15.554071 3178467 checks.go:370] validating the presence of executable ip
I0820 01:20:15.554108 3178467 checks.go:370] validating the presence of executable iptables
I0820 01:20:15.554154 3178467 checks.go:370] validating the presence of executable mount
I0820 01:20:15.554327 3178467 checks.go:370] validating the presence of executable nsenter
I0820 01:20:15.554373 3178467 checks.go:370] validating the presence of executable ebtables
I0820 01:20:15.554405 3178467 checks.go:370] validating the presence of executable ethtool
I0820 01:20:15.554429 3178467 checks.go:370] validating the presence of executable socat
I0820 01:20:15.554461 3178467 checks.go:370] validating the presence of executable tc
I0820 01:20:15.554486 3178467 checks.go:370] validating the presence of executable touch
I0820 01:20:15.554520 3178467 checks.go:516] running all checks
I0820 01:20:15.568320 3178467 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0820 01:20:15.568659 3178467 checks.go:605] validating kubelet version
I0820 01:20:15.629837 3178467 checks.go:130] validating if the "kubelet" service is enabled and active
I0820 01:20:15.656501 3178467 checks.go:203] validating availability of port 10250
I0820 01:20:15.656905 3178467 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0820 01:20:15.656957 3178467 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0820 01:20:15.657011 3178467 checks.go:203] validating availability of port 2379
I0820 01:20:15.657060 3178467 checks.go:203] validating availability of port 2380
I0820 01:20:15.657095 3178467 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
I0820 01:20:15.657378 3178467 checks.go:828] using image pull policy: IfNotPresent
I0820 01:20:15.688177 3178467 checks.go:854] pulling: registry.k8s.io/kube-apiserver:v1.29.7
I0820 01:21:16.321413 3178467 checks.go:854] pulling: registry.k8s.io/kube-controller-manager:v1.29.7
I0820 01:22:12.879993 3178467 checks.go:854] pulling: registry.k8s.io/kube-scheduler:v1.29.7
I0820 01:22:48.931985 3178467 checks.go:854] pulling: registry.k8s.io/kube-proxy:v1.29.7
I0820 01:23:24.002036 3178467 checks.go:854] pulling: registry.k8s.io/coredns/coredns:v1.11.1
I0820 01:24:01.065061 3178467 checks.go:854] pulling: registry.k8s.io/pause:3.9
I0820 01:24:10.196047 3178467 checks.go:854] pulling: registry.k8s.io/etcd:3.5.12-0
I0820 01:25:25.392961 3178467 certs.go:112] creating a new certificate authority for ca
I0820 01:25:25.978215 3178467 certs.go:519] validating certificate period for ca certificate
I0820 01:25:26.745643 3178467 certs.go:112] creating a new certificate authority for front-proxy-ca
I0820 01:25:27.064819 3178467 certs.go:519] validating certificate period for front-proxy-ca certificate
I0820 01:25:27.483012 3178467 certs.go:112] creating a new certificate authority for etcd-ca
I0820 01:25:27.668619 3178467 certs.go:519] validating certificate period for etcd/ca certificate
I0820 01:25:29.552095 3178467 certs.go:78] creating new public/private key files for signing service account users
I0820 01:25:29.687441 3178467 kubeconfig.go:112] creating kubeconfig file for admin.conf
I0820 01:25:29.870492 3178467 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
I0820 01:25:30.068247 3178467 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
I0820 01:25:30.261966 3178467 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
I0820 01:25:30.538109 3178467 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
I0820 01:25:30.725911 3178467 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0820 01:25:30.725940 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.726182 3178467 certs.go:519] validating certificate period for CA certificate
I0820 01:25:30.726240 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0820 01:25:30.726246 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0820 01:25:30.726250 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0820 01:25:30.727049 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0820 01:25:30.727065 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.727239 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0820 01:25:30.727244 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0820 01:25:30.727247 3178467 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0820 01:25:30.727251 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0820 01:25:30.727254 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0820 01:25:30.729532 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0820 01:25:30.729559 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.729791 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0820 01:25:30.730956 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0820 01:25:30.730970 3178467 kubelet.go:68] Stopping the kubelet
I0820 01:25:30.949012 3178467 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695

Can anyone tell me what the fix for this is?

Hi,

Could you please elaborate on where this error is occurring: on a control plane node or a data plane (worker) node?

From the error logs, it looks like a connectivity issue; I would recommend checking that all the servers are reachable.
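Since your kubelet log also shows "container runtime is down", it may be worth checking the container runtime and the API server endpoint directly on the control plane node. A rough checklist, assuming containerd and the socket, IP, and port that appear in your own kubeadm output, might look like:

```shell
# Is containerd itself healthy? (the kubelet reported "container runtime is down")
systemctl status containerd --no-pager

# Can crictl talk to the CRI socket that kubeadm detected?
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info

# Are the static pod containers (etcd, kube-apiserver, ...) being created at all,
# and are any of them restarting?
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a

# Is anything listening on the API server port?
ss -tlnp | grep 6443

# Can the node reach its own API server endpoint?
# (10.179.193.75 is the address kubeadm picked in your logs)
curl -k https://10.179.193.75:6443/healthz

# If the containers are crash-looping, the kubelet journal usually says why
journalctl -u kubelet -e --no-pager | tail -n 50
```

If `crictl ps -a` shows kube-apiserver or etcd restarting, `crictl logs <container-id>` on the failing container usually reveals the root cause. One common culprit with containerd 1.6 is a cgroup driver mismatch: your kubeadm log shows the kubelet defaulting to the "systemd" cgroup driver, so check that `SystemdCgroup = true` is set in the runc options in /etc/containerd/config.toml and restart containerd before retrying `kubeadm init`.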

Hi, did you ever solve the API server problem? I am running into the same issue. If you found a fix, please share it. Thanks.