Kubeadm init failed: API server not healthy

Hello,

I'm trying to initialize a Kubernetes cluster through kubeadm and am running into the following issue. The relevant output is attached below.

W0501 18:13:13.516865 1896313 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL “https://dl.k8s.io/release/stable-1.txt”: Get “https://dl.k8s.io/release/stable-1.txt”: dial tcp 34.107.204.206:443: connect: connection refused
W0501 18:13:13.516937 1896313 version.go:105] falling back to the local client version: v1.30.0
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
W0501 18:13:13.654828 1896313 checks.go:844] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “registry.k8s.io/pause:3.9” as the CRI sandbox image.
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 X.X.X.X]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [X.X.X.X 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.159.108.25 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001742882s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.001168712s

Unfortunately, an error has occurred:
context deadline exceeded

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- ‘systemctl status kubelet’
- ‘journalctl -xeu kubelet’

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause’
Once you have found the failing container, you can inspect its logs with:
- ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID’
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I got the same problem.

If you get any update regarding this issue, please let me know. Thanks!

Which distribution are you using?
I mean, which OS?

Hello, I have the same problem.
Kubernetes v1.30.0
OS RHEL 8.9
crio version 1.31.0

View the kubelet logs by running this command: journalctl -xeu kubelet

If the following error is reported:

"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.9\": Error response from daemon: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 108.177.125.82:443: i/o timeout"

This is presumably because you are in a region with limited or slow internet access to registry.k8s.io. Consider using a registry mirror (image accelerator) or hosting the required images in a local mirror repository.

For Docker users, you can configure the Docker daemon to use a registry mirror (accelerator).
For containerd users, you can modify /etc/containerd/config.toml to point the sandbox image and registry mirrors at an accessible repository; see the sketch below.
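Here is a minimal sketch of the containerd side, assuming containerd 1.6+ with its default config layout. The Aliyun endpoint is just the mirror already mentioned in this thread, so substitute whatever registry is reachable from your network; also note that newer containerd releases prefer the registry config_path / hosts.toml mechanism over inline mirrors.

# /etc/containerd/config.toml (relevant excerpt only; mirror endpoint is an assumption)
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # use a reachable pause image instead of registry.k8s.io/pause:3.9
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]

Then restart the runtime with ‘systemctl restart containerd’ before retrying kubeadm init.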

As a temporary workaround for Docker users, you can manually pull the required pause image from an accessible repository, re-tag it, and then either push it to your private repository or use it locally:

docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
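Note that docker pull/tag only helps if the kubelet actually uses Docker (via cri-dockerd). If the node runs on containerd, a rough equivalent (a sketch, assuming the same Aliyun source image) is to pull and re-tag the image in containerd's k8s.io namespace, which is where the CRI looks for images:

sudo ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.9
sudo ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9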

Then re-run kubeadm reset and kubeadm init, and the cluster should build successfully.
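For reference, the re-run looks roughly like this (the pod CIDR is just the value used elsewhere in this thread; pick whatever matches your CNI plugin):

sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 -v=5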

Will give it a try, thanks.

Hello,

I got the same issue with ‘kubeadm init --pod-network-cidr=10.244.0.0/16 -v=5’

Kubernetes v1.30.1
AlmaLinux 9.4

Below is the output of ‘journalctl -xeu kubelet’:

May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.906368 2891 server.go:484] “Kubelet version” kubeletVersion=“v1.30.1”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.906867 2891 server.go:486] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.907111 2891 server.go:647] “Standalone mode, no API client”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.913877 2891 server.go:535] “No api server defined - no events will be sent to API server”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.913899 2891 server.go:742] “–cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914180 2891 container_manager_linux.go:265] “Container manager verified user specified cgroup-root exists” cgroupRoot=
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914211 2891 container_manager_linux.go:270] “Creating Container Manager object based on Node Config” nodeConfig={“NodeName”:“localhost.localdomain”,">
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914366 2891 topology_manager.go:138] “Creating topology manager with none policy”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914379 2891 container_manager_linux.go:301] “Creating device plugin manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914452 2891 state_mem.go:36] “Initialized new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914512 2891 kubelet.go:406] “Kubelet is running in standalone mode, will skip API server sync”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915155 2891 kuberuntime_manager.go:261] “Container runtime initialized” containerRuntime=“containerd” version=“1.6.31” apiVersion=“v1”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915385 2891 kubelet.go:815] “Not starting ClusterTrustBundle informer because we are in static kubelet mode”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915408 2891 volume_host.go:77] “KubeClient is nil. Skip initialization of CSIDriverLister”
May 19 18:19:11 localhost.localdomain kubelet[2891]: W0519 18:19:11.915530 2891 csi_plugin.go:202] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
May 19 18:19:11 localhost.localdomain kubelet[2891]: W0519 18:19:11.915548 2891 csi_plugin.go:279] Skipping CSINode initialization, kubelet running in standalone mode
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915796 2891 server.go:1264] “Started kubelet”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915953 2891 kubelet.go:1615] “No API server defined - no node status update will be sent”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.916476 2891 server.go:195] “Starting to listen read-only” address=“0.0.0.0” port=10255
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.917319 2891 fs_resource_analyzer.go:67] “Starting FS ResourceAnalyzer”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918216 2891 ratelimit.go:55] “Setting rate limiting for endpoint” service=“podresources” qps=100 burstTokens=10
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918503 2891 server.go:227] “Starting to serve the podresources API” endpoint=“unix:/var/lib/kubelet/pod-resources/kubelet.sock”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918711 2891 server.go:163] “Starting to listen” address=“0.0.0.0” port=10250
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.919880 2891 server.go:455] “Adding debug handlers to kubelet server”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922347 2891 volume_manager.go:291] “Starting Kubelet Volume Manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922438 2891 desired_state_of_world_populator.go:149] “Desired state populator starts to run”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922498 2891 reconciler.go:26] “Reconciler: start to sync state”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.924126 2891 factory.go:221] Registration of the systemd container factory successfully
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.924247 2891 factory.go:219] Registration of the crio container factory failed: Get “http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info”: dial unix /var/run/>
May 19 18:19:11 localhost.localdomain kubelet[2891]: E0519 18:19:11.925773 2891 kubelet.go:1467] “Image garbage collection failed once. Stats initialization may not have completed yet” err="invalid capacity 0 on image>
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.929109 2891 factory.go:221] Registration of the containerd container factory successfully
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947416 2891 cpu_manager.go:214] “Starting CPU manager” policy=“none”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947434 2891 cpu_manager.go:215] “Reconciling” reconcilePeriod=“10s”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947450 2891 state_mem.go:36] “Initialized new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.948394 2891 policy_none.go:49] “None policy: Start”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.949000 2891 memory_manager.go:170] “Starting memorymanager” policy=“None”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.949024 2891 state_mem.go:35] “Initializing new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950441 2891 manager.go:479] “Failed to read data from checkpoint” checkpoint=“kubelet_internal_checkpoint” err=“checkpoint is not found”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950636 2891 container_log_manager.go:186] “Initializing container log rotate workers” workers=1 monitorPeriod=“10s”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950735 2891 plugin_manager.go:118] “Starting Kubelet Plugin Manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.952524 2891 kubelet_network_linux.go:50] “Initialized iptables rules.” protocol=“IPv4”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958029 2891 kubelet_network_linux.go:50] “Initialized iptables rules.” protocol=“IPv6”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958053 2891 status_manager.go:213] “Kubernetes client is nil, not starting status manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958063 2891 kubelet.go:2337] “Starting kubelet main sync loop”
May 19 18:19:11 localhost.localdomain kubelet[2891]: E0519 18:19:11.958385 2891 kubelet.go:2361] “Skipping pod synchronization” err=“PLEG is not healthy: pleg has yet to be successful”
May 19 18:19:12 localhost.localdomain kubelet[2891]: I0519 18:19:12.023054 2891 desired_state_of_world_populator.go:157] “Finished populating initial desired state of world”