Kubeadm init failed: API server not healthy

Hello,

I am trying to initialize a Kubernetes cluster with kubeadm and am facing the following issue. The relevant output is attached below.

W0501 18:13:13.516865 1896313 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL “https://dl.k8s.io/release/stable-1.txt”: Get “https://dl.k8s.io/release/stable-1.txt”: dial tcp 34.107.204.206:443: connect: connection refused
W0501 18:13:13.516937 1896313 version.go:105] falling back to the local client version: v1.30.0
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
W0501 18:13:13.654828 1896313 checks.go:844] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “registry.k8s.io/pause:3.9” as the CRI sandbox image.
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 X.X.X.X]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [X.X.X.X 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.159.108.25 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001742882s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.001168712s

Unfortunately, an error has occurred:
context deadline exceeded

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- ‘systemctl status kubelet’
- ‘journalctl -xeu kubelet’

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause’
Once you have found the failing container, you can inspect its logs with:
- ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID’
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


I got the same problem.

If you get any updates regarding this issue, please let me know. Thanks.

Which distribution are you using?
I mean, which OS?

Hello, I have the same problem.
Kubernetes v1.30.0
OS RHEL 8.9
crio version 1.31.0

View the kubelet logs by running this command: journalctl -xeu kubelet

If the following error is reported:

"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.9\": Error response from daemon: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 108.177.125.82:443: i/o timeout"

This is presumably because you are in a region with restricted Internet access or a slow connection. Consider using a registry mirror (accelerator) or hosting the required images in a local registry.

For Docker users, you can configure the Docker daemon to use a registry mirror (image accelerator).
For containerd users, you can modify /etc/containerd/config.toml so the sandbox image is pulled from a reachable mirror.
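A minimal sketch of what that containerd change could look like, assuming containerd 1.6+ with the default CRI plugin section names (the Aliyun image below is only an example of a mirror you can reach):

# excerpt of /etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  # pull the CRI sandbox (pause) image from a reachable mirror instead of registry.k8s.io
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

After editing, restart containerd (systemctl restart containerd) so the kubelet picks up the new sandbox image.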

As a temporary workaround for Docker users: you can manually pull the pause image from an accessible registry, re-tag it, and then either push it to your private repository or use it locally:

docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9

Then run kubeadm reset followed by kubeadm init again to bring up the cluster.
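If you are using containerd directly (no Docker daemon), a rough equivalent, assuming the ctr CLI is available and the images live in containerd's k8s.io namespace, would be:

sudo ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.9
sudo ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9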

Will give it a try, thanks.

Hello,

I got the same issue with ‘kubeadm init --pod-network-cidr=10.244.0.0/16 -v=5’

Kubernetes v1.30.1
AlmaLinux 9.4

Below is the output of 'journalctl -xeu kubelet':

May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.906368 2891 server.go:484] “Kubelet version” kubeletVersion=“v1.30.1”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.906867 2891 server.go:486] “Golang settings” GOGC=“” GOMAXPROCS=“” GOTRACEBACK=“”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.907111 2891 server.go:647] “Standalone mode, no API client”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.913877 2891 server.go:535] “No api server defined - no events will be sent to API server”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.913899 2891 server.go:742] “–cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914180 2891 container_manager_linux.go:265] “Container manager verified user specified cgroup-root exists” cgroupRoot=
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914211 2891 container_manager_linux.go:270] “Creating Container Manager object based on Node Config” nodeConfig={“NodeName”:“localhost.localdomain”,">
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914366 2891 topology_manager.go:138] “Creating topology manager with none policy”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914379 2891 container_manager_linux.go:301] “Creating device plugin manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914452 2891 state_mem.go:36] “Initialized new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.914512 2891 kubelet.go:406] “Kubelet is running in standalone mode, will skip API server sync”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915155 2891 kuberuntime_manager.go:261] “Container runtime initialized” containerRuntime=“containerd” version=“1.6.31” apiVersion=“v1”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915385 2891 kubelet.go:815] “Not starting ClusterTrustBundle informer because we are in static kubelet mode”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915408 2891 volume_host.go:77] “KubeClient is nil. Skip initialization of CSIDriverLister”
May 19 18:19:11 localhost.localdomain kubelet[2891]: W0519 18:19:11.915530 2891 csi_plugin.go:202] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
May 19 18:19:11 localhost.localdomain kubelet[2891]: W0519 18:19:11.915548 2891 csi_plugin.go:279] Skipping CSINode initialization, kubelet running in standalone mode
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915796 2891 server.go:1264] “Started kubelet”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.915953 2891 kubelet.go:1615] “No API server defined - no node status update will be sent”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.916476 2891 server.go:195] “Starting to listen read-only” address=“0.0.0.0” port=10255
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.917319 2891 fs_resource_analyzer.go:67] “Starting FS ResourceAnalyzer”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918216 2891 ratelimit.go:55] “Setting rate limiting for endpoint” service=“podresources” qps=100 burstTokens=10
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918503 2891 server.go:227] “Starting to serve the podresources API” endpoint=“unix:/var/lib/kubelet/pod-resources/kubelet.sock”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.918711 2891 server.go:163] “Starting to listen” address=“0.0.0.0” port=10250
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.919880 2891 server.go:455] “Adding debug handlers to kubelet server”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922347 2891 volume_manager.go:291] “Starting Kubelet Volume Manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922438 2891 desired_state_of_world_populator.go:149] “Desired state populator starts to run”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.922498 2891 reconciler.go:26] “Reconciler: start to sync state”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.924126 2891 factory.go:221] Registration of the systemd container factory successfully
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.924247 2891 factory.go:219] Registration of the crio container factory failed: Get “http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info”: dial unix /var/run/>
May 19 18:19:11 localhost.localdomain kubelet[2891]: E0519 18:19:11.925773 2891 kubelet.go:1467] “Image garbage collection failed once. Stats initialization may not have completed yet” err="invalid capacity 0 on image>
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.929109 2891 factory.go:221] Registration of the containerd container factory successfully
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947416 2891 cpu_manager.go:214] “Starting CPU manager” policy=“none”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947434 2891 cpu_manager.go:215] “Reconciling” reconcilePeriod=“10s”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.947450 2891 state_mem.go:36] “Initialized new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.948394 2891 policy_none.go:49] “None policy: Start”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.949000 2891 memory_manager.go:170] “Starting memorymanager” policy=“None”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.949024 2891 state_mem.go:35] “Initializing new in-memory state store”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950441 2891 manager.go:479] “Failed to read data from checkpoint” checkpoint=“kubelet_internal_checkpoint” err=“checkpoint is not found”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950636 2891 container_log_manager.go:186] “Initializing container log rotate workers” workers=1 monitorPeriod=“10s”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.950735 2891 plugin_manager.go:118] “Starting Kubelet Plugin Manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.952524 2891 kubelet_network_linux.go:50] “Initialized iptables rules.” protocol=“IPv4”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958029 2891 kubelet_network_linux.go:50] “Initialized iptables rules.” protocol=“IPv6”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958053 2891 status_manager.go:213] “Kubernetes client is nil, not starting status manager”
May 19 18:19:11 localhost.localdomain kubelet[2891]: I0519 18:19:11.958063 2891 kubelet.go:2337] “Starting kubelet main sync loop”
May 19 18:19:11 localhost.localdomain kubelet[2891]: E0519 18:19:11.958385 2891 kubelet.go:2361] “Skipping pod synchronization” err=“PLEG is not healthy: pleg has yet to be successful”
May 19 18:19:12 localhost.localdomain kubelet[2891]: I0519 18:19:12.023054 2891 desired_state_of_world_populator.go:157] “Finished populating initial desired state of world”

Thank you for your tips. I had the same issue, and it is now solved.
By the way, is there another way to change the sandbox image to one from a local registry?
I am also wondering why Kubernetes does not use a local image that has already been downloaded.

I am also experiencing the same problem. Could you share how you solved it?

Have you checked the container runtime configuration? The following should help you solve the problem.
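For example, assuming containerd with crictl installed (not part of the original reply), one quick way to inspect the runtime settings is:

grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info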

I got the same problem. I am using Ubuntu 22.04.


Hi all, I got the same problem.

I am using:
OS: RHEL 8
Docker 24.0.0
Containerd
Kubernetes 1.29.7

I0820 01:20:15.455571 3178467 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0820 01:20:15.455766 3178467 interface.go:432] Looking for default routes with IPv4 addresses
I0820 01:20:15.455773 3178467 interface.go:437] Default route transits interface "ens192"
I0820 01:20:15.455893 3178467 interface.go:209] Interface ens192 is up
I0820 01:20:15.455930 3178467 interface.go:257] Interface "ens192" has 1 addresses :[10.179.193.75/24].
I0820 01:20:15.455938 3178467 interface.go:224] Checking addr  10.179.193.75/24.
I0820 01:20:15.455944 3178467 interface.go:231] IP found 10.179.193.75
I0820 01:20:15.455957 3178467 interface.go:263] Found valid IPv4 address 10.179.193.75 for interface "ens192".
I0820 01:20:15.455963 3178467 interface.go:443] Found active IP 10.179.193.75 
I0820 01:20:15.455987 3178467 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0820 01:20:15.466329 3178467 checks.go:563] validating Kubernetes and kubeadm version
I0820 01:20:15.466365 3178467 checks.go:168] validating if the firewall is enabled and active
I0820 01:20:15.489790 3178467 checks.go:203] validating availability of port 6443
I0820 01:20:15.490423 3178467 checks.go:203] validating availability of port 10259
I0820 01:20:15.490522 3178467 checks.go:203] validating availability of port 10257
I0820 01:20:15.490630 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0820 01:20:15.490675 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0820 01:20:15.490698 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0820 01:20:15.490714 3178467 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0820 01:20:15.490733 3178467 checks.go:430] validating if the connectivity type is via proxy or direct
I0820 01:20:15.490838 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490881 3178467 checks.go:469] validating http connectivity to first IP address in the CIDR
I0820 01:20:15.490909 3178467 checks.go:104] validating the container runtime
I0820 01:20:15.553517 3178467 checks.go:639] validating whether swap is enabled or not
I0820 01:20:15.553828 3178467 checks.go:370] validating the presence of executable crictl
I0820 01:20:15.553902 3178467 checks.go:370] validating the presence of executable conntrack
I0820 01:20:15.554071 3178467 checks.go:370] validating the presence of executable ip
I0820 01:20:15.554108 3178467 checks.go:370] validating the presence of executable iptables
I0820 01:20:15.554154 3178467 checks.go:370] validating the presence of executable mount
I0820 01:20:15.554327 3178467 checks.go:370] validating the presence of executable nsenter
I0820 01:20:15.554373 3178467 checks.go:370] validating the presence of executable ebtables
I0820 01:20:15.554405 3178467 checks.go:370] validating the presence of executable ethtool
I0820 01:20:15.554429 3178467 checks.go:370] validating the presence of executable socat
I0820 01:20:15.554461 3178467 checks.go:370] validating the presence of executable tc
I0820 01:20:15.554486 3178467 checks.go:370] validating the presence of executable touch
I0820 01:20:15.554520 3178467 checks.go:516] running all checks
I0820 01:20:15.568320 3178467 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0820 01:20:15.568659 3178467 checks.go:605] validating kubelet version
I0820 01:20:15.629837 3178467 checks.go:130] validating if the "kubelet" service is enabled and active
I0820 01:20:15.656501 3178467 checks.go:203] validating availability of port 10250
I0820 01:20:15.656905 3178467 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0820 01:20:15.656957 3178467 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0820 01:20:15.657011 3178467 checks.go:203] validating availability of port 2379
I0820 01:20:15.657060 3178467 checks.go:203] validating availability of port 2380
I0820 01:20:15.657095 3178467 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
I0820 01:20:15.657378 3178467 checks.go:828] using image pull policy: IfNotPresent
I0820 01:20:15.688177 3178467 checks.go:854] pulling: registry.k8s.io/kube-apiserver:v1.29.7
I0820 01:21:16.321413 3178467 checks.go:854] pulling: registry.k8s.io/kube-controller-manager:v1.29.7
I0820 01:22:12.879993 3178467 checks.go:854] pulling: registry.k8s.io/kube-scheduler:v1.29.7
I0820 01:22:48.931985 3178467 checks.go:854] pulling: registry.k8s.io/kube-proxy:v1.29.7
I0820 01:23:24.002036 3178467 checks.go:854] pulling: registry.k8s.io/coredns/coredns:v1.11.1
I0820 01:24:01.065061 3178467 checks.go:854] pulling: registry.k8s.io/pause:3.9
I0820 01:24:10.196047 3178467 checks.go:854] pulling: registry.k8s.io/etcd:3.5.12-0
I0820 01:25:25.392961 3178467 certs.go:112] creating a new certificate authority for ca
I0820 01:25:25.978215 3178467 certs.go:519] validating certificate period for ca certificate
I0820 01:25:26.745643 3178467 certs.go:112] creating a new certificate authority for front-proxy-ca
I0820 01:25:27.064819 3178467 certs.go:519] validating certificate period for front-proxy-ca certificate
I0820 01:25:27.483012 3178467 certs.go:112] creating a new certificate authority for etcd-ca
I0820 01:25:27.668619 3178467 certs.go:519] validating certificate period for etcd/ca certificate
I0820 01:25:29.552095 3178467 certs.go:78] creating new public/private key files for signing service account users
I0820 01:25:29.687441 3178467 kubeconfig.go:112] creating kubeconfig file for admin.conf
I0820 01:25:29.870492 3178467 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
I0820 01:25:30.068247 3178467 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
I0820 01:25:30.261966 3178467 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
I0820 01:25:30.538109 3178467 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
I0820 01:25:30.725911 3178467 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0820 01:25:30.725940 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.726182 3178467 certs.go:519] validating certificate period for CA certificate
I0820 01:25:30.726240 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0820 01:25:30.726246 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0820 01:25:30.726250 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0820 01:25:30.727049 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0820 01:25:30.727065 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.727239 3178467 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0820 01:25:30.727244 3178467 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0820 01:25:30.727247 3178467 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0820 01:25:30.727251 3178467 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0820 01:25:30.727254 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0820 01:25:30.729532 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0820 01:25:30.729559 3178467 manifests.go:102] [control-plane] getting StaticPodSpecs
I0820 01:25:30.729791 3178467 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0820 01:25:30.730956 3178467 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0820 01:25:30.730970 3178467 kubelet.go:68] Stopping the kubelet
I0820 01:25:30.949012 3178467 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:109
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.7.0/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.7.0/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.7.0/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695

Error logs from the kubelet:

I0820 01:25:31.562757 3179613 kubelet.go:402] “Kubelet is running in standalone mode, will skip”
I0820 01:25:31.564391 3179613 kubelet.go:1618] “No API server defined - no node status update”
I0820 01:25:31.563669 3179613 volume_host.go:77] “KubeClient is nil. Skip initialization of CSID”
E0820 01:25:31.639179 3179613 kubelet.go:2361] “Skipping pod synchronization” err=“[container runtime is down]”
I0820 01:25:31.574185 3179613 factory.go:219] Registration of the crio container factory failed:
E0820 01:25:31.571963 3179613 kubelet.go:1462] “Image garbage collection failed once. Stats init”

In my case, the struggle came from a misconfiguration: the kubelet was configured to use the cgroupfs cgroup driver while containerd was configured to use the systemd cgroup driver. This helped me resolve the issue: Impossible to create or start a container after reboot (OCI runtime create failed: expected cgroupsPath to be of format \"slice:prefix:name\" for systemd cgroups, got \"/kubepods/burstable/...") · Issue #4857 · containerd/containerd · GitHub.

I found this failure lead via: SYSTEMD_LESS=FRXMK journalctl -xeu kubelet
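For anyone hitting the same mismatch, a minimal sketch of aligning both sides on the systemd driver (file paths assume a default kubeadm + containerd install; adjust to your setup):

# /etc/containerd/config.toml — make runc use the systemd cgroup driver
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# /var/lib/kubelet/config.yaml (KubeletConfiguration) — match the kubelet to it
cgroupDriver: systemd

Then restart both: systemctl restart containerd kubelet (or run kubeadm reset and init again).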


Hi! Dear andersonvm, I am a young man from China who has just started learning about Kubernetes. I encountered an issue with kubeadm init and was not sure how to contact you; I hope it is convenient to ask here. Setup: Ubuntu 22.04, Keepalived + VIP active, haproxy active, firewall policy turned off. The error is: "Unable to register node with API server" err="Post \"https://192.168.31.99:16443/api/v1/nodes\": dial tcp 192.168.31.99:16443: connect: connection refused" node="master1"

Did you get a fix for this?

Hi. I am a beginner with Kubernetes. The installation was successful online, but it failed offline. The version is v1.31.0. I would appreciate your help.

I followed the official tutorial for the whole installation and encountered the same error: the API server is not healthy after 4m0.001071618s.

As general advice for anyone experiencing an issue on this thread: I consider CRI-O a better choice than containerd. Switching to it did in fact solve my problem of the API server not starting, though I don't know why. It also seems a better choice in general: more modern, not extracted out of the Docker stack, and with less cruft carried over from that stack.

I can’t say that it will solve the problems of everyone in this thread who has an issue; I see that some are already using it.

My experience was with k8s & CRI-O 1.32.2.
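If you do switch runtimes and kubeadm does not auto-detect the socket, you may need to point it at CRI-O explicitly. For example (the socket path is the CRI-O default; adjust the CIDR to your CNI):

sudo kubeadm init --cri-socket unix:///var/run/crio/crio.sock --pod-network-cidr=10.244.0.0/16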