kubeadm init fails on a fresh GCP Debian VM: timeout waiting for the kubelet to boot up

Cluster information:

Kubernetes version: v1.26.3
Cloud being used: GCP Compute Engine
Installation method: kubeadm
Host OS: debian-11-bullseye-v20230306
Machine type: c3-highcpu-4 (x86_64), 4 GB RAM, 10 GB storage
CNI and version: not installed yet; ran sudo swapoff -a and followed "Container Runtimes" from the Kubernetes docs
CRI and version: cri-dockerd, following the Mirantis/cri-dockerd README ("dockerd as a compliant Container Runtime Interface for Kubernetes")

I followed this installation process (a rough sketch of the commands follows the list):

  1. Install Docker Engine
  2. Disable swap
  3. Configure IPv4 forwarding and let iptables see bridged traffic
  4. Install cri-dockerd for Docker
  5. Install kubeadm, kubelet and kubectl
  6. Run sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=
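
For completeness, this is roughly the command sequence behind steps 2-6, per the linked Kubernetes "Container Runtimes" page and the Mirantis/cri-dockerd README. The cri-dockerd release file name (v0.3.1) and the <node-internal-IP> placeholder are assumptions on my part; check the cri-dockerd releases page for the current version.

# Step 2: disable swap (the kubelet refuses to start while swap is enabled)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # keep swap off across reboots

# Step 3: load kernel modules and set sysctls so iptables sees bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Step 4: install cri-dockerd from a release .deb
# (file name/version is an assumption; check the releases page)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd_0.3.1.3-0.debian-bullseye_amd64.deb
sudo dpkg -i cri-dockerd_0.3.1.3-0.debian-bullseye_amd64.deb
sudo systemctl enable --now cri-docker.socket

# Step 5: install kubeadm, kubelet and kubectl from the apt repo
# the v1.26 docs pointed to at the time
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Step 6: initialize the control plane through the cri-dockerd socket
# (<node-internal-IP> is a placeholder for the VM's internal address)
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --apiserver-advertise-address=<node-internal-IP>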

I was able to complete this installation process about 20 days ago, but now it fails every time with the error messages below. Since I am installing on a fresh new VM from the GCP Debian image, I could not find a similar issue or resolution on the internet. I am not sure where I went wrong; could experts please have a look at my issue and advise? Thanks.


[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.128.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.128.0.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.128.0.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
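
Putting the suggested checks together, this is the sequence I am using to investigate. It is only a sketch of the commands the error message itself recommends, plus a plain Docker listing, since with cri-dockerd the containers also run under dockerd (the k8s_ name prefix is what the kubelet gives them).

# Is the kubelet running, and what is it logging?
systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet --no-pager | tail -n 50

# Did any control-plane container start and then crash?
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause

# Inspect a failing container's logs (replace CONTAINERID with an ID from above)
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID

# The same containers are also visible through Docker itself
sudo docker ps -a --filter name=k8s_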

Is this problem already solved?
I got the same problem on my Ubuntu 22.04 instances on AWS and GCP.

Add RHEL 8 with CRI-O, and after that a trial with containerd. Now we have at least two different distributions, rpm-based as well as deb-based, and nearly all currently available CRI implementations showing identical error behavior.

There are only two components involved in all three cases: kubeadm and the kubelet. Could something have gone wrong with the last kubelet refactoring?