Problems creating my first cluster

Kubernetes version: 1.29
Cloud being used: bare-metal
Installation method: gentoo portage + kubeadm
Host OS: gentoo linux current
CNI and version: not sure what this is
CRI and version: containerd v1.7.1

I followed https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/ since it is more detailed than the official guide at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/, but on both Debian and Gentoo I fail to create the control plane. kubeadm output:

[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

I can't paste the full logs because I get an error stating that I can only include up to 5 links.

What am I missing? What am I doing wrong? It looks as if everything was ready, but it still times out.
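In case it helps with diagnosing this, the checks I can run on the node look roughly like this (assuming a systemd host and containerd's default socket path):

```bash
# Is the kubelet actually running, and why did it last fail?
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 50

# Did the control-plane containers ever start under containerd?
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a
```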



box /home/user1 # kubeadm init --config kubelet.yaml --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --ignore-preflight-errors=Port-10250
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING Hostname]: hostname “box” could not be reached
[WARNING Hostname]: hostname “box”: lookup box on 192.168.8.1:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service’
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
W0227 23:16:15.926702 55879 checks.go:835] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm. It is recommended that using “registry.k8s.io/pause:3.9” as the CRI sandbox image.
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [box kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.200 192.168.122.1]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [box localhost] and IPs [192.168.8.200 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [box localhost] and IPs [192.168.8.200 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


@Dawid_Le have you been able to resolve this yet? It looks like you may have run the 'kubeadm init' command without sudo permissions, and there is also a version/compatibility mismatch between the CRI and kubeadm: your installed kubeadm 1.29 expects a newer CRI sandbox image (pause:3.9) than the one your container runtime is configured with (pause:3.8). You could roll kubeadm back to a version that is compatible with your CRI configuration, or do the opposite and update the CRI side. Best is to use an older stable kubeadm version such as v1.26.15. Lastly, make sure the CRI service is running with 'systemctl status containerd.service' (for Ubuntu, when using containerd as the container runtime) and debug or reinstall the CRI if the service is not in a running state. Best!
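If the sandbox-image warning is the culprit, a minimal sketch of the containerd-side fix would look like the commands below (this assumes containerd 1.7's default config location at /etc/containerd/config.toml; adjust the path and the sed pattern if your config differs):

```bash
# Generate a default config first if /etc/containerd/config.toml does not exist yet
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null

# Point the CRI plugin's sandbox image at the pause version kubeadm 1.29 expects
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml

# Restart containerd and confirm it is healthy before retrying kubeadm init
sudo systemctl restart containerd
systemctl status containerd.service
```

After that, a 'kubeadm reset' followed by a fresh 'kubeadm init' should pick up the new sandbox image.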

Hello fotettey,
thanks for coming back to me on this.
I managed to resolve it by choosing the CIDR 10.0.0.0 and by joining the nodes to the cluster before installing Calico (not sure which of the two helped).
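Roughly, the order of operations that worked looks like the sketch below; the /16 mask on the pod CIDR and the Calico version in the manifest URL are just examples rather than exactly what I ran, and I suspect the 10.x range helped simply by not overlapping my 192.168.8.x LAN:

```bash
# Initialise the control plane with an explicit pod network CIDR (mask is an example)
sudo kubeadm init --pod-network-cidr=10.0.0.0/16

# Join the worker nodes first (token and hash come from the kubeadm init output)
sudo kubeadm join 192.168.8.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Only after the nodes have joined, install the Calico CNI (version in the URL is an example)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
```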

ok super, glad to hear. Cheers!