Kubernetes version: 1.29
Cloud being used: bare-metal
Installation method: gentoo portage + kubeadm
Host OS: gentoo linux current
CNI and version: not sure what this is
CRI and version: containerd v1.7.1
I followed https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/ as it is more detailed than the kubernetes.io guide at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/, but on both Debian and Gentoo I fail to create the control plane; kubeadm output:
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
I can't paste the full logs, as I get an error stating that I can only include up to 5 links.
What am I missing? What am I doing wrong? It looks as if it was ready, but it still times out.
box /home/user1 # kubeadm init --config kubelet.yaml --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --ignore-preflight-errors=Port-10250
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING Hostname]: hostname "box" could not be reached
[WARNING Hostname]: hostname "box": lookup box on 192.168.8.1:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0227 23:16:15.926702 55879 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [box kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.200 192.168.122.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [box localhost] and IPs [192.168.8.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [box localhost] and IPs [192.168.8.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
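One detail worth flagging in the init command above: it skips the FileContent--proc-sys-net-bridge-bridge-nf-call-iptables preflight check instead of satisfying it, which means iptables may never see bridged pod traffic. A short sketch of the standard fix from the kubeadm prerequisites, run as root (the sysctl file name here is arbitrary):

# load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# let iptables see bridged traffic and enable IPv4 forwarding
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# verify before re-running kubeadm init
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward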
@Dawid_Le have you been able to resolve this yet? It appears you might have run the 'kubeadm init' command without sudo permission, and secondly there is a version/compatibility mismatch between the CRI and kubeadm. Your installed kubeadm version 1.29 expects a newer CRI sandbox image (pause:3.9 rather than the runtime's 3.8, per the warning in your log). You could roll kubeadm back to a version that is compatible with the CRI, or do the opposite and update the CRI. The safest bet is an older stable kubeadm version such as v1.26.15. Lastly, make sure the CRI service is running with 'systemctl status containerd.service' (for Ubuntu, when using containerd as the container runtime) and debug or reinstall the CRI if its service is not in a running state. Best!
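On the sandbox image point: containerd pins the pause image in its CRI plugin config, so the warning from the log can be cleared there rather than by downgrading kubeadm. A minimal sketch of the relevant /etc/containerd/config.toml excerpt, assuming containerd 1.7's default config layout; the SystemdCgroup line only applies if the host actually runs systemd (it must agree with the kubelet's cgroup driver), and on an OpenRC Gentoo box the restart command differs:

# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri"]
  # match the pause image kubeadm 1.29 expects
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # only on systemd hosts; must match the kubelet's cgroupDriver setting
  SystemdCgroup = true

# then restart the runtime:
#   systemd: systemctl restart containerd
#   OpenRC:  rc-service containerd restart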
Hello fotettey,
thanks for coming back to me on this.
I managed to resolve it by choosing the CIDR 10.0.0.0 and by joining the nodes to the cluster first, before installing Calico (not sure which one helped).
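For anyone hitting the same wall: the CIDR change most likely helped because Calico's default pod network is 192.168.0.0/16, which overlaps the 192.168.8.x LAN visible in the log above. A rough sketch of that sequence, assuming a /16 prefix (the post only says 10.0.0.0, so the mask is a guess) and Calico v3.27.0:

# wipe the failed attempt, then re-init with a pod CIDR that does not overlap the LAN
kubeadm reset -f
kubeadm init --pod-network-cidr=10.0.0.0/16

# join the workers first, using the join command that kubeadm init prints, e.g.:
# kubeadm join 192.168.8.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# only then install Calico; uncomment/set CALICO_IPV4_POOL_CIDR in the manifest
# to 10.0.0.0/16 so the pool matches the pod CIDR above
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml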
ok super, glad to hear. Cheers!