Hello,
I'm trying to set up Kubernetes with kubeadm, so I ran the following command:
sudo kubeadm init --apiserver-advertise-address=172.30.1.2 --pod-network-cidr 10.244.0.0/16 --node-name controlplane --ignore-preflight-errors Swap
It fails with the following error:
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2025-04-15 21:35:23 CEST; 7min ago
Docs: https://kubernetes.io/docs/
Main PID: 19135 (kubelet)
Tasks: 10 (limit: 9438)
Memory: 30.3M
CPU: 3.446s
CGroup: /system.slice/kubelet.service
└─19135 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --hostname-override=controlplane --pod-infra-container-image=registry.k8s.io/pause:3.9
Apr 15 21:43:11 t-VirtualBox kubelet[19135]: E0415 21:43:11.445637 19135 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"controlplane.183694fddc2f9d3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"controlplane", UID:"controlplane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"controlplane"}, FirstTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 694390588, time.Local), LastTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 694390588, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"controlplane"}': 'Post "https://172.30.1.2:6443/api/v1/namespaces/default/events": dial tcp 172.30.1.2:6443: i/o timeout'(may retry after sleeping)
Apr 15 21:43:11 t-VirtualBox kubelet[19135]: E0415 21:43:11.445751 19135 event.go:228] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"controlplane.183694fddc2f9d3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"controlplane", UID:"controlplane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"controlplane"}, FirstTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 694390588, time.Local), LastTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 694390588, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"controlplane"}' (retry limit exceeded!)
Apr 15 21:43:12 t-VirtualBox kubelet[19135]: I0415 21:43:12.713287 19135 scope.go:117] "RemoveContainer" containerID="c530af2157225f3e805f8c612a9e3d158495e1573b51a3e3d691a8bcff173d85"
Apr 15 21:43:12 t-VirtualBox kubelet[19135]: E0415 21:43:12.713642 19135 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-controlplane_kube-system(529ac807e16d990abdbfb33a0ff5f2a2)\"" pod="kube-system/etcd-controlplane" podUID="529ac807e16d990abdbfb33a0ff5f2a2"
Apr 15 21:43:13 t-VirtualBox kubelet[19135]: E0415 21:43:13.833704 19135 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"controlplane\" not found"
Apr 15 21:43:14 t-VirtualBox kubelet[19135]: E0415 21:43:14.504890 19135 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.30.1.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controlplane?timeout=10s\": dial tcp 172.30.1.2:6443: connect: network is unreachable" interval="7s"
Apr 15 21:43:14 t-VirtualBox kubelet[19135]: E0415 21:43:14.567377 19135 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"controlplane.183694fddce0065e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"controlplane", UID:"controlplane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"controlplane"}, FirstTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 705951838, time.Local), LastTimestamp:time.Date(2025, time.April, 15, 21, 35, 23, 705951838, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"controlplane"}': 'Post "https://172.30.1.2:6443/api/v1/namespaces/default/events": dial tcp 172.30.1.2:6443: connect: network is unreachable'(may retry after sleeping)
Apr 15 21:43:16 t-VirtualBox kubelet[19135]: I0415 21:43:16.740114 19135 scope.go:117] "RemoveContainer" containerID="1313849d5d6d3a237dd4a8fb8eec0b9910ea17b15d47b7db439b8bb39ea1bbf4"
Apr 15 21:43:16 t-VirtualBox kubelet[19135]: I0415 21:43:16.741254 19135 scope.go:117] "RemoveContainer" containerID="958f663795738fbbb913c8d0844c953b55db6cadf86d21895e9040cb0cbee929"
Apr 15 21:43:16 t-VirtualBox kubelet[19135]: E0415 21:43:16.741545 19135 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-controlplane_kube-system(9a88e07f5276c078257773c01e4ca676)\"" pod="kube-system/kube-apiserver-controlplane" podUID="9a88e07f5276c078257773c01e4ca676"
The kubelet journal also has these entries from an earlier attempt (note the different kubelet PID, 17079):
Apr 15 21:28:11 t-VirtualBox kubelet[17079]: E0415 21:28:11.555940 17079 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.30.1.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controlplane?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 15 21:28:13 t-VirtualBox kubelet[17079]: W0415 21:28:13.211658 17079 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.30.1.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.1.2:6443: i/o timeout
Apr 15 21:28:13 t-VirtualBox kubelet[17079]: I0415 21:28:13.211731 17079 trace.go:236] Trace[533055001]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (15-Apr-2025 21:27:43.210) (total time: 30001ms):
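Several of the log lines above end in "dial tcp 172.30.1.2:6443: connect: network is unreachable", so I suspect the advertise address may not actually be bound on this VM. This is the check I plan to run next (just a sketch, assuming the iproute2 `ip` tool is installed; the address is the one from my kubeadm command):

```shell
#!/bin/sh
# Address I passed to kubeadm via --apiserver-advertise-address
ADVERTISE_ADDR="172.30.1.2"

# See whether that address is assigned to any local interface.
# If it is not, the API server is advertised on an IP that nothing
# (including the kubelet on this same node) can reach.
if ip -o addr show 2>/dev/null | grep -qF "$ADVERTISE_ADDR"; then
    echo "bound: $ADVERTISE_ADDR is assigned to a local interface"
else
    echo "not bound: $ADVERTISE_ADDR is missing from local interfaces"
fi
```

Does that sound like the right direction, or is there something else I should look at first?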