Hi, please help me out with this. I'm getting the following error while initialising kubeadm:
sudo kubeadm init --control-plane-endpoint=master-node --upload-certs
W0416 18:25:09.593645   30200 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 192.168.1.36]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [192.168.1.36 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [192.168.1.36 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
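Two things in the output above stand out to me: the warning about the pause/sandbox image version, and the hint about a cgroup misconfiguration. My understanding (please correct me if I'm wrong) is that both of these live in containerd's CRI configuration. Assuming containerd is the runtime here and it uses the default config path /etc/containerd/config.toml, I believe the relevant settings would look roughly like this (a sketch, not something I have confirmed yet):

# /etc/containerd/config.toml (relevant parts only)
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # match the sandbox image kubeadm recommends in the warning above
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # kubelet 1.29 defaults to the systemd cgroup driver, so containerd should use it too
  SystemdCgroup = true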
The status of both kubelet and docker is active (running).
kubectl and kubelet version: v1.29
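If those containerd settings are indeed the problem, my plan would be to apply them and retry the init roughly like this (the exact order is an assumption on my part):

sudo systemctl restart containerd
sudo kubeadm reset -f
sudo kubeadm init --control-plane-endpoint=master-node --upload-certs

Does that look like the right direction, or is there something else in the kubelet logs I should be checking first?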