Cannot install Kubernetes cluster with kubeadm

I am using containerd as the container engine and I want to install Kubernetes 1.27, but during installation it shows the following error:

W0503 15:33:13.738454   20850 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0503 15:33:13.738543   20850 version.go:105] falling back to the local client version: v1.27.1
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0503 15:33:13.847858   20850 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
W0503 15:33:13.963431   20850 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "172.20.68.202:80/kuber/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ms] and IPs [10.96.0.1 172.20.68.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ms] and IPs [172.20.20.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ms] and IPs [172.20.20.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0503 15:33:18.512597   20850 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


Also, this is my kubelet status:


[root@ms manifests]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2023-05-03 15:33:18 +0330; 5min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 21020 (kubelet)
    Tasks: 15 (limit: 49224)
   Memory: 41.5M
   CGroup: /system.slice/kubelet.service
           └─21020 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/co>

May 03 15:38:25 ms kubelet[21020]: E0503 15:38:25.371260   21020 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.20.20.20:6443/api/v1/nodes\": dial tcp 172.20.20.20:6443: connect: connectio>
May 03 15:38:28 ms kubelet[21020]: E0503 15:38:28.693132   21020 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ms\" not found"
May 03 15:38:30 ms kubelet[21020]: W0503 15:38:30.505663   21020 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.20.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?lim>
May 03 15:38:30 ms kubelet[21020]: E0503 15:38:30.505740   21020 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.20.20.20:6443/apis>
May 03 15:38:32 ms kubelet[21020]: E0503 15:38:32.219053   21020 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.20.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ms?timeo>
May 03 15:38:32 ms kubelet[21020]: I0503 15:38:32.372936   21020 kubelet_node_status.go:70] "Attempting to register node" node="ms"
May 03 15:38:32 ms kubelet[21020]: E0503 15:38:32.373211   21020 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.20.20.20:6443/api/v1/nodes\": dial tcp 172.20.20.20:6443: connect: connectio>
May 03 15:38:32 ms kubelet[21020]: E0503 15:38:32.957694   21020 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ms.175ba0899d276714", GenerateName:"", Namespa>
May 03 15:38:34 ms kubelet[21020]: W0503 15:38:34.260597   21020 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.20.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=5>
May 03 15:38:34 ms kubelet[21020]: E0503 15:38:34.260672   21020 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.20.20.20:6443/apis/stora>


Hmm, from your “pre-flight” logs I’d guess something went wrong with the update of the core components, so (probably) some were updated and some were not. K8s will not work with “mixed” versions; they all need to match.
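
If it helps, a quick way to confirm that — all three commands below are standard flags and should report the same v1.27.x:

# compare the versions of the three components
kubeadm version -o short
kubelet --version
kubectl version --client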

So my suggestion would be to delete the cluster (if possible), upgrade kubelet, kubeadm and kubectl, and re-initialize the cluster (see the sketch below).
Also, I had some trouble with containerd, so I switched to CRI-O instead. That may also have an impact.
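
In case it's useful, a rough sketch of that reset-and-reinit cycle, assuming a yum/dnf-based system (adjust the install line for apt) and that the current cluster state can be thrown away:

# WARNING: this wipes the control-plane state on this node
kubeadm reset -f
rm -rf /etc/cni/net.d /root/.kube

# bring all three components to the same version (1.27.1 as an example)
yum install -y kubelet-1.27.1 kubeadm-1.27.1 kubectl-1.27.1
systemctl enable --now kubelet

# pin the version so kubeadm doesn't need to reach dl.k8s.io
# (your log shows that lookup timing out and falling back to v1.27.1)
kubeadm init --kubernetes-version v1.27.1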

Thanks. But all of them are on the same version:

kubelet-1.27.1-0.x86_64
kubeadm-1.27.1-0.x86_64
kubectl-1.27.1-0.x86_64

Well, I had issues with containerd as well … until I switched to CRI-O.
I don't know if that's an option for you.
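
Before switching runtimes, though, it may be worth checking the two containerd settings that most often cause exactly these symptoms on 1.27 — the pause-image warning in the log above, and the kubelet's connection-refused errors (a cgroup-driver mismatch can make the API server container crash-loop). A sketch of the relevant parts of /etc/containerd/config.toml, assuming containerd 1.6+ with the default CRI plugin layout:

[plugins."io.containerd.grpc.v1.cri"]
  # match the pause image kubeadm expects; the warning in your log
  # suggests the mirror image 172.20.68.202:80/kuber/pause:3.9
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # kubeadm configures the kubelet for the systemd cgroup driver by default,
  # so containerd must use it too
  SystemdCgroup = true

Then systemctl restart containerd, kubeadm reset -f, and re-run kubeadm init. If you do go with CRI-O instead, point kubeadm at its socket: kubeadm init --cri-socket unix:///var/run/crio/crio.sock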

Is anyone else facing this issue? Did anyone resolve it?
I have been struggling to install K8s 1.27.3 for the last 3 days.

I am facing the same issue. Everything works, but CoreDNS and Calico still have issues and the cluster goes down frequently. I believe it's a RAM issue; I am using a laptop with 8 GB of RAM.
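
If you want to confirm the RAM theory, a couple of quick checks (standard commands, nothing specific to your setup):

# free memory on the node
free -h

# MemoryPressure condition on the node, and restart counts of the system pods
kubectl describe node | grep -i -A1 memorypressure
kubectl -n kube-system get pods

OOMKilled restarts on the CoreDNS/Calico pods, or a node reporting MemoryPressure, would point at the 8 GB limit.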