Kubeadm init fails

I am trying to initialize a Kubernetes cluster, but kubeadm init fails with the errors below. What is causing this, and how do I solve it?

[root@master-node ~]# kubeadm init
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at Container Runtimes | Kubernetes
[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

Something is already listening on the ports mentioned in the errors. Use netstat to find out what it is.

Linux is a very important competency if you want to do anything with k8s. I recommend reading this book. The author gives away a free PDF copy, and it covers things like netstat and process management, which you’re about to need.

$ netstat -plnt
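
If netstat isn’t available (it is deprecated on many modern distros), ss from iproute2 shows the same listening sockets with the same flags:

$ ss -plnt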

What do you think is blocking the ports?

[root@master-node ~]# netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      2718/kube-scheduler
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      801/sshd
tcp        0      0 127.0.0.1:43743         0.0.0.0:*               LISTEN      1041/kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1041/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      3426/kube-proxy
tcp        0      0 192.168.30.100:2379     0.0.0.0:*               LISTEN      2629/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      2629/etcd
tcp        0      0 192.168.30.100:2380     0.0.0.0:*               LISTEN      2629/etcd
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      2629/etcd
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      2730/kube-controlle
tcp6       0      0 :::22                   :::*                    LISTEN      801/sshd
tcp6       0      0 :::10250                :::*                    LISTEN      1041/kubelet
tcp6       0      0 :::6443                 :::*                    LISTEN      2716/kube-apiserver
tcp6       0      0 :::10256                :::*                    LISTEN      3426/kube-proxy
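
That output answers it: etcd, kube-apiserver, kube-scheduler, and kube-controller-manager are already running, i.e. a control plane is left over from an earlier init attempt. Assuming there is nothing on this node you need to keep, a minimal cleanup sketch before re-running init:

# tear down the leftover control plane (clears /etc/kubernetes/manifests and /var/lib/etcd)
sudo kubeadm reset -f

# then initialize again
sudo kubeadm init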

I have the same issue.
Does anyone know how to solve it?

[ ~]$ sudo kubeadm init
I0921 14:19:30.915810 180547 version.go:255] remote version is much newer: v1.25.1; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.11
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

I also face the same issue when I run kubeadm init:

[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "master-node" could not be reached
[WARNING Hostname]: hostname "master-node": lookup master-node on 1.1.1.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-01-19T10:57:37+05:00" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
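
The "no such file or directory" on /var/run/containerd/containerd.sock means containerd is not installed or not running on that node. A quick check, assuming a systemd-based distro:

# is a containerd service present and running?
systemctl status containerd

# does the socket kubeadm probes actually exist?
ls -l /var/run/containerd/containerd.sock

If the service is not installed at all, install it first (for example, sudo apt-get install containerd on Debian/Ubuntu), then systemctl enable --now containerd.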

When I run the command sudo kubeadm init, I get the error message below. Any idea how I can solve this?

[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-06T08:42:27Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

Judging from this error message, it seems like you can’t connect to your container runtime:

 [ERROR CRI]: container runtime is not running: output: time="2023-02-06T08:42:27Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"

Most systems are on systemd these days, so these commands should be useful for you, @Avaneesh_thakur_Rana:

# get the status of containerd
systemctl status containerd

# enable on boot and start containerd now
systemctl enable --now containerd

# restart the containerd service
systemctl restart containerd 

# stop the containerd service
systemctl stop containerd 

# start the containerd service
systemctl start containerd 

I have run the above commands and kubeadm still does not initialize.

Without sharing your commands and output, there is no way for people to help you figure out what to do. There are many reasons init can fail.

Check your resource status: do you have at least 16 GB of RAM and 8 CPUs?
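
For reference, kubeadm’s own preflight checks set the bar much lower than that: 2 CPUs and roughly 1700 MB of RAM for a control-plane node. You can check what the machine actually has with:

# CPU count and memory
nproc
free -h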

I also got this issue; additionally, I had to manually kill the kubelet, using

$ pkill kubelet

kubeadm init worked without issues after this.
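
One caveat with pkill: in the usual kubeadm setup the kubelet runs as a systemd service, so systemd will simply restart it. Stopping it through systemd is more reliable:

$ sudo systemctl stop kubelet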

Open ports 6443, 10250, 10246, 2379, 2380, and 2381 on the firewall (or stop the firewall), then run kubeadm init --ignore-preflight-errors=all.
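
If firewalld is what’s active (as the [WARNING Firewalld] lines above suggest), one way to open those ports with firewall-cmd, taking the port list straight from the preflight errors:

sudo firewall-cmd --permanent --add-port={6443,10250,10257,10259,2379,2380}/tcp
sudo firewall-cmd --reload

Be careful with --ignore-preflight-errors=all, though: it also skips the "port in use" and "manifest already exists" checks, which usually point at a previous init that should be cleaned up with kubeadm reset instead.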

Install Docker, then run kubeadm init.

Use kubeadm reset.

Got this issue too; I had tried to run init a couple of times with various errors. I eventually did a reset of kubeadm:

kubeadm reset phase cleanup-node