[ERROR] kubeadm init error: connection refused

Cluster information:

Kubernetes version: 1.26.3
Cloud being used: GCP and AWS
Installation method: official Docker and Kubernetes packages, with cri-dockerd
Host OS: Ubuntu 22.04 (Jammy)
CNI and version: latest version
CRI and version: latest version

Hello, I have the same problem.
I'm on Ubuntu with cri-dockerd, also using the latest Kubernetes version.
I've already tried:

  • allowing all ports
  • disabling ufw
  • kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
  • kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock
  • rebuilding the VM
  • trying on GCP as well as AWS
  • checking the forum and trying much of the same advice

but I'm still having the problem. Below is my container log output.

Please help me,
thank you very much!
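(Aside: when kubeadm init fails with "connection refused", the kubelet journal usually contains the real error, often a cgroup-driver mismatch between the kubelet and the container runtime. A quick check, assuming a systemd-based host like Ubuntu 22.04 and the cri-dockerd socket path used above:)

```shell
# Show the most recent kubelet log lines; look for cgroup or CRI errors
journalctl -u kubelet -n 50 --no-pager

# Confirm the CRI socket actually responds (path assumes cri-dockerd)
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
```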

Ubuntu 22.04 defaults to cgroups v2. Did you switch it to v1, or did you configure everything for v2?

When I check, it's not showing v2 anywhere.
Does that mean I'm using cgroups v1 by default?

root@master:~# sudo cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory hugetlb pids rdma misc

Can you run: stat -fc %T /sys/fs/cgroup/

I decided to create everything from scratch again on AWS, but I still get the same error. Here's the output when I run that command:

ubuntu@ip-172-31-25-66:~$  stat -fc %T /sys/fs/cgroup/
cgroup2fs

And here's my kubelet service status:

ubuntu@ip-172-31-25-66:~$ sudo service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Mon 2023-04-03 21:21:27 UTC; 8s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 11309 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 11309 (code=exited, status=1/FAILURE)
        CPU: 79ms

You’ll need to make sure both Kubernetes and your container runtime are set to use systemd as the cgroup driver.

I'm not sure about the exact instructions for Docker, but for containerd you need 1.4+ and have to set the option in its config.
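(For Docker specifically, a common approach is to set the native cgroup driver to systemd in /etc/docker/daemon.json, since kubeadm configures the kubelet for the systemd driver by default on recent versions. A sketch, assuming the standard Docker package paths; verify against the current Docker and kubeadm docs:)

```shell
# Tell dockerd to use the systemd cgroup driver (overwrites daemon.json!)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

# Verify which driver Docker now reports
docker info --format '{{ .CgroupDriver }}'
```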

Let me double-check that cgroup driver.

I have another question about this problem :thinking: What should I do?

root@ip-172-31-25-66:/home/ubuntu# kubectl get nodes
E0404 02:46:09.711083   16632 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 02:46:09.711632   16632 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 02:46:09.713163   16632 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 02:46:09.714814   16632 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 02:46:09.716360   16632 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I tried opening all my ports and disabling the firewall (ufw) entirely, but it's still the same :thinking:
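(This particular error isn't a firewall issue: kubectl falling back to localhost:8080 means it has no kubeconfig to read. After a successful kubeadm init, the standard fix, which kubeadm itself prints at the end of init, is to copy the admin kubeconfig into place:)

```shell
# Make kubectl use the cluster's admin credentials
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Note that if kubeadm init never completed, /etc/kubernetes/admin.conf won't exist yet, so the init failure has to be fixed first.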

Hmm, in the end I didn't configure anything,
but I tried installing minikube instead,
and it solved all my problems.

ubuntu@ip-172-31-25-66:~$ minikube start
* minikube v1.30.0 on Ubuntu 22.04 (xen/amd64)
* Automatically selected the docker driver. Other choices: none, ssh
* Using Docker driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.26.3 preload ...
    > preloaded-images-k8s-v18-v1...:  397.02 MiB / 397.02 MiB  100.00% 22.42 M
    > gcr.io/k8s-minikube/kicbase...:  373.53 MiB / 373.53 MiB  100.00% 7.13 Mi

* Creating docker container (CPUs=2, Memory=2200MB) .../ 

! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.30.0
* Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
  - Generating certificates and keys ...
  - Booting up control plane .../ 

  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

ubuntu@ip-172-31-25-66:~$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-787d4945fb-7h4bs           1/1     Running   0          81s
kube-system   etcd-minikube                      1/1     Running   0          97s
kube-system   kube-apiserver-minikube            1/1     Running   0          111s
kube-system   kube-controller-manager-minikube   1/1     Running   1          2m14s
kube-system   kube-proxy-65vcv                   1/1     Running   0          81s
kube-system   kube-scheduler-minikube            1/1     Running   0          97s
kube-system   storage-provisioner                1/1     Running   0          85s

ubuntu@ip-172-31-25-66:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   4m30s   v1.26.3

Wow, so tricky :thinking:
Finally solved after 24 hours!

New problem: when I check with crictl, the containers show as Exited, but at least I can use kubectl commands now.

ubuntu@ip-172-31-25-66:~$ crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
ab9510ca5b943       fce326961ae2d       33 seconds ago      Exited              etcd                      17                  1653f36fe5117       etcd-ip-172-31-25-66
c24f944845830       ce8c2293ef09c       36 seconds ago      Exited              kube-controller-manager   17                  bea0890e241e0       kube-controller-manager-ip-172-31-25-66
1e7c16658c126       1d9b3cbae03ce       2 minutes ago       Exited              kube-apiserver            14                  eb764804e66e7       kube-apiserver-ip-172-31-25-66
fcac31ae135ac       5a79047369329       4 minutes ago       Exited              kube-scheduler            14                  fb6dd14bf5768       kube-scheduler-ip-172-31-25-66
ubuntu@ip-172-31-25-66:~$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-787d4945fb-7h4bs           1/1     Running   0          56m
kube-system   etcd-minikube                      1/1     Running   0          57m
kube-system   kube-apiserver-minikube            1/1     Running   0          57m
kube-system   kube-controller-manager-minikube   1/1     Running   1          57m
kube-system   kube-proxy-65vcv                   1/1     Running   0          56m
kube-system   kube-scheduler-minikube            1/1     Running   0          57m
kube-system   storage-provisioner                1/1     Running   0          56m
ubuntu@ip-172-31-25-66:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   58m   v1.26.3
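(A likely explanation for the Exited containers: the minikube docker driver runs its cluster inside its own container, so crictl pointed at the host's cri-dockerd socket is listing leftovers from the earlier failed kubeadm attempt, not minikube's components. Note the pod names above end in -ip-172-31-25-66 rather than -minikube. If you're staying on minikube, the stale kubeadm state can be cleaned up with something like:)

```shell
# Tear down the earlier, failed kubeadm control plane on this host
sudo kubeadm reset --cri-socket unix:///var/run/cri-dockerd.sock
```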