The connection to the server 192.168.1.2:6443 was refused - did you specify the right host or port?

I am trying to install and run Kubernetes on my Ubuntu 22.04 LTS machine. I followed the installation link specified below, but I get the following errors when I try, for example, kubectl get pods or kubectl get nodes:


E1210 10:35:25.649853   16219 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s": dial tcp 192.168.1.2:6443: connect: connection refused
E1210 10:35:25.650138   16219 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s": dial tcp 192.168.1.2:6443: connect: connection refused
E1210 10:35:25.651789   16219 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s": dial tcp 192.168.1.2:6443: connect: connection refused
E1210 10:35:25.653224   16219 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s": dial tcp 192.168.1.2:6443: connect: connection refused
E1210 10:35:25.654932   16219 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s": dial tcp 192.168.1.2:6443: connect: connection refused
The connection to the server 192.168.1.2:6443 was refused - did you specify the right host or port?

Cluster information:

Kubernetes version:
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:57:06Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}

Cloud being used: localhost

Installation method: Installing kubeadm | Kubernetes

Host OS:

No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.1 LTS
Release:	22.04
Codename:	jammy

CNI and version:
CRI and version:

This is the ~/.kube/config file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakN>
    server: https://192.168.1.2:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQW>
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0>

This is /etc/containerd/config.toml file:


[plugins]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.2"

When I try systemctl status kubelet, I get the following result:

● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Sat 2022-12-10 16:46:24 PST; 9s ago
Docs: Kubernetes Documentation | Kubernetes
Process: 6420 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 6420 (code=exited, status=1/FAILURE)
CPU: 103ms

And the content of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:


# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

This is the result of the journalctl -u kubelet command:


Dec 06 22:14:42 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.254559   85576 server.go:413] "Kubelet version" kubeletVersion="v1.25.4"
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.254699   85576 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.255385   85576 server.go:576] "Standalone mode, no API client"
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.295733   85576 server.go:464] "No api server defined - no events will be sent to API server"
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.295813   85576 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 06 22:14:43 a kubelet[85576]: E1206 22:14:43.296299   85576 run.go:74] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --f>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Dec 06 22:14:43 a systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 22:14:43 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 22:14:43 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a kubelet[85623]: E1206 22:14:43.688813   85623 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubele>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 22:14:43 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 22:14:53 a systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 06 22:14:53 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 06 22:14:53 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:53 a kubelet[85707]: E1206 22:14:53.884675   85707 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubele>
Dec 06 22:14:53 a systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

Hello, I have the exact same issue on the same version of Ubuntu Server.
All my kube-system pods are also going down and up randomly.

I did a lot of things and finally found that the problem was the kubelet service, and the kubelet problem in turn came from the CNI or the Docker service. I removed and reinstalled them, and then the kubeadm init command worked; a sketch of that sequence follows below.
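For anyone in the same spot, a minimal reset-and-reinitialize sketch, assuming a kubeadm-based setup (the pod network CIDR is an illustrative value, not something taken from this thread):

sudo kubeadm reset -f                               # tear down the broken control plane
sudo systemctl restart containerd kubelet           # restart the runtime and the kubelet
sudo kubeadm init --pod-network-cidr=10.244.0.0/16  # re-initialize; the CIDR is only an example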

I think the best way to resolve Kubernetes problems is to check the status of the related services, such as kubelet, docker, containerd, and the CNI. If they are all healthy and working correctly, kubectl should work fine as well; if they have problems, it is better to reconfigure them to resolve those problems first.
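A quick check sequence along those lines, assuming systemd units for each service (skip the docker line if you only run containerd, and crictl may not be installed on every setup):

sudo systemctl status kubelet containerd   # both should be active (running)
sudo systemctl status docker               # only if Docker is installed
journalctl -u kubelet --no-pager -n 50     # last 50 kubelet log lines
sudo crictl ps -a                          # runtime-level view of all containers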

I also found the issue; for me it was coming from the kernel version and the containerd config.
This link describes all the steps to install a Kubernetes cluster on Debian 11:
install kube on debian 11
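If the containerd config is the suspect, one commonly suggested fix is to regenerate a clean default config and switch on the systemd cgroup driver (assuming containerd 1.6+, where the stock config contains SystemdCgroup = false):

containerd config default | sudo tee /etc/containerd/config.toml   # write the stock config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd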

You need to disable swap on the system for the kubelet to work. You can disable swap with sudo swapoff -a and then restart the kubelet service with sudo systemctl restart kubelet.
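Put together, with a verification step (after swapoff, free -h should report 0B of swap):

sudo swapoff -a                  # disable swap until the next reboot
free -h                          # the Swap row should now show 0B
sudo systemctl restart kubelet
systemctl status kubelet         # should reach active (running) once swap is off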


I am also facing the same issue, and the steps below helped me overcome it. It is an issue with the kubectl config: even after you install the packages, you need to perform a few more steps to get things working.

Solution:

Step 1: On the worker/compute node, look for the file /etc/kubernetes/admin.conf. If it is not there, copy it (e.g. with scp) from your controller/master node, as in the sketch below.
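For example, assuming the control plane is reachable as <master-ip> (a placeholder, not a value from this thread) and you can log in as root:

scp root@<master-ip>:/etc/kubernetes/admin.conf /tmp/admin.conf   # <master-ip> is a placeholder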

Step 2: On the worker/compute node, create a directory called .kube in the user's home directory; if it already exists, no need to worry, just move into it.

mkdir ~/.kube
or,
mkdir -p $HOME/.kube

cd ~/.kube

Step 3: Create a file called config and paste in the content of /etc/kubernetes/admin.conf:
vi config
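If you copied the file to /tmp in Step 1, a non-interactive equivalent (this mirrors the standard kubeadm post-install step) is:

sudo cp /tmp/admin.conf $HOME/.kube/config        # or /etc/kubernetes/admin.conf on the master itself
sudo chown $(id -u):$(id -g) $HOME/.kube/config   # make the file readable by your user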

Note: while copying the content into the config file, make sure the parameter below has the correct master IP. Sometimes it is set to localhost; update it to the master IP.

E.g:
server: https://192.168.1.247:6443
name: kubernetes

Thank you. I hope this works for you.


In my case, the firewall was the real culprit:

  • If you are trying to connect to a remote server, first use telnet (or nc) to check whether you can reach that port; see the sketch after this list.
  • If the connection is refused, check whether the port you are trying to reach is allowed through the firewall:
sudo ufw status
  • If it's not listed, just allow it:
sudo ufw allow 6443
  • You can deny access on certain ports as well:
sudo ufw deny [port]
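A minimal connectivity check, using the API server address from the original post (swap in your own host and port):

telnet 192.168.1.2 6443    # "Connected to ..." means the port is reachable
# or, with netcat:
nc -vz 192.168.1.2 6443    # -v verbose, -z scan without sending data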

Right! Thank you, I forgot about this. How would we permanently disable swap? It came back after a restart/reboot.

Edit: I searched, and this is how to disable swap and make it persist across reboots:

sudo nano /etc/fstab
and

comment out the swap line by adding "#" in front:

#/swap.img none swap sw 0 0
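A non-interactive alternative, assuming the swap entries in /etc/fstab have "swap" in their own whitespace-separated field, as in the line above (the sed command is idempotent; already-commented lines keep a single "#"):

sudo swapoff -a                                # turn swap off for the running system
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab   # comment out swap entries for future boots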