The connection to the server <host>:6443 was refused - did you specify the right host or port?

Use the commands below; the kubeconfig file needs to be exported:

root@ubuntu:~# vim /etc/kubernetes/admin.conf
root@ubuntu:~# vim $HOME/.kube/config
root@ubuntu:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@ubuntu:~# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    master   7d21h   v1.16.2
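
Note that the export only lasts for the current shell session. To make it persist across logins, a minimal sketch (assuming bash):

# append the export to the shell profile so new shells pick it up
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
source $HOME/.bashrc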

Thank you for the reply, but it is still the same issue.

Make a script to auto-start the master node after reboot:
vi /etc/rc.local

Copy and paste:
#!/bin/bash
swapoff -a
systemctl start kubelet
docker start $(docker ps -a -q)

Change mode:
chmod +x /etc/rc.local


Hope it can help you!
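
On newer systemd-based distros /etc/rc.local is not always executed, so as an alternative the same steps can run from a oneshot systemd unit. A sketch, with k8s-prestart.service as a made-up unit name:

sudo tee /etc/systemd/system/k8s-prestart.service >/dev/null <<'EOF'
[Unit]
Description=Disable swap before kubelet starts
Before=kubelet.service

[Service]
Type=oneshot
ExecStart=/sbin/swapoff -a

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable k8s-prestart.service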


export KUBECONFIG=/etc/kubernetes/kubelet.conf

I had the same issue. I resolved it by enabling the Kubernetes ports on the firewall.

kubeadm reset
systemctl enable firewalld
systemctl start firewalld
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10255/tcp
firewall-cmd --reload

Hopefully it can help you.
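
To verify the ports were actually opened after the reload:

firewall-cmd --list-ports
# should print: 6443/tcp 2379-2380/tcp 10250-10255/tcp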

master

$ kubeadm reset
$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40 --kubernetes-version=v1.18.0

The init output prints the join command for the worker nodes:

kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

node1

$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8 

node2

$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8 

master

$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   5m18s   v1.18.6
node1    Ready    <none>   81s     v1.18.6
node2    Ready    <none>   43s     v1.18.6
$ scp /root/.kube/config root@192.168.211.41:/root/.kube/config
$ scp /root/.kube/config root@192.168.211.42:/root/.kube/config
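
Note that bootstrap tokens expire after 24 hours by default, so if a node joins later and hits this error, a fresh join command can be printed on the master:

$ kubeadm token create --print-join-command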

More detail: https://ghostwritten.blog.csdn.net/article/details/111186072

Thanks Sage. This is wonderful. Restarting kubelet.service fixed my issue.

You just need to kill the kubelet service and restart it again; the pods and containers will be running just as they were before the reboot.

pkill kubelet

and

systemctl restart kubelet

good luck
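
If kubelet will not stay up after the restart, its logs usually show the root cause (swap re-enabled, expired certs, a missing CNI); the standard systemd checks:

systemctl status kubelet
journalctl -u kubelet -f   # follow the kubelet log live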

I inadvertently corrupted my /etc/rancher/k3s/registries.yaml file the last time I was in it: I had an errant space. The next time the host restarted, I got strangely alternating results from kubectl get pods. Sometimes I saw content, other times I saw that “…6443 …did you specify …” error. Nothing barked that the registries.yaml file was corrupt. It may be in a log somewhere. I think the YAML was just ignored, hence Docker registry failures for any images it did not already have.

I repaired the YAML file, restarted services, and that error went away. So don’t discount Docker or registry errors.
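
One way to catch this class of mistake before restarting services is to validate the YAML first. A minimal sketch, assuming Python 3 with PyYAML is available on the host:

python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' /etc/rancher/k3s/registries.yaml && echo OK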

Thanks for the solution!! Just swapoff -a worked, and it took a few seconds to get the pods listed.

This solution worked for me! Thanks a lot!

I can confirm that, for me at least, this is a configuration issue. I’m on a mac managing a remote cluster, and I used export KUBECONFIG=~/my-config-file.yml. When you restart, this variable is cleared and has to be added again through the above command (happened as well for me on Linux). There are several ways to add it permanently, but I chose to merge it with my default config file. I also wrote a quick Bash script that can do this on startup as well as loop through and load multiple config files in the case where it’s easier to keep them as separate files. In any case, if you’re having the same issue, double checking that it’s not a config issue (echo $KUBECONFIG) is the easiest first step before diving into the more involved solutions.
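
For reference, a minimal sketch of such a startup snippet; the ~/.kube/configs directory is an assumption, and it relies on kubectl treating $KUBECONFIG as a colon-separated list:

# load every kubeconfig in ~/.kube/configs in addition to the default
KUBECONFIG="$HOME/.kube/config"
for f in "$HOME"/.kube/configs/*.y*ml; do
    [ -e "$f" ] || continue        # skip when the glob matches nothing
    KUBECONFIG="$KUBECONFIG:$f"
done
export KUBECONFIG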

It’s also worth mentioning that it’s convenient to keep an environment variable file that you can load. Not only does $KUBECONFIG get unset, other export variables you may have set like $API_TOKENS will also be unset and be a bit of a head scratcher.

I was getting this error:
The connection to the server 10.86.173.144:6443 was refused - did you specify the right host or port?

Starting containerd did the magic for me:
$ sudo systemctl start containerd

To make it permanent across server restarts, I enabled it:
$ sudo systemctl enable containerd.service

After the reboot it is working as well. I think the issue was related to containerd being inactive in my case.
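
For what it’s worth, systemd can start and enable the service in a single step:

$ sudo systemctl enable --now containerd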

Thank you for this post. I have been knocking my head against this on an Ubuntu 20.04 bare-metal cluster and none of the suggestions were the answer.

“The connection to the server localhost:8080 was refused - did you specify the right host or port?” kept coming back no matter what kubectl command I issued. This cluster worked fine until I decided to shut down and restart one of the nodes. I now realize that, as you stated, the swap partition comes back after a reboot for some reason, maybe because all the guides give the exact same mistaken commands to turn off swap.

However, I had already deleted my cluster and am now recreating another, and I will watch to see whether this fix works if the issue occurs again. All of the posts were helpful in isolating my issue down to swap, so thank you all.
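
For anyone hitting the same thing: swapoff -a only disables swap until the next boot; the entry in /etc/fstab has to be removed or commented out for it to stay off. A common sketch:

sudo swapoff -a
# comment out any swap line in /etc/fstab so it is not re-enabled at boot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab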

This one worked for me. Thanks a lot

I restarted kubelet and it worked

Thank you HCR, it worked for me. I now get my nodes after starting containerd:
$ sudo systemctl start containerd
enabling containerd.service:
$ sudo systemctl enable containerd.service
and then restarting it:
$ sudo systemctl restart containerd

Update: the same connection refused error is appearing again.

This solution worked for me in 2022. Thanks!

The same thing happened to me. The cluster had been running for a few weeks, and probably some logs filled the entire disk.
eivind@k8s-master:~$ df -h --total
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           796M  8.7M  787M   2% /run
/dev/sda2        20G   20G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           796M  4.0K  796M   1% /run/user/1000
total            25G   20G  5.5G  79% -

My /dev/sda2 has no free space. I don’t know why and I don’t have a solution yet.
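
A few commands that often recover space in this situation, assuming journald logs or unused container images are the culprit:

sudo du -xh --max-depth=1 / | sort -h   # find the biggest directories
sudo journalctl --vacuum-size=200M      # cap the systemd journal size
sudo docker system prune                # remove stopped containers and unused images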

Hi all, I am also facing the error below when running this command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server 172.31.25.205:6443 was refused - did you specify the right host or port?

Solution: change the security group to open port 6443 first, or allow all traffic.
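
If the cluster runs on EC2, the same change can be made from the AWS CLI; the security group ID and CIDR below are placeholders:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 6443 \
    --cidr 172.31.0.0/16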