The connection to the server <host>:6443 was refused - did you specify the right host or port?

Try the steps below; the kubeconfig file needs to be exported:

root@ubuntu:~# vim /etc/kubernetes/admin.conf
root@ubuntu:~# vim $HOME/.kube/config
root@ubuntu:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@ubuntu:~# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    master   7d21h   v1.16.2
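
If that works, you can persist the variable so it survives new shell sessions; a minimal sketch, assuming a bash shell and that you administer the cluster as root:

    # append the export to root's .bashrc so every new session picks it up
    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
    source $HOME/.bashrc
    kubectl get nodes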

Thank you for the reply, but I still have the same issue.

Make a script so the master node starts automatically after a reboot:
vi /etc/rc.local

Copy and paste:
#!/bin/bash
swapoff -a
systemctl start kubelet
# start all existing containers; running it twice picks up any that did not come up on the first pass
docker start $(docker ps -a -q)
docker start $(docker ps -a -q)

Make it executable:
chmod +x /etc/rc.local
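
An alternative to running swapoff -a on every boot is to disable swap permanently so kubelet never sees it again. A sketch, assuming the swap entry lives in /etc/fstab:

    swapoff -a
    # comment out any swap lines so they are not re-activated on boot
    sed -i '/\sswap\s/ s/^/#/' /etc/fstab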


Hope it can help you!

export KUBECONFIG=/etc/kubernetes/kubelet.conf

I had the same issue. I resolved it by enabling the Kubernetes ports on the firewall.

kubeadm reset
systemctl enable firewalld
systemctl start firewalld
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10255/tcp
firewall-cmd --reload
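
After the reload you can quickly confirm the API server is actually listening and reachable on 6443; even an authorization error from curl proves the port is open (replace <host> with your master's address):

    ss -tlnp | grep 6443
    curl -k https://<host>:6443/healthz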

Hopefully it can help you.

master

$ kubeadm reset
$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40 --kubernetes-version=v1.18.0

kubeadm init prints the join command for the worker nodes; save it:

    kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
        --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

node1

$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8 

node2

$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8 
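
If a join fails because the token from kubeadm init has expired (bootstrap tokens last 24 hours by default), you can print a fresh join command on the master:

    $ kubeadm token create --print-join-command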

master

$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   5m18s   v1.18.6
node1    Ready    <none>   81s     v1.18.6
node2    Ready    <none>   43s     v1.18.6
$ scp /root/.kube/config root@192.168.211.41:/root/.kube/config
$ scp /root/.kube/config root@192.168.211.42:/root/.kube/config
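
With the kubeconfig copied over, kubectl should answer from the worker nodes as well; a quick check, assuming kubectl is installed there and root SSH access as used by the scp commands above:

    $ ssh root@192.168.211.41 kubectl get nodes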

More detail: https://ghostwritten.blog.csdn.net/article/details/111186072

Thanks Sage. This is wonderful. Restarting kubelet.service fixed my issue.

You just need to kill the kubelet service and restart it again. The pods and containers will come back up as they were before the reboot.

pkill kubelet

and

systemctl restart kubelet
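
After the restart you can confirm kubelet is healthy and the containers came back; a sketch (use crictl instead of docker on a containerd-only setup):

    systemctl status kubelet --no-pager
    docker ps
    kubectl get pods -A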

good luck

I inadvertently corrupted my /etc/rancher/k3s/registries.yaml file the last time I was in it; I had an errant space. The next time the host restarted, kubectl get pods succeeded only intermittently: sometimes I saw content, other times I saw that “…6443 …did you specify …” error. Nothing barked that the registries.yaml file was corrupt (it may be noted in a log somewhere); I think the YAML was just ignored, hence registry failures for any images the node did not already have.

I repaired the YAML file, restarted services, and that error went away. So don’t discount docker nor registry errors.
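
One way to catch this earlier is to validate the file before restarting; a minimal sketch, assuming python3 with PyYAML is available on the node (yamllint works just as well):

    python3 -c 'import yaml; yaml.safe_load(open("/etc/rancher/k3s/registries.yaml"))' \
        && echo "registries.yaml parses OK"
    # on agent nodes the service is k3s-agent
    systemctl restart k3s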

Thanks for the solution!! Just swapoff -a worked, and it took a few seconds to get the pods listed.

This solution worked for me! Thanks a lot!