root@ubuntu:~# vim /etc/kubernetes/admin.conf
root@ubuntu:~# vim $HOME/.kube/config
root@ubuntu:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@ubuntu:~# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    master   7d21h   v1.16.2
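Note that export KUBECONFIG only lasts for the current shell session. The more persistent setup, which kubeadm itself prints after kubeadm init, is to copy the admin config into your home directory so kubectl finds it without the variable (add sudo and a chown $(id -u):$(id -g) on the copied file if you are a non-root user):
root@ubuntu:~# mkdir -p $HOME/.kube
root@ubuntu:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config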
I inadvertently corrupted my /etc/rancher/k3s/registries.yaml file the last time I edited it; I had an errant space. The next time the host restarted, kubectl get pods alternated between working and failing: sometimes I saw content, other times I saw that “…6443 …did you specify …” error. Nothing flagged that the registries.yaml file was corrupt; it may be in a log somewhere. I think the YAML was simply ignored, hence registry failures for any images the node did not already have.
I repaired the YAML file, restarted the services, and the error went away. So don’t discount Docker or registry errors.
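For reference, a minimal well-formed /etc/rancher/k3s/registries.yaml looks something like this (the mirror endpoint is just a placeholder, not from my setup):
mirrors:
  docker.io:
    endpoint:
      - "https://mirror.example.com:5000"
And a quick way to catch a stray space before restarting k3s, assuming you have PyYAML installed:
$ python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' /etc/rancher/k3s/registries.yaml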
I can confirm that, for me at least, this is a configuration issue. I’m on a Mac managing a remote cluster, and I used export KUBECONFIG=~/my-config-file.yml. When you restart, this variable is cleared and has to be set again with the same command (this happened for me on Linux as well). There are several ways to set it permanently, but I chose to merge the file into my default config (rough sketch below). I also wrote a quick Bash script that does this on startup and can loop through and load multiple config files when it’s easier to keep them separate. In any case, if you’re having the same issue, double-checking that it’s not a config issue (echo $KUBECONFIG) is the easiest first step before diving into the more involved solutions.
It’s also worth keeping an environment variable file that you can source on login. It’s not only $KUBECONFIG that gets unset; other exported variables you may have set, like $API_TOKENS, will be gone too, which can be a real head-scratcher.
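For anyone wanting to do the merge, a rough sketch of what I mean (kubectl merges a colon-separated KUBECONFIG list, and --flatten writes the result to a single file; the file names are placeholders, and the backup step is just a precaution):
$ cp ~/.kube/config ~/.kube/config.bak
$ KUBECONFIG=~/.kube/config:~/my-config-file.yml kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config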
Thank you for this post. I had been knocking my head against this on an Ubuntu 20.04 bare-metal cluster and none of the suggestions were the answer.
“The connection to the server localhost:8080 was refused - did you specify the right host or port?” kept coming back no matter what kubectl command I issued. This cluster worked fine until I shut down and restarted one of the nodes. I now realize that, as you stated, the swap partition comes back after a reboot, maybe because all the guides give the exact same incomplete commands for turning off swap.
However, I had already deleted my cluster and am now recreating another, and I will watch to see whether this fix works if the error occurs again (see the swap-off commands below). All of the posts were helpful in isolating my issue down to swap, so thank you all.
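For anyone else hitting this: the step most guides skip is making swap off persistent across reboots. Something like this should do it (the sed line just comments out the swap entry in /etc/fstab; double-check the file afterward, since fstab layouts vary):
$ sudo swapoff -a
$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab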
Thank you HCR, it worked for me. I get my nodes now after starting containerd:
$ sudo systemctl start containerd
and enabling containerd.service:
$ sudo systemctl enable containerd.service
Then I did a restart:
$ sudo systemctl restart containerd
Update: the same connection refused error is appearing again.
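In case it helps: when the refused error returns after a restart, I’d first check whether the services actually stayed up and what the kubelet is logging, for example:
$ sudo systemctl status containerd kubelet
$ sudo journalctl -u kubelet -n 50 --no-pager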
The same thing happened to me. The cluster had been running for a few weeks, and logs probably filled the entire disk.
eivind@k8s-master:~$ df -h --total
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           796M  8.7M  787M   2% /run
/dev/sda2        20G   20G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           796M  4.0K  796M   1% /run/user/1000
total            25G   20G  5.5G  79% -
My /dev/sda2 has no free space. I don’t know why and I don’t have a solution yet.
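If anyone else is in the same spot, two commands I’d try first to find and reclaim space (the du line lists the biggest directories on the root filesystem; the journalctl line trims journald logs down to 200M, which is usually safe):
$ sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -n 15
$ sudo journalctl --vacuum-size=200M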