Connection to the server localhost:8080 was refused

Hello,

Cluster information:

Kubernetes version:

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# kubeadm version --output json
{
  "clientVersion": {
    "major": "1",
    "minor": "19",
    "gitVersion": "v1.19.2",
    "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
    "gitTreeState": "clean",
    "buildDate": "2020-09-16T13:38:53Z",
    "goVersion": "go1.15",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

Cloud being used: bare-metal

Installation method:
apt-get install -y kubelet kubeadm kubectl

Host OS: Ubuntu 20.04.1 LTS
CNI and version: Flannel (the guide's kube-flannel.yml; version not noted)
CRI and version: Docker (version not noted)

As part of the install process (I followed this guide: https://www.nakivo.com/blog/install-kubernetes-ubuntu/), I have to run the following command, but I'm getting this error:

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# kubectl apply -f ./kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I’ve seen this error all over the Internet on different forums, including this one, and the recommendation is to run a few commands, but I’ve done that already (as a regular user, not as root):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Here's confirmation that I've done it:

dadiaspe@srv-lx-k8s-master-01:~$ ls -ltr $HOME/.kube
total 8
-rw------- 1 dadiaspe dadiaspe 5569 Oct 11 08:52 config
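One way I know to sanity-check the setup (a sketch; it assumes KUBECONFIG is unset, in which case kubectl defaults to $HOME/.kube/config):

ls -l /etc/kubernetes/admin.conf          # the file the guide says to copy from
echo "KUBECONFIG=${KUBECONFIG:-unset}"    # unset means kubectl looks in $HOME/.kube/config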

What am I missing?

Thanks so much

Did kubeadm complete the install?
Can you see the pods running if you do docker ps -a?

Hi @mrbobbytables,

Here’s the output:

root@srv-lx-k8s-master-01:/home/dadiaspe# docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
e1048b6e5915        d373dd5a8593           "/usr/local/bin/kube…"   4 hours ago         Up 4 hours                              k8s_kube-proxy_kube-proxy-6mkdq_kube-system_e4ebd7f3-cc3e-49e9-88ae-df642afe4a0c_0
488f7364ea96        k8s.gcr.io/pause:3.2   "/pause"                 4 hours ago         Up 4 hours                              k8s_POD_kube-proxy-6mkdq_kube-system_e4ebd7f3-cc3e-49e9-88ae-df642afe4a0c_0
b8f786757a21        8603821e1a7a           "kube-controller-man…"   4 hours ago         Up 4 hours                              k8s_kube-controller-manager_kube-controller-manager-srv-lx-k8s-master-01_kube-system_aa9d084ad37e56894bc5ec833cf20941_0
606523958e3f        2f32d66b884f           "kube-scheduler --au…"   4 hours ago         Up 4 hours                              k8s_kube-scheduler_kube-scheduler-srv-lx-k8s-master-01_kube-system_f543c94683059cb32a4441e29fbdb238_0
dda37a98d099        607331163122           "kube-apiserver --ad…"   4 hours ago         Up 4 hours                              k8s_kube-apiserver_kube-apiserver-srv-lx-k8s-master-01_kube-system_f7baba4d31ed71d11e3878067557ca0f_0
0df28efd1010        0369cf4303ff           "etcd --advertise-cl…"   4 hours ago         Up 4 hours                              k8s_etcd_etcd-srv-lx-k8s-master-01_kube-system_f4445f33bd5a7da47b2e9cc92268c2c5_0
39e68b12fbfc        k8s.gcr.io/pause:3.2   "/pause"                 4 hours ago         Up 4 hours                              k8s_POD_kube-scheduler-srv-lx-k8s-master-01_kube-system_f543c94683059cb32a4441e29fbdb238_0
5cb554af075e        k8s.gcr.io/pause:3.2   "/pause"                 4 hours ago         Up 4 hours                              k8s_POD_kube-controller-manager-srv-lx-k8s-master-01_kube-system_aa9d084ad37e56894bc5ec833cf20941_0
43675b4510d2        k8s.gcr.io/pause:3.2   "/pause"                 4 hours ago         Up 4 hours                              k8s_POD_kube-apiserver-srv-lx-k8s-master-01_kube-system_f7baba4d31ed71d11e3878067557ca0f_0
33a79a49c74e        k8s.gcr.io/pause:3.2   "/pause"                 4 hours ago         Up 4 hours                              k8s_POD_etcd-srv-lx-k8s-master-01_kube-system_f4445f33bd5a7da47b2e9cc92268c2c5_0
root@srv-lx-k8s-master-01:/home/dadiaspe#  

I also ran kubeadm init without any issues (kubeadm init --pod-network-cidr=10.244.0.0/16), and I got the kubeadm join... command.
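Side note: if that join command ever gets lost, it can be regenerated on the control-plane node with a standard kubeadm subcommand:

kubeadm token create --print-join-command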

Not sure what I'm missing.

Well, an update here: I finally figured it out.

From the error message, I noticed kubectl was hitting port 8080 (its fallback when it can't find a kubeconfig at all), when in fact it should be 6443, as shown in the config file:

Error Message:

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# kubectl apply -f ./kube-flannel.yml
The connection to the server **localhost:8080** was refused - did you specify the right host or port?

Config File:

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# cat $HOME/.kube/config 
.
.
    server: https://192.168.1.119:6443
  name: kubernetes
.
.
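A quick way to confirm which server the active kubeconfig points at (a sketch; it assumes kubectl is reading $HOME/.kube/config, its default when KUBECONFIG is unset):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'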

The problem was that I had run the following commands with my own username (dadiaspe) on my Linux box, not as root (as shown in the guide):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


dadiaspe@srv-lx-k8s-master-01:~$ ls -ltr $HOME/.kube
total 8
-rw------- 1 **dadiaspe dadiaspe** 5569 Oct 11 08:52 config

So when I ran kubectl as root, $HOME pointed to root's home folder rather than my user's, and I got this error because the config file didn't exist there.
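A quick illustration of the mismatch (hypothetical shell session, just to show why the paths differ):

echo $HOME                  # prints /home/dadiaspe when run as my user
sudo -H sh -c 'echo $HOME'  # prints /root, so kubectl as root looks in /root/.kube/config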

So I ended up doing this, as root:

cp /home/dadiaspe/.kube/config $HOME/.kube/config

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# ls -ltr $HOME/.kube/
total 12
-rw------- 1 root root 5569 Oct 12 15:48 config
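For what it's worth, the kubeadm init output also suggests an alternative for root that skips the copy entirely:

export KUBECONFIG=/etc/kubernetes/admin.conf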

And then it worked properly:

root@srv-lx-k8s-master-01:/home/dadiaspe/.kube# kubectl apply -f ./kube-flannel.yml
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
unable to recognize "./kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "./kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "./kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "./kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "./kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
root@srv-lx-k8s-master-01:/home/dadiaspe/.kube#
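One last note on the remaining errors above: DaemonSet was dropped from extensions/v1beta1 in v1.16 (it lives in apps/v1 now), so the kube-flannel.yml from the guide is too old for v1.19. Applying a current copy of the manifest should clear those last lines; this assumes the upstream flannel manifest is still published at the URL below:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml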