The connection to the server localhost:8080 was refused - did you specify the right host or port?

If it fails even with root credentials, the problem is probably the downloaded binary. Check the link below to reinstall kubectl:

Using the right user fixed the issue for me. I set up the cluster in AWS using kops.
I logged into the cluster as the ‘ubuntu’ user with the command ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.xyz.net
When I switched to the root account, kubectl get nodes stopped working. It works only for the ubuntu user.
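
A quick way to confirm this is to check which accounts actually have a kubeconfig; the paths below assume a kops-style setup where only the login user was configured:

$ # as the ubuntu user: the kubeconfig should exist and be owned by ubuntu
$ ls -l $HOME/.kube/config
$ # as root: this file is typically missing, which is why kubectl falls back to localhost:8080
$ sudo ls -l /root/.kube/config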

Yes, this is the exact fix for this issue. If you set up the environment for one user, you need to run kubectl while logged in as that same user; if you run it as another user or as root, it will throw this error.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run these commands and the error will be resolved.

It worked for me, too. All I had to do was use the same (non-root) user that issued the command minikube start --vm-driver=docker
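
As a minimal sketch, assuming a regular user who is in the docker group (newer minikube versions spell the flag --driver):

$ # start the cluster as a regular user, never with sudo
$ minikube start --driver=docker
$ # kubectl now reads the context minikube wrote into $HOME/.kube/config
$ kubectl get nodes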

Resolved:

I was able to replicate the same error and fix it using Hawaiideveloper’s solution:

“[solved] The connection to the server localhost:8080 was refused - did you specify the right host or port? [Ubuntu VM] · Issue #15 · Hawaiideveloper/Infastructure-as-Code-Sample_Env · GitHub”

I was also suffering from the “kubectl the connection to the server was refused 8080” issue.

The solution for this issue is to run kubectl as the appropriate user. Don’t use sudo; using sudo will give this error. And only the user that the master node was set up under is able to use kubectl.

It doesn’t have to be on the control plane node. They just need their kubeconfig configured.
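
For example, a kubeconfig can be copied to any machine or user that should run kubectl; the hostname below is hypothetical:

$ # on the machine that should run kubectl
$ mkdir -p $HOME/.kube
$ scp user@control-plane.example.com:~/.kube/config $HOME/.kube/config
$ kubectl get nodes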

@mrbobbytables can you please suggest what needs to be done to set up contexts? I am getting output similar to what you said indicates no context is configured.

First you have to check whether you are running the commands on the master or on a worker;

then you can try the suggestions above.
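
One way to tell them apart on a kubeadm cluster is that only control plane nodes carry the admin kubeconfig and the static pod manifests:

$ # present on control plane nodes, absent on workers (kubeadm layout)
$ ls /etc/kubernetes/admin.conf
$ ls /etc/kubernetes/manifests/kube-apiserver.yaml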

Hey, @Sankalp_Suryawanshi
First, check the available contexts with

kubectl config get-contexts

If a context is there, you can use it.
Go through this link; commands and examples are available.
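
As a short example of switching to a listed context (the context name my-cluster is hypothetical):

$ kubectl config use-context my-cluster
$ # confirm which context is active
$ kubectl config current-context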

Use it without sudo.


@Sankalp_Suryawanshi Please use it without root or sudo; it will work.


Hi, thanks for the tips. It helped take the frustration away.

This worked for me. Thanks for the help.

This one worked for me

It helped me a lot. 🙂

This exact error was thrown from a node where I was supposed to create a static pod.
Based on the lead by “elp”, I scp’d the whole .kube directory from the control node to the worker. Basically there was no .kube present on node01.
scp -r .kube node01:/root did the trick for me.
Thanks all! 🙂

The real port can be different from 8080.
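
If you are unsure where the API server actually listens, the address is recorded in the kubeconfig (kubeadm clusters default to port 6443, not 8080):

$ kubectl cluster-info
$ # or read the server line straight from the kubeconfig
$ grep server $HOME/.kube/config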

You most probably followed these 4 lines:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=/etc/kubernetes/admin.conf

-------- root cause:
$ export KUBECONFIG=/etc/kubernetes/admin.conf
The owner of /etc/kubernetes/admin.conf is root, so a non-root user cannot read it.

------- workaround:
$ export KUBECONFIG=$HOME/.kube/config
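
To make the workaround survive new shells, you can persist it in your shell profile (assuming bash):

$ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
$ source $HOME/.bashrc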
