The connection to the server localhost:8080 was refused - did you specify the right host or port?

Run these commands to fix it:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
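
If the copy worked, kubectl should now reach the real API server instead of localhost:8080. A quick sanity check, using only standard kubectl commands:

kubectl cluster-info
kubectl get nodes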


The commands below solved the problem for me:
mkdir ~/.kube
vim ~/.kube/config
Then copy the contents of the config file from the master node into the node's config file.
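
If you'd rather not paste the contents by hand, the file can also be copied over SSH; a sketch assuming the master is reachable as master-node (hypothetical hostname) and the remote user can read admin.conf:

mkdir -p ~/.kube
scp root@master-node:/etc/kubernetes/admin.conf ~/.kube/config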


Good morning,

I'm beginning my studies in Kubernetes, following the tutorial (Install and Set Up kubectl - Kubernetes), and when I type “kubectl cluster-info” I receive the message “To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.
The connection to the server localhost:8080 was refused - did you specify the right host or port?”.
When I run kubectl cluster-info dump I receive the message "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
The documentation I found says to look for the file admin.conf in the folder /etc/kubernetes, but when I look, I can't find the folder.
What can I do?

I have just started working on this as well and I am running into the same brick wall.
I am running two nodes, one master and the other a host.
Both are VMs running CentOS in Oracle Virtual Manager.

I got Docker installed on the host and kubectl installed on the master.
I can SSH from master to host and vice versa, and I receive ping replies as well, but I cannot telnet into either machine from my physical Windows machine.

When it comes to “kubectl get nodes” I receive the error: The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://x.x.x.x:6443
  name: local
contexts:
- context:
    cluster: local
    user: kube-admin-local
  name: local
current-context: local
kind: Config
preferences: {}
users:
- name: kube-admin-local
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

I added the IP address and port to iptables and tried again, and I also stopped firewalld, but I still got the same error.
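
Before stopping firewalld outright, it may be worth confirming that the API server is actually listening on the master, and then opening just the required ports (6443 and 10250, per the kubeadm preflight warning quoted later in this thread). A sketch:

sudo ss -tlnp | grep 6443        # check whether anything is listening on the API server port
sudo systemctl status kubelet    # check whether the kubelet is running at all
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload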

You missed initialising the cluster:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
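
Once init completes, set up the kubeconfig as a regular user, exactly as the kubeadm output quoted later in this thread instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config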

Make sure you are executing the command as the right user.


Yes, I had to be sure I used the “right user”. For me that was a user that had the admin.conf file copied to ~/.kube/config.

You can see the difference between the working “pi” user and the “root” user, who does not have the config file:

pi@node0:~ $ sudo kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
pi@node0:~ $ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-jjsjr         1/1     Running   0          11h
kube-system   coredns-f9fd979d6-xl5f2         1/1     Running   0          11h
kube-system   etcd-node0                      1/1     Running   0          11h
kube-system   kube-apiserver-node0            1/1     Running   0          11h
kube-system   kube-controller-manager-node0   1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-4zq4g       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-dcprj       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-fwzkl       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-q8t5k       1/1     Running   0          11h
kube-system   kube-proxy-dlhc7                1/1     Running   0          11h
kube-system   kube-proxy-glh92                1/1     Running   0          11h
kube-system   kube-proxy-jh26p                1/1     Running   0          11h
kube-system   kube-proxy-qflcw                1/1     Running   0          11h
kube-system   kube-scheduler-node0            1/1     Running   0          11h
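
The sudo failure above is just a kubeconfig lookup: run under sudo, kubectl looks in root's home directory, which has no config. If you really do need to run kubectl as root, the kubeadm init output quoted later in this thread points KUBECONFIG at admin.conf instead; a sketch:

sudo su -
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get pods --all-namespaces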

Thanks AqueeluddinKhaja. I dropped the sudo and it worked.


Execute this command:
sudo kubeadm init

I’ve got this output:
(base) jiribenes:~$ sudo kubeadm init
[sudo] password for jiribenes:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rurhelena1920] and IPs [10.96.0.1 172.30.147.238]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.503678 seconds
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.20” in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the labels “node-role.kubernetes.io/master=’’” and “node-role.kubernetes.io/control-plane=’’ (deprecated)”
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r1cctv.zgkk2aore4luh7wo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
Installing Addons | Kubernetes

Then you can join any number of worker nodes by running the following on each as root:

Mostly as a comment to folks: that also installs weave-net and should not just be copied/pasted.
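
If you would rather apply a network manifest yourself, flannel is the one visible in the pod listing earlier in this thread, and it expects the --pod-network-cidr=10.244.0.0/16 used above. A sketch, assuming the manifest URL published by the flannel project is still current (check the flannel docs before pasting):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml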

If it fails even with the root credential, it could be an issue with the downloaded binary file. Check out the link below to reinstall kubectl:
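
For reference, the official install page (the same tutorial linked earlier in this thread) documents downloading the binary directly. A sketch for Linux amd64, assuming the documented download path is still current:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl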

Using the right user fixed the issue for me. I set up the cluster using the kops method in AWS.
I logged into the cluster as the ‘ubuntu’ user with the command ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.xyz.net
When I switched to the root account, kubectl get nodes stopped working. It works only for the ubuntu user.
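
That matches the kubeconfig explanation above: kops writes the config for the login user, and root has no ~/.kube/config of its own. One workaround sketch, assuming the ubuntu user's home is in the default location:

sudo su -
export KUBECONFIG=/home/ubuntu/.kube/config
kubectl get nodes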

Yes, this is the exact solution to this issue. If you set up the environment for a particular user, then you need to run kubectl while logged in as that same user; if you run it as another user, or as root, it will throw this error.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run these commands and it will resolve the error.


It worked for me, too. All I had to do was use the same non-root user that issued the command “minikube start --vm-driver=docker”.
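
A quick way to confirm kubectl and minikube agree on the same user and context (standard commands, nothing assumed beyond the thread's setup):

minikube start --vm-driver=docker
kubectl config current-context    # should print “minikube”
kubectl get nodes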


Resolved:

I was able to replicate the same error and fix it using Hawaiideveloper's solution:

[solved] The connection to the server localhost:8080 was refused - did you specify the right host or port? [Ubuntu VM] · Issue #15 · Hawaiideveloper/Infastructure-as-Code-Sample_Env · GitHub

I was also suffering from the “kubectl: the connection to the server localhost:8080 was refused” issue.

The solution for this issue is to run kubectl as the appropriate user. Don't use sudo; using sudo will give this error. Only the user under which the master node was set up is able to use kubectl.

It doesn’t have to be on the control plane node. They just need their kubeconfig configured.
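
In other words, any user on any machine that can reach the API server just needs a copy of a valid kubeconfig. A minimal sketch, with /path/to/admin.conf as a placeholder for wherever you copied the file:

export KUBECONFIG=/path/to/admin.conf
kubectl get nodes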

@mrbobbytables, can you please suggest what needs to be done to set up contexts? I am getting similar output to what you suggest indicates no context configured.

First, check whether you are running the commands on the master or on the workers; then you can try the suggestions above.
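
A hedged sketch for both checks: a kubeadm control-plane node carries the static pod manifests, and kubectl can list which contexts your kubeconfig actually knows about (<context-name> is a placeholder):

ls /etc/kubernetes/manifests       # present on a kubeadm control-plane node
kubectl config get-contexts        # lists the contexts in your kubeconfig
kubectl config use-context <context-name>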