Hello team,

I have installed Kubernetes on CentOS.
After installation, I checked the version with the command below:

kubectl version -o json

I am getting the error below along with the version information:

{
  "clientVersion": {
    "major": "1",
    "minor": "11",
    "gitVersion": "v1.11.2",
    "gitCommit": "bb9ffb1654d4a729bb4cec18ff088eacc153c239",
    "gitTreeState": "clean",
    "buildDate": "2018-08-07T23:17:28Z",
    "goVersion": "go1.10.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Can you please check and advise?

Thanks,
Hemanth.

2 Likes

Hi.
Check that the API server is actually running and hasn’t crashed:

docker ps | grep kube-apiserver
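If Docker is not the container runtime (newer kubeadm setups use containerd, where `docker ps` shows nothing), a plain process check works too. A sketch:

```shell
# Look for the kube-apiserver process regardless of container runtime.
# The [k] trick stops grep from matching its own command line.
apiserver_line=$(ps aux | grep '[k]ube-apiserver' || echo "kube-apiserver not running")
echo "$apiserver_line"
```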

But the most likely problem, and the one that usually gets me, is that you don’t have a .kube directory with the right config in it. Try this:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
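To confirm kubectl will actually pick that file up, a small check along the same lines (the path logic mirrors kubectl's default of `$HOME/.kube/config` unless `KUBECONFIG` is set; the `--kubeconfig` flag and in-cluster cases aside):

```shell
# kubectl falls back to localhost:8080 only when it can find no kubeconfig,
# which is exactly the symptom in this thread.
KUBECONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG_PATH" ]; then
  echo "kubeconfig found: $KUBECONFIG_PATH"
  ls -l "$KUBECONFIG_PATH"   # should be owned by you, not root
else
  echo "no kubeconfig at $KUBECONFIG_PATH -> kubectl will try localhost:8080"
fi
```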
12 Likes

I checked with the below command:

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

It is throwing the error below:

cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory

I have also checked the following, and it says it is still running:

kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

But when I use curl localhost:8080, it throws the error below:

curl localhost:8080
curl: (7) Failed connect to localhost:8080; Connection refused
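A connection-refused from curl means nothing is listening on that port at all. As a quick check that needs neither curl nor kubectl, bash can probe the ports itself (8080 is the legacy insecure API port that kubectl falls back to with no kubeconfig; kubeadm-built clusters serve only on 6443 with TLS):

```shell
# Bash-only TCP probe using the /dev/tcp pseudo-device (a bashism; this
# will not work in plain sh). Prints one status line per port.
probe() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: something is listening"
  else
    echo "port $1: connection refused"
  fi
}
result_8080=$(probe 8080)
echo "$result_8080"
probe 6443
```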

I’m encountering the same issue.

user@sqa02:/etc/kubernetes$ mkdir -p $HOME/.kube
user@sqa02:/etc/kubernetes$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@sqa03:~# kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@sqa03:~# curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

To reproduce, follow the instructions provided in this TechRepublic article.

Hi there,
Any update on this? I have the same problem when I install minikube on Ubuntu 18.04.

BR

That error should only come up if you have no contexts configured in your client. If you run kubectl config view and you get something like this:

$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

Then no contexts are configured.
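For contrast, a minimally populated kubeconfig has exactly one entry in each of those lists. A sketch that writes a skeleton to a temp file (the server address and all names here are hypothetical placeholders, not values from any real cluster):

```shell
# Write a skeleton kubeconfig with one cluster, one context, and one user,
# to show the shape a working config must have.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://example-master:6443
  name: example
contexts:
- context:
    cluster: example
    user: example-admin
  name: example
current-context: example
users:
- name: example-admin
  user: {}
EOF
grep -c '^- ' "$cfg"   # three top-level list entries: cluster, context, user
```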

4 Likes

I have the same error, I tried running
kubectl config view
and I got the following:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

I tried running the command you suggested. Do you have any further suggestions?

This seems like another way to set up the cluster than the one described in that post (https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/). Or is it using the same steps?

Getting kubectl to run really depends on how you installed it. Basically, if you install it and have a proper config file, it should always work. So either an old file from a previous installation is still there, or something silly like that (although that is usually difficult to spot).

Also, make sure the commands don’t fail (some people in this thread pasted output showing that the step to copy the kubectl config failed). That step is how you authorize to the cluster, so kubectl will never work if it doesn’t succeed. 🙂

If I were you, I’d try removing everything from a previous run, starting from scratch, and making sure nothing fails. If a step does fail, try to fix that instead of continuing with the next steps. And if you can’t fix it, please report back with the steps you ran, why it failed (the error), and what you tried that didn’t work.

This way it will be easier to solve. 🙂

This really helped, thank you. I am running K8s 1.13.3 installed via Kubespray 2.7.

Run these commands to fix it:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
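Worth noting that an export like this only lasts for the current shell session. A sketch of persisting it, assuming a bash login shell and `~/.bashrc`:

```shell
# Append the export to ~/.bashrc once (grep -qxF avoids duplicate lines),
# so new shells pick up the same kubeconfig automatically.
line='export KUBECONFIG=$HOME/admin.conf'
grep -qxF "$line" ~/.bashrc 2>/dev/null || echo "$line" >> ~/.bashrc
tail -n 1 ~/.bashrc
```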

2 Likes

The commands below solved the problem for me:

mkdir ~/.kube
vim ~/.kube/config

and copy the contents of the config file from the master node into that file.

2 Likes

Good morning,

I’m beginning my studies in Kubernetes. I followed the tutorial (Install and Set Up kubectl - Kubernetes), and when I type “kubectl cluster-info” I receive the message “To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.
The connection to the server localhost:8080 was refused - did you specify the right host or port?”.
When I run kubectl cluster-info dump I receive the message “The connection to the server localhost:8080 was refused - did you specify the right host or port?”
The documentation I found says to look for the file admin.conf in the folder /etc/kubernetes, but when I do, I can’t find the folder.
What can I do?

I have recently started working on this as well, and I am running into the same brick wall.
I am running two nodes, one master and the other a host.
Both are VMs running CentOS in Oracle Virtual Manager.

I got Docker installed on the host and kubectl installed on the master.
I can ssh from the master to the host and vice versa, and I receive ping replies as well, but I cannot telnet into either machine from my physical Windows machine.

When it comes to “kubectl get nodes” I receive the error: The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://x.x.x.x:6443
  name: local
contexts:
- context:
    cluster: local
    user: kube-admin-local
  name: local
current-context: local
kind: Config
preferences: {}
users:
- name: kube-admin-local
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

I have added the IP address and port to iptables and tried again, and I also stopped firewalld, but I still got the same error.

You missed initialising the cluster:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Make sure you are executing the command as the right user.

3 Likes

Yes I had to be sure I used the “right user”. For me that was a user that had the admin.conf file copied to ~/.kube/config.

You can see the difference between the working “pi” user and the “root” user, who does not have the config file:

pi@node0:~ $ sudo kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
pi@node0:~ $ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-jjsjr         1/1     Running   0          11h
kube-system   coredns-f9fd979d6-xl5f2         1/1     Running   0          11h
kube-system   etcd-node0                      1/1     Running   0          11h
kube-system   kube-apiserver-node0            1/1     Running   0          11h
kube-system   kube-controller-manager-node0   1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-4zq4g       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-dcprj       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-fwzkl       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-q8t5k       1/1     Running   0          11h
kube-system   kube-proxy-dlhc7                1/1     Running   0          11h
kube-system   kube-proxy-glh92                1/1     Running   0          11h
kube-system   kube-proxy-jh26p                1/1     Running   0          11h
kube-system   kube-proxy-qflcw                1/1     Running   0          11h
kube-system   kube-scheduler-node0            1/1     Running   0          11h
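The difference above comes down to $HOME: kubectl reads $HOME/.kube/config, and under sudo HOME typically points at /root, where no config was copied. A sketch that simulates this without needing sudo or kubectl:

```shell
# Show which kubeconfig path kubectl would look for, first as the current
# user, then with HOME overridden to /root (as sudo effectively does).
config_path() { echo "kubectl would read: $HOME/.kube/config"; }
mine=$(config_path)
roots=$(HOME=/root config_path)
echo "$mine"
echo "$roots"
```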
4 Likes

Thanks AqueeluddinKhaja. I dropped the sudo and it worked.

1 Like

Execute the command:
sudo kubeadm init

I’ve got this output:
(base) jiribenes:~$ sudo kubeadm init
[sudo] password for jiribenes:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rurhelena1920] and IPs [10.96.0.1 172.30.147.238]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.503678 seconds
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.20” in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the labels “node-role.kubernetes.io/master=’’” and “node-role.kubernetes.io/control-plane=’’ (deprecated)”
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r1cctv.zgkk2aore4luh7wo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
Installing Addons | Kubernetes

Then you can join any number of worker nodes by running the following on each as root:

Mostly as a comment to folks: that article also installs weave-net, and it should not just be copied and pasted.