The connection to the server <host>:6443 was refused - did you specify the right host or port?

Hello guys. Everything was working on a Kubernetes cluster with one master and three nodes. After restarting all the machines, I started receiving this error when trying to run any kubectl command.

#> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 135.122.6.50:6443 was refused - did you specify the right host or port?

#> kubectl get nodes
The connection to the server 135.122.6.50:6443 was refused - did you specify the right host or port?

This is what I see when running docker logs on the kube-apiserver container:

E0530 12:47:01.060000 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://135.122.6.50:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 135.122.6.50:6443: getsockopt: connection refused
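For anyone hitting this, a first check is whether the apiserver is actually listening on the master. This is a generic diagnostic sketch, not specific to this cluster; it assumes a kubeadm setup with Docker as the runtime, as in this thread:

```shell
# Check whether anything is listening on the apiserver port (6443)
sudo ss -tlnp | grep 6443 || echo "nothing listening on 6443"

# On a kubeadm/Docker setup, check whether the control-plane containers are up
sudo docker ps | grep -E 'kube-apiserver|etcd'
```

If nothing is listening and the containers are missing, the problem is usually that the kubelet (which starts them) is not running, rather than a kubectl configuration issue.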

#> kubectl cluster-info
Kubernetes master is running at https://135.122.6.50:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 135.122.6.50:6443 was refused - did you specify the right host or port? 

Does anyone know what I need to do to fix this?

Thanks a lot for any suggestions


Hi, can you run kubectl cluster-info and update the question with the result?


Thanks suvi29 for your quick reply. I have updated the question with the output of the command you asked me to run.

Thanks

It seems my issue is related to this one:

Does anyone have a tip?

Hi, I have the same problem as you.
When I restart the Ubuntu machine and type kubectl get nodes,
it always shows:

change@change-VirtualBox:~$ kubectl get nodes
The connection to the server 10.0.2.15:6443 was refused - did you specify the right host or port?

Here is how I solved it:

  1. sudo -i
  2. swapoff -a
  3. exit
  4. strace -eopenat kubectl version

After that, you can type kubectl get nodes again and it works.
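To make this fix survive a reboot (otherwise swap comes back and the same error reappears, as happened to the original poster), you can also comment out the swap entries in /etc/fstab. A minimal sketch, assuming a standard fstab layout; back the file up first:

```shell
# Turn swap off for the current boot (same as step 2 above)
sudo swapoff -a

# Comment out any swap lines in /etc/fstab so swap stays off after reboot
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab
```

The sed expression only touches lines containing a whitespace-delimited "swap" field and is idempotent (already-commented lines stay commented once).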


Wow, this is amazing, it worked.
Sometimes this feels like black magic. I don't get it: what does swap have to do with connecting to a server?
If you know why it worked, I would really appreciate an explanation. I want to understand, bit by bit, why and how things work or fail. Anyway, you made my day with this solution.
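To answer the "why": since Kubernetes 1.8 the kubelet refuses to start while swap is enabled (its --fail-swap-on flag defaults to true), and on a kubeadm cluster the apiserver itself is a static pod started by the kubelet. So with swap on, the kubelet exits, the apiserver never comes up, and port 6443 refuses connections. A minimal sketch to check the swap state, reading /proc/swaps (which always has one header line):

```shell
# /proc/swaps contains a header line; any extra lines mean swap is active
if [ "$(wc -l < /proc/swaps)" -gt 1 ]; then
    echo "swap is ON - kubelet will refuse to start unless --fail-swap-on=false"
else
    echo "swap is OFF"
fi
```

The kubelet journal (journalctl -u kubelet) usually states the swap problem explicitly when this is the cause.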


Thank you changec, it worked!

I’m facing the same problems mentioned here. I’m running a 3-node cluster on DigitalOcean, with CentOS 7.5 droplets as the base OS and the latest Kubernetes (1.12.0) installed.

Everything was working fine until I rebooted the server/droplet. Then the ‘connection refused’ message appeared. I tried ‘swapoff -a’ and the ‘strace’ command; neither worked for me.

Any help/suggestion is very much appreciated.
Thanks!


Additional info: after the reboot, only the kube-scheduler Docker container is running.
The other Docker containers have stopped, i.e.

  • kube-apiserver
  • kube-controller
  • kube-proxy

Any idea why this is so? How can I get these containers to restart automatically on start-up?
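On the restart question: on a kubeadm master those containers are static pods that the kubelet itself (re)creates at boot from manifests in /etc/kubernetes/manifests, so they come back automatically only if the kubelet service is enabled and healthy. A sketch of what to check, assuming the kubeadm default paths:

```shell
# The static pod manifests the kubelet starts at boot
ls /etc/kubernetes/manifests/

# Make sure the kubelet service starts on boot and is currently running
sudo systemctl enable kubelet
sudo systemctl status kubelet --no-pager

# If it keeps failing, the reason is usually in its journal
sudo journalctl -u kubelet --no-pager | tail -n 50
```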

@changec: it worked perfectly, even without the strace command; only the swapoff command was required.
You also have to do this on all the nodes if you have a cluster set up; otherwise those nodes will show a NotReady state.

My question is: why does Kubernetes require swap to be disabled? Will Kubernetes not work with swap enabled? I assume not; I tried, and I got an error during kubeadm init, and it only worked after swapoff.

If you know anything definitive about this, please share. Thanks.

Hi, maybe you can check this topic:


Hi,
Thanks for the reply; the suggested topic clears up my doubt.

Hi, I’m facing the same problem, and I’m pretty sure @changec’s way does not work for me (there is no swap partition on my machine).
Is there another solution?

Hi, I have the same problem, and the solutions listed above did not help me. My system runs on VMware Workstation 15 Player:

denis@ubuntu:~/dev$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic

When I try to execute kubectl command:

denis@ubuntu:~/dev$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The logs of the restarted kubelet:

янв 30 15:58:45 ubuntu systemd[1]: Started kubelet: The Kubernetes Node Agent.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --allow-privileged has been deprecated, will be removed in a future version
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
янв 30 15:58:45 ubuntu kubelet[7651]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.666637    7651 feature_gate.go:226] feature gates: &{{} map[]}
янв 30 15:58:45 ubuntu kubelet[7651]: W0130 15:58:45.676051    7651 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
янв 30 15:58:45 ubuntu kubelet[7651]: W0130 15:58:45.681294    7651 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.681495    7651 server.go:376] Version: v1.10.5
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.681615    7651 feature_gate.go:226] feature gates: &{{} map[]}
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.681772    7651 plugins.go:89] No cloud provider specified.
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.683749    7651 certificate_store.go:117] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731337    7651 server.go:614] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731677    7651 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731697    7651 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731866    7651 container_manager_linux.go:266] Creating device plugin manager: true
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731907    7651 state_mem.go:36] [cpumanager] initializing new in-memory state store
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731963    7651 state_mem.go:84] [cpumanager] updated default cpuset: ""
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.731990    7651 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.732149    7651 kubelet.go:273] Adding pod path: /etc/kubernetes/manifests
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.732175    7651 kubelet.go:298] Watching apiserver
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.739359    7651 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://192.168.10.68:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.10.68:6443: getsockopt: connection refused
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.739438    7651 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.10.68:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu&limit=500&resourceVersion=0: dial tcp 192.168.10.68:6443: getsockopt: connection refused
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.739668    7651 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://192.168.10.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu&limit=500&resourceVersion=0: dial tcp 192.168.10.68:6443: getsockopt: connection refused
янв 30 15:58:45 ubuntu kubelet[7651]: W0130 15:58:45.740027    7651 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.740140    7651 kubelet.go:558] Hairpin mode set to "hairpin-veth"
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.741209    7651 client.go:75] Connecting to docker on unix:///var/run/docker.sock
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.741295    7651 client.go:104] Start docker client with request timeout=2m0s
янв 30 15:58:45 ubuntu kubelet[7651]: W0130 15:58:45.742361    7651 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
янв 30 15:58:45 ubuntu kubelet[7651]: W0130 15:58:45.744173    7651 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.745059    7651 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.756618    7651 docker_service.go:249] Docker Info: &{ID:PCAF:JPRA:TPT2:IMHM:QVJV:F4HB:MAR4:7LQ4:556Z:I4SC:3CHJ:CR5G Containers:49 ContainersRunning:0 ContainersPaused:0 ContainersStopped:49 Images:43 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:34 SystemTime:2019-01-30T15:58:45.751810456+03:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-44-generic OperatingSystem:Ubuntu 18.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc42055f2d0 NCPU:2 MemTotal:4112048128 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu Labels:[] ExperimentalBuild:false ServerVersion:18.03.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:773c489c9c1b21a6d78b5c538cd395416ec50f88 Expected:773c489c9c1b21a6d78b5c538cd395416ec50f88} RuncCommit:{ID:4fc53a81fb7c994640722ac585fa9ca548971871 Expected:4fc53a81fb7c994640722ac585fa9ca548971871} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.757187    7651 docker_service.go:262] Setting cgroupDriver to cgroupfs
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.780788    7651 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.781780    7651 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.03.1-ce, apiVersion: 1.37.0
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.782118    7651 csi_plugin.go:63] kubernetes.io/csi: plugin initializing...
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.785024    7651 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.785473    7651 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.785497    7651 status_manager.go:140] Starting to sync pod status with apiserver
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.785505    7651 kubelet.go:1782] Starting kubelet main sync loop.
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.785513    7651 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.785751    7651 server.go:129] Starting to listen on 0.0.0.0:10250
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.785898    7651 event.go:209] Unable to write event: 'Post https://192.168.10.68:6443/api/v1/namespaces/default/events: dial tcp 192.168.10.68:6443: getsockopt: connection refused' (may retry after sleeping)
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.786202    7651 server.go:299] Adding debug handlers to kubelet server.
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.787063    7651 volume_manager.go:247] Starting Kubelet Volume Manager
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.787905    7651 server.go:945] Started kubelet
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.788048    7651 desired_state_of_world_populator.go:129] Desired state populator starts to run
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.886497    7651 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.887527    7651 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
янв 30 15:58:45 ubuntu kubelet[7651]: I0130 15:58:45.889846    7651 kubelet_node_status.go:82] Attempting to register node ubuntu
янв 30 15:58:45 ubuntu kubelet[7651]: E0130 15:58:45.890164    7651 kubelet_node_status.go:106] Unable to register node "ubuntu" with API server: Post https://192.168.10.68:6443/api/v1/nodes: dial tcp 192.168.10.68:6443: getsockopt: connection refused

When I restart the kubelet, I observe that the “k8s” containers start, but after several seconds they stop automatically.

denis@ubuntu:~/dev$ docker container ls
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS               NAMES
1d0275629b45        4fb6852fef47                      "kube-apiserver --re…"   18 seconds ago      Up 18 seconds                           k8s_kube-apiserver_kube-apiserver-ubuntu_kube-system_54717c7acb06777aaa0da90e6b43d518_1
1a19e90b1ff0        k8s.gcr.io/etcd-amd64             "etcd --advertise-cl…"   21 seconds ago      Up 20 seconds                           k8s_etcd_etcd-ubuntu_kube-system_e7aaf2590e8bec8679770ef31422f566_0
7b3c17234b54        k8s.gcr.io/kube-scheduler-amd64   "kube-scheduler --ad…"   37 seconds ago      Up 36 seconds                           k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_ab7798e80dac8c9d88788f8e132924b1_0
805ee8bf5e38        k8s.gcr.io/pause-amd64:3.1        "/pause"                 41 seconds ago      Up 39 seconds                           k8s_POD_kube-apiserver-ubuntu_kube-system_54717c7acb06777aaa0da90e6b43d518_0
a6daeb3ba870        k8s.gcr.io/pause-amd64:3.1        "/pause"                 41 seconds ago      Up 39 seconds                           k8s_POD_etcd-ubuntu_kube-system_e7aaf2590e8bec8679770ef31422f566_0
0762c7cfc6f6        k8s.gcr.io/pause-amd64:3.1        "/pause"                 41 seconds ago      Up 39 seconds                           k8s_POD_kube-scheduler-ubuntu_kube-system_ab7798e80dac8c9d88788f8e132924b1_0

The kubeadm version is:

denis@ubuntu:~/dev$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:34:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
denis@ubuntu:~/dev$ 

The kubectl version:

denis@ubuntu:~/dev$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The kubelet version (note: kubelet has no "version" subcommand; "kubelet --version" prints the version, so the plain invocation below actually tried to start the kubelet, which is why it printed the certificate error):

denis@ubuntu:~/dev$ kubelet version
I0130 17:45:00.046730   12766 feature_gate.go:226] feature gates: &{{} map[]}
F0130 17:45:00.046938   12766 server.go:218] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair

The docker version is:

denis@ubuntu:~/dev$ docker version
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Wed Jun 20 21:43:51 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Wed Jun 20 21:42:00 2018
  OS/Arch:      linux/amd64
  Experimental: false
denis@ubuntu:~/dev$ 

Please help. I’m ready to provide any information needed.

Hello, after increasing disk space the problem was solved. Unfortunately, the logs contained no records pointing to the disk-space problem. Five days were spent finding the cause, since it was not obvious.

kubectl get nodes

may have shown that the node had DiskPressure. That is something I didn’t see in your post.

azzaka, thank you for the answer, but “kubectl get nodes” was useless in my case, because the cluster did not start properly (none of the main components such as the apiserver, etcd, etc. were running). So I was getting “connection refused”, as mentioned in the first post.

I have hit the same issue on CentOS 7, even after disabling swap.

But I managed to run kubectl get nodes by explicitly passing the --kubeconfig parameter. This is not as described in the Kubernetes docs.

I have set the KUBECONFIG environment variable, but kubectl does not seem to pick it up.

I had the exact same error today, and after some investigating (and trying the swapoff thing), I found that my /tmp directory was filling up and was at 100%. I grew that directory and it solved the problem. I may look into cleaning up /tmp more frequently if it continues to be an issue.
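A way to keep an eye on this, sketched under the assumption that /tmp filling up is the trigger; review the file list before deleting anything:

```shell
# How full is /tmp right now?
df -h /tmp

# List files in /tmp untouched for more than 7 days (cleanup candidates)
sudo find /tmp -mindepth 1 -mtime +7 -print
```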

Hi,

I had exactly the same issue, and I was able to resolve it using the command below:

export KUBECONFIG=/etc/kubernetes/kubelet.conf

Sometimes it affects the master node as well, but in that case another issue appears, something like:

unable to connect to the server: x509: certificate signed by unknown authority
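Related to the export above: pointing KUBECONFIG at kubelet.conf can work around the error, but kubelet.conf carries the node's own client credentials rather than cluster-admin ones, which, as I understand it, is why x509/authorization errors like the one above can then appear. The kubeadm-documented setup for running kubectl as a regular user copies admin.conf instead:

```shell
# Standard kubeadm post-init setup for kubectl as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```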
