I am constantly getting this error while using containerd as the runtime, reloading the daemon and restarting the kubelet, but it is very frustrating. Has anyone faced the same issue with containerd?
iptables may cause issues after you reboot your instance.
sudo su
iptables -P INPUT ACCEPT
iptables -F
Please try this if you still have the issue, especially if it says connection to x.x.x.x:6xxx refused.
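A quick sanity check before and after, to confirm what is refusing the connection: probe the API server port directly (replace <control-plane-ip> with your control plane address; 6443 is the kubeadm default):
nc -vz <control-plane-ip> 6443                    # "Connection refused" here means nothing is listening on the port
curl -k https://<control-plane-ip>:6443/healthz   # usually prints "ok" once the API server is up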
I use a Mac M1, and after turning on Docker, it worked.
You need to open port 6443; refer to the official document: Installing kubeadm | Kubernetes
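For example, on Ubuntu with ufw as the firewall (an assumption; adjust for firewalld or cloud security groups):
sudo ufw allow 6443/tcp
sudo ufw status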
You must set the hostname, because when you call kubectl it makes an HTTPS request to the control plane.
Maybe your host or Kubernetes cannot work out the route to take, so we define it ourselves.
Run this command on the control plane:
sudo hostnamectl set-hostname "k8scontrolplaneORyourdomain.example.com"
and edit /etc/hosts:
127.0.0.1 k8scontrolplaneORyourdomain.example.com
or, using the public IP:
999.888.777.666 k8scontrolplaneORyourdomain.example.com
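To confirm the mapping took effect, a quick check:
getent hosts k8scontrolplaneORyourdomain.example.com    # should print the IP you configured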
Ref. https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/
Hi Friends,
I am facing issues while running the
kubectl get pods
command, and pods are not created.
The connection to the server 172.31.76.42:6443 was refused - did you specify the right host or port?
Sometimes it works, and after some time the same issue comes back automatically.
Please help me with resolution steps.
Below are my server details:
The connection to the server 172.31.76.42:6443 was refused - did you specify the right host or port?
root@k8s-master:/etc/cni/net.d# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.2 LTS"
root@k8s-master:/etc/cni/net.d#
Hi, good afternoon.
I am also facing the same issue; were you able to solve it?
I came across your post, which is similar to what I am facing.
I'd like to know how you managed it later.
Thanks, it worked for me.
But it worked only after restarting the server.
I was using an AWS Ubuntu 22.04 cloud instance as the control plane.
Actually, the issue was something else, and I was able to solve it.
I posted how I solved it:
I appended the following to /etc/profile:
export KUBECONFIG=/etc/kubernetes/admin.conf
source <(kubectl completion bash)
and after restarting all services the issue was temporarily resolved.
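For reference, a non-interactive way to append the same two lines (a sketch; adjust if you keep these in ~/.bashrc instead of /etc/profile):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' | sudo tee -a /etc/profile
echo 'source <(kubectl completion bash)' | sudo tee -a /etc/profile
source /etc/profile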
hello guys
I was struggling with this problem for about two days until I came across this link, and my problem was completely solved by the point mentioned there.
https://www.vnoob.com/2022/12/kubectl-6443-connection-refused/
Note that at the end, run the following two commands:
$ sudo systemctl restart containerd
$ sudo systemctl restart kubelet
- Note: The important point is that you must also have correctly set settings such as the API server IP address in the /etc/kubernetes/kubelet.conf file.
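A quick way to verify that, comparing the address the kubelet and kubectl point at (file paths are the kubeadm defaults):
sudo grep 'server:' /etc/kubernetes/kubelet.conf
sudo grep 'server:' /etc/kubernetes/admin.conf
# Both should show https://<control-plane-ip>:6443 and match the address in the refused-connection error.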
If you are facing this issue on WSL2, I recommend following this tutorial: How to Use Kubernetes For Free On WSL | DevOps
I have been using Vagrant/VirtualBox to spin up a cluster and kept encountering this error after trying all of the tricks and tips. I was able to get the cluster up after fixing two problems that kept crashing it no matter what I did.
1) Set up the config.toml for containerd so that containers would stop crashing.
"Kubelet had a hard time figuring out what's running versus what should be running and was killing my legitimate pods. The following config fixed the thing":
# Content of file /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
Source: kubernetes - Kube-apiserver Docker Shutting down, got signal: Terminated - Stack Overflow
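If you prefer to start from containerd's full default configuration rather than the minimal file above, this sketch regenerates it and flips SystemdCgroup (the sed pattern assumes the stock default config):
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd kubelet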
2) Before installing Calico, download and edit the manifest and specify the network adapter or IP address to use for the network.
Calico will try to autodetect the IP address, and this was an issue for Vagrant: it kept choosing the 10.0.2.15 IP (the default VirtualBox NAT address), and as a result the error would show. There are several ways you can set this; I chose to specify the eth adapter:
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "172.17.0.0/16"        # Replace with the pod CIDR you initialized with
- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s8"     # Replace with your adapter name
Save the calico.yaml file and apply it (kubectl apply -f calico.yaml); see the verification commands after the links below.
Calico Documentation: Configure IP autodetection | Calico Documentation
Manifest Tested: https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
Besides this, everything was the same as in the tutorials on setting up a Kubernetes cluster. For documentation's sake, I initialized the cluster using the following command:
sudo kubeadm init --control-plane-endpoint=controller.dev --pod-network-cidr=172.17.0.0/16 --apiserver-advertise-address=192.168.62.12
Note: The apiserver-advertise-address is assigned to the eth adapter specified in Calico and is a static IP on a private Vagrant network. I believe Calico needs its own network to communicate and you shouldn't use any public bridged networks, but let me know if I am wrong.
I hope this helps someone; this error haunted me for about a week and a half, but I now have a cluster that works!
Hey, I was facing the same issue too, so I scp'd the kubeconfig file from the master node to all the worker nodes for admin privileges. After exporting KUBECONFIG I hit the same issue multiple times, and then I understood it wasn't reaching the API. I checked /var/lib/kubelet/kubeconfig on the worker node, copied the API server address from its server: parameter, and replaced the address in the file I had scp'd from the master. After exporting it again, it worked fine.
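As a rough sketch of that fix (paths are kubeadm defaults; <master-ip> is a placeholder for the address taken from the kubelet's kubeconfig):
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config
grep 'server:' /var/lib/kubelet/kubeconfig                       # the address the kubelet actually uses
sed -i 's|server: .*|server: https://<master-ip>:6443|' ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes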
This issue is due to the API server restarting again and again; the API server or containerd restarts because the liveness and readiness probes fail. Check the manifest in /etc/kubernetes/manifests/kube-apiserver.yaml.
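You can watch those restarts directly through containerd with crictl (crictl ships with standard kubeadm installs):
sudo crictl ps -a | grep kube-apiserver        # repeated Exited entries indicate a crash loop
sudo crictl logs $(sudo crictl ps -a --name kube-apiserver -q | head -1)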
It works for me temporarily after:
$ sudo systemctl restart kubelet
kubeadm reset
Running the above-mentioned command on the master & node servers solved all my problems.
Master Server: Kali Linux (VM)
swapoff -a
rm /etc/kubernetes/kubelet.conf
rm /etc/kubernetes/pki/ca.crt
systemctl daemon-reload
systemctl start containerd
systemctl start docker
systemctl enable docker
systemctl start kubelet
systemctl enable kubelet
kubeadm reset
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
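Before joining the nodes, a quick health check on the master (assuming the kubeconfig set up above):
kubectl get nodes      # the master should go Ready once flannel is up
kubectl get pods -A    # kube-system and flannel pods should be Running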
Node Server 1: CentOS (VM)
swapoff -a
rm /etc/kubernetes/kubelet.conf
rm /etc/kubernetes/pki/ca.crt
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
systemctl start docker
systemctl enable docker
systemctl start kubelet
systemctl enable kubelet
kubeadm reset
kubeadm join ENTER_MASTER_SERVER_PUBLIC_IP:6443 --token xjcvh0.zyn31te4go5sj55n --discovery-token-ca-cert-hash sha256:064d364a596f04aca2d15c3544cf9774a66041d46f50a28c78bda4756e9f78c2
Node Server 2: Alma Linux (VM)
swapoff -a
rm /etc/kubernetes/kubelet.conf
rm /etc/kubernetes/pki/ca.crt
systemctl daemon-reload
systemctl start containerd
systemctl start docker
systemctl enable docker
systemctl start kubelet
systemctl enable kubelet
kubeadm reset
kubeadm join ENTER_MASTER_SERVER_PUBLIC_IP:6443 --token xjcvh0.zyn31te4go5sj55n --discovery-token-ca-cert-hash sha256:064d364a596f04aca2d15c3544cf9774a66041d46f50a28c78bda4756e9f78c2
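Note that bootstrap tokens expire after 24 hours by default, so if a join fails with an authentication error you can mint a fresh command on the master (standard kubeadm, not specific to this setup):
sudo kubeadm token create --print-join-command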
Remove IP Restrictions:
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
Run the above command on all servers to allow all IPs & ports if any connection refused error occurs. Allowing all IPs and ports is very insecure; use the above for testing purposes only.
Use the above-mentioned link to allow only the specific ports on the master and node servers if any connection refused error occurs; an example with ufw is below.
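For example, with ufw you can open just the ports kubeadm needs (the list follows the official Kubernetes "Ports and Protocols" documentation):
# Control plane
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd server client API
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 10257/tcp         # kube-controller-manager
sudo ufw allow 10259/tcp         # kube-scheduler
# Worker nodes
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort Services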
It's working! Thanks for the valuable information, cheers!
@brian_the_wall Did you fix the issue by any chance? I've tried recreating the cluster but no luck
It also worked with docker-desktop Kubernetes, just by removing swap from the resources. Thanks so much!!!