Worker node can't join admin node

Cluster information:

Kubernetes version: v1.15.0
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: ubuntu desktop 18
CNI and version: flannel
CRI and version: docker

I had network problems with my cluster, so I ran kubeadm reset on both nodes, cleared the iptables rules and leftover IP links, and ran kubeadm init on the master; everything seemed fine. But when I try to join the cluster from the worker node, I get the following error:
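For reference, the reset/cleanup sequence described above typically looks something like this (the CNI interface names `cni0` and `flannel.1` are assumptions based on flannel being the CNI, and the pod CIDR is flannel's default):

```shell
# On both master and worker: wipe the old cluster state
sudo kubeadm reset -f

# Flush iptables rules left behind by kube-proxy / flannel
sudo iptables -F && sudo iptables -t nat -F \
  && sudo iptables -t mangle -F && sudo iptables -X

# Remove leftover CNI interfaces (names assume flannel)
sudo ip link delete cni0
sudo ip link delete flannel.1

# Then, on the master only:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```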

Error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to decode cluster configuration data: no kind "ClusterConfiguration" is registered for version "" in scheme "

Were you able to fix this issue? I am running into the same problem.

I was able to resolve this by reinstalling the k8s components on the worker node:

on nodes:
sudo yum remove kubeadm kubectl kubelet
sudo yum install kubeadm kubectl kubelet

Then I was able to join the cluster using kubeadm.
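For completeness, the join command itself is printed at the end of `kubeadm init`; the address, token, and hash below are placeholders, not real values:

```shell
# Run on the worker node. <token> and <hash> come from the
# `kubeadm init` output on the master, or can be regenerated with:
#   kubeadm token create --print-join-command
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```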

I did fix this. The problem was that the nodes had different kubelet versions. After updating all nodes to the same version, they could join.
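A quick way to confirm the versions match on every node before joining (the pinned package version is an example matching the v1.15.0 cluster above, and assumes the standard Kubernetes apt repository on Ubuntu):

```shell
# Run on every node and compare the output
kubelet --version
kubeadm version -o short

# If they differ, install a matching version, e.g. on Ubuntu:
sudo apt-get update
sudo apt-get install -y kubelet=1.15.0-00 kubeadm=1.15.0-00 kubectl=1.15.0-00
```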

Great, thanks for sharing the solution 🙂

Hi All,

I am still getting this issue after reinstalling kubelet on the nodes. I get this error when I run kubeadm upgrade plan on the master.