Not able to join node to Master

Hello,

I am not able to join a node to the Kubernetes master. Earlier I was able to join the node, but I had some issues on the master, so I had to reset it with the kubeadm reset command; after that I was able to successfully access the Kubernetes dashboard again.

However, when I try to join the node to the master, I get the following error:
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s

I have also run kubeadm reset on the node, but I still get the above error.

Please note that this is the second time I am joining node to master.

Any help is greatly appreciated!


@peeyush, were you able to fix the issue? If so, can you explain how you fixed it?

I was not able to fix it. I tried several approaches, but none of them worked. In the end, I had to reinstall Kubernetes on both VMs.

I have the same issue when re-joining a node after draining it. Any idea why this happens?

Use "kubeadm token create" on the master node to create a new valid token, then try again.
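As a sketch of that suggestion: kubeadm's --print-join-command flag emits the complete join command, including a fresh token and the CA cert hash, so you don't have to assemble it by hand. The snippet below is guarded so it can run on a machine without kubeadm; the fallback string is only a hypothetical placeholder showing the shape of the command.

```shell
# Sketch: generate a fresh token and the matching join command on the master.
if command -v kubeadm >/dev/null 2>&1; then
  # --print-join-command prints the full "kubeadm join ..." line with a new token.
  JOIN_CMD=$(kubeadm token create --print-join-command)
else
  # Hypothetical placeholder for machines without kubeadm installed.
  JOIN_CMD="kubeadm join <apiserver>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
fi
echo "$JOIN_CMD"
```

Run the printed command on the worker node (after a kubeadm reset there) to rejoin it.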

I had the same problem and solved it like this:
Basically, I created a new token on 'kube-master' and rejoined from 'kube-node2' using the new 'token' and 'hash' values.

kube-master

[root@kube-master ~]# kubectl drain kube-node2 --ignore-daemonsets --delete-local-data
node/kube-node2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-vl4cl, kube-system/kube-proxy-7phrj
node/kube-node2 drained
[root@kube-master ~]# kubectl get nodes
NAME          STATUS                        ROLES    AGE   VERSION
kube-master   Ready                         master   18d   v1.16.2
kube-node1    Ready                         <none>   18d   v1.16.2
kube-node2    NotReady,SchedulingDisabled   <none>   18d   v1.16.2
[root@kube-master ~]# kubectl delete node kube-node2
node "kube-node2" deleted
[root@kube-master ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
kube-master   Ready    master   18d   v1.16.2
kube-node1    Ready    <none>   18d   v1.16.2
[root@kube-master ~]#

kube-node2

[root@kube-node2 ~]# kubeadm reset
[root@kube-node2 ~]# kubeadm join 192.168.56.2:6443 --token 8hbube.3ovvd50qotfnb8un --discovery-token-ca-cert-hash sha256:5340ec383b25e0c52736970727c4a6f4c8b4ace09c023e1e9e9d26eb037fa9fe
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09

error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher
[root@kube-node2 ~]#
<== HANG

kube-master

[root@kube-master ~]# kubeadm token create
zfvcf0.domneur62fwy33mx
[root@kube-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
zfvcf0.domneur62fwy33mx   23h   2019-11-08T17:17:44+09:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
[root@kube-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
5340ec383b25e0c52736970727c4a6f4c8b4ace09c023e1e9e9d26eb037fa9fe
[root@kube-master ~]#
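For anyone unsure what that openssl pipeline computes: the --discovery-token-ca-cert-hash is the SHA-256 digest of the cluster CA's DER-encoded public key. The sketch below runs the same pipeline against a throwaway self-signed certificate so it is safe to try anywhere; on a real control plane you would point CA_CRT at /etc/kubernetes/pki/ca.crt instead.

```shell
# Generate a throwaway self-signed cert standing in for the cluster CA
# (on a real master, set CA_CRT=/etc/kubernetes/pki/ca.crt instead).
CA_CRT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=kubernetes" -days 1 -out "$CA_CRT" 2>/dev/null

# Same pipeline as in the transcript: extract the public key,
# DER-encode it, SHA-256 it, and keep only the hex digest.
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$HASH"
rm -f "$CA_CRT"
```

The sha256:<digest> value this prints is exactly what kubeadm join expects after --discovery-token-ca-cert-hash.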

kube-node2

[root@kube-node2 ~]# kubeadm reset
[root@kube-node2 ~]# kubeadm join 192.168.56.2:6443 --token zfvcf0.domneur62fwy33mx --discovery-token-ca-cert-hash sha256:5340ec383b25e0c52736970727c4a6f4c8b4ace09c023e1e9e9d26eb037fa9fe
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kube-node2 ~]#
<== SUCCESS


Open port 6443 on the Kubernetes master's firewall so the worker nodes can reach the API server.
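A quick way to check that from the worker's side before retrying the join (assuming the master address 192.168.56.2 used in the transcripts above; /dev/tcp is a bash feature):

```shell
# Probe TCP connectivity to the API server port from the worker node.
MASTER=192.168.56.2   # assumption: master address from the transcripts above
if timeout 3 bash -c "exec 3<>/dev/tcp/$MASTER/6443" 2>/dev/null; then
  STATUS=open
else
  STATUS=closed
fi
echo "port 6443 on $MASTER: $STATUS"
```

If this prints closed, kubeadm join will time out in preflight exactly like the error in this thread; open the port in the master's firewall (firewalld, iptables, or your cloud security group) before retrying.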

Hi guys, I got the same issue, but below the error message it said "To see the stack trace of this error execute with --v=5 or higher". I did that, and it immediately told me what the issue was and how to solve it. In short, the issue was one that many people probably hit:

  • The token being used had expired, because of the long gap between setting up the original master (and therefore getting the token) and trying to join the nodes. My actual error, along with the suggested fix of 'Use "kubeadm token create" on the control-plane node to create a new valid token', was:

I0206 10:21:00.980188 28423 token.go:191] [discovery] Failed to connect to API Server "10.36.1.184:6443": token id "v8gyxv" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token


I hope this helps people in the future.