Hello,
I'm facing something tricky: I have two Kubernetes machines, one in city A and the other in city B, connected over a VPN.
They are VMs, and I took a snapshot before the worker node join.
My kubeadm, kubectl, and kubelet are all 1.14.0, and I use Flannel as the pod network.
The master has public IP address "A", and the worker node has public IP "B".
The worker node can join the cluster via the master's internal IP, but it is not able to join via the master's public IP.
I captured packets during a join over the internal IP, and the worker node only needs to communicate with the master on port 6443.
My worker node is able to telnet to the master's port 6443 via the master's public IP.
I ran tcpdump and found that there is network traffic between the master (6443) and the worker node, but the worker node's screen just hangs.
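For reference, these are roughly the checks I ran (the address is a placeholder for the master's real public IP, not the literal value):

# from the worker: confirm the API server port is reachable over the public IP
telnet <master-public-IP> 6443
# during the join attempt: watch the traffic to/from the API server port
tcpdump -ni any host <master-public-IP> and port 6443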
Thanks for the reply.
I followed the doc procedure, and when I ran kubeadm join …
it told me:
Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock
So I added --cri-socket /var/run/crio/crio.sock at the end of the kubeadm join command,
but I am still not able to join the cluster.
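The full command was roughly like this (the endpoint, token, and hash are placeholders for my real values; --v=5 only raises the log verbosity so kubeadm prints where it gets stuck):

kubeadm join <master-public-IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket /var/run/crio/crio.sock --v=5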
I really want to know: is there any different setting needed when the worker node joins the cluster via the public IP? Every time before, I joined the cluster using the master's internal IP, but this time it really puzzles me.
thanks ~~~~
Correct, it just hangs there, and the token was obtained from kubeadm token list; it is still valid, not expired.
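For reference, this is roughly how I checked the token on the master (output omitted):

# list existing bootstrap tokens and their expiry
kubeadm token list
# alternatively, mint a fresh token and print the full join command
kubeadm token create --print-join-command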
I followed the procedure on both the master and the worker node. Since the master is already initialized, I am not sure whether that has an impact. This is the kubelet log on the worker:
Apr 03 14:49:21 K8S-Slave kubelet[1887]: F0403 14:49:21.474959 1887 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Apr 03 14:49:21 K8S-Slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Apr 03 14:49:21 K8S-Slave systemd[1]: kubelet.service: Unit entered failed state.
Apr 03 14:49:21 K8S-Slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 03 14:49:31 K8S-Slave systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Apr 03 14:49:31 K8S-Slave systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit kubelet.service has finished shutting down.
Apr 03 14:49:31 K8S-Slave systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit kubelet.service has finished starting up.
-- The start-up result is done.
It seems the worker node is not able to get the file from the master?
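In case it matters, this is roughly what I looked at on the worker. As far as I understand, /var/lib/kubelet/config.yaml is only written by kubeadm join after it downloads the kubelet configuration from the cluster, so the crash loop above may just mean the join never got that far:

# files kubeadm join is expected to write on the worker
ls -l /var/lib/kubelet/config.yaml /etc/kubernetes/kubelet.conf
# clean up the half-finished join before retrying
kubeadm reset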
On the same node, if I change the master IP from the public IP to the private IP, it works, and I am pretty sure the master's port 6443 is open on the public IP.
If it works using the private IP, then there might be some configuration setting with the public IP that needs looking at. I would be hesitant to expose the kube-api to public access, though, unless it's necessary.
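One thing worth checking (a guess on my part, not confirmed by anything above): kubeadm only puts the addresses it knew about at init time into the API server's serving certificate, so if the public IP was never added via --apiserver-cert-extra-sans, TLS connections to that address can be rejected even though the port itself is reachable. A rough check from the worker, assuming openssl is installed and <master-public-IP> is a placeholder:

echo | openssl s_client -connect <master-public-IP>:6443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

If the public IP is missing from that list, the API server certificate would need to be regenerated with the public address included.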