Cluster communication


#1

Hi Folks,

I have a Kubernetes cluster which was deployed via kops. There is one master server and 6 hosts. The master server was rebooted a few days back and a new master was created by kops. However, I am now seeing the following errors in the cluster dump. I'd appreciate any advice on how to resolve them. Because of this, communication between pods is not working as expected.

E0207 05:59:28.845507 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:189: Failed to list *v1.Endpoints: Get https://100.64.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: no route to host

E0204 22:54:14.963385 1 autoscaler_server.go:86] Error while getting cluster status: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: getsockopt: connection refused

Thanks


#2

Kops uses an ASG (Auto Scaling Group) for masters, and I think instances get terminated when you stop them. But maybe a reboot is handled differently? I'm not sure.

What if you take a snapshot of the EBS volumes and then stop the master that you rebooted? Also, are the EBS volumes for etcd (etc.) mounted on the new master?
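As a rough sketch of that check (the tag keys and mount paths below are assumptions based on typical kops defaults, so adjust for your cluster; this needs a configured AWS CLI and a shell on the new master):

```shell
# List the etcd EBS volumes and where they are attached.
# kops usually tags etcd volumes with keys like k8s.io/etcd/main
# and k8s.io/etcd/events -- verify the tags in your account first.
aws ec2 describe-volumes \
  --filters "Name=tag-key,Values=k8s.io/etcd/main" \
  --query 'Volumes[].{Id:VolumeId,State:State,Instance:Attachments[0].InstanceId}' \
  --output table

# On the new master itself, confirm the volumes are actually mounted.
# kops/protokube typically mounts them under /mnt/master-vol-* (an
# assumption; your paths may differ):
lsblk
mount | grep master-vol
```

If the volumes are still attached to the old (stopped) instance, that would explain why the new master can't serve the cluster properly.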

Something that will probably work is to just stop all the masters; the ASG will create them again and properly mount the volumes.
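A sketch of how that could look (the ASG name pattern, instance ID, and instance group name are placeholders/assumptions, not values from your cluster):

```shell
# Find the master ASG that kops created -- kops usually names it
# something like master-<az>.masters.<clustername>:
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[?contains(AutoScalingGroupName, `master`)].AutoScalingGroupName' \
  --output text

# Terminate the master instance without shrinking the group; the ASG
# launches a replacement, and protokube should re-attach and mount the
# etcd volumes on the new instance:
aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id i-0123456789abcdef0 \
  --no-should-decrement-desired-capacity

# Alternatively, let kops do a controlled replacement of the master
# instance group (group name is an example):
kops rolling-update cluster --instance-group master-us-east-1a --yes
```

Terminating (rather than stopping) is the key part: a stopped instance keeps its place in the ASG, while a terminated one gets replaced cleanly.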