I’m running Ubuntu 19.10 in VMware Player with bridged networking. When I install kubeadm with
sudo apt install kubeadm -y
a short time later I lose network connectivity to other hosts on my network (physical hosts, not VMs) and to the internet. I also notice that the file
/etc/cni/net.d/10-flannel.conflist
has been created. That file did not exist immediately after installing kubeadm. Since the kubelet service is running, maybe that service created the file. (Installing kubeadm also installs kubectl and kubelet.) If I remove kubeadm, kubectl, and kubelet and reboot the VM, I have normal network connectivity.
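Roughly, the rollback looks like this (the rm step is probably optional; it just clears the generated config noted above):
sudo apt purge -y kubeadm kubectl kubelet   # remove all three packages
sudo rm -rf /etc/cni/net.d                  # clear the leftover CNI config
sudo reboot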
Does kubeadm not run in Ubuntu 19.10, or is this likely a bug?
Do you mean your Ubuntu VM loses connectivity? Or the host machine?
Just one thing to check is that the IP range used by Docker or K8s does not overlap with your physical network range. Traffic meant for external hosts might be getting routed into the cluster and going nowhere.
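One quick way to see Docker’s default bridge subnet, for example:
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
It’s normally 172.17.0.0/16, which wouldn’t clash with a 192.168.x.x LAN, but the pod network is a separate range worth checking.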
Kind regards,
Stephen
I mean my Ubuntu 19.10 VM loses connectivity. I don’t know about the host OS (Win10).
My other hosts all have 192.168.x.x addresses, so there shouldn’t be any incorrect routing.
I’m really not that familiar with kubeadm, but the docs suggest the default CNI is Calico, which uses 192.168.0.0/16 as the pod network. If that’s right, it might explain the symptoms you are seeing.
When connectivity is not working, you could check the routing table and interfaces on the Ubuntu VM to see if 192.168.x.x traffic is being routed over an interface other than the bridged connection:
ip route
ip addr
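You can also ask the kernel directly which interface it would pick for a given destination (192.168.0.10 here is just a placeholder; substitute the address of one of your physical hosts):
ip route get 192.168.0.10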
It installs flannel as the default (/etc/cni/net.d/10-flannel.conflist), and I haven’t been able to get to the point where I assign a pod network because the network goes dead.
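For what it’s worth, the generated config can be inspected directly to see which plugin and ranges it defines:
cat /etc/cni/net.d/10-flannel.conflist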
Are you following a specific guide on a website?
Or do you have the minimal steps to reproduce?
If so could you share and I will run through on Ubuntu 19.10 and see if I get the same problem?
Kind regards,
Stephen
I followed these instructions. Those instructions did work on multiple Ubuntu 18.04 hosts, and I also got them to work on a new Ubuntu 19.10 VM. The Ubuntu 19.10 VM that I was using must have had some other problem.
I followed the guide on a fresh install of Ubuntu 18.04 server.
I installed the packages without any issues.
Was able to initialize a master node with kubeadm.
Then I applied an overlay network as instructed in the guide. I copied the apply command from the Flannel GitHub repo:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
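(If you want to see the pod CIDR that kubeadm assigned to the node, something like this should show it; the jsonpath assumes a single-node cluster:)
kubectl get nodes -o jsonpath='{.items[0].spec.podCIDR}'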
After the pods started I had an additional network interface, cni0:
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::14d3:9dff:fecc:7c63  prefixlen 64  scopeid 0x20<link>
        ether 16:d3:9d:cc:7c:63  txqueuelen 1000  (Ethernet)
        RX packets 670  bytes 18760 (18.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1116 (1.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:9c:32:f7:b4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.201  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe86:f48a  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:86:f4:8a  txqueuelen 1000  (Ethernet)
        RX packets 270884  bytes 389263526 (389.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 128380  bytes 10209810 (10.2 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
stephen@ubuntu1804:/var/log$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 ens33
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 cni0
That interface is using 192.168.0.1/24, which conflicts with my home network gateway, which is also 192.168.0.1.
So now that VM is having connectivity issues similar to what you described.
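As a stopgap, deleting the conflicting bridge should bring connectivity back until flannel recreates it (assuming nothing else on the VM depends on cni0):
sudo ip link delete cni0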
I think the problem is that the installation guide gives the following:
sudo kubeadm init --pod-network-cidr=192.168.1.90/16
That’s a /16 range, so it includes 192.168.x.x, which is what both of us have for our networks. I think you need to use a different network for --pod-network-cidr, one that doesn’t conflict with your local network.
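For example, Flannel’s kube-flannel.yml defaults its Network to 10.244.0.0/16, so if you stick with Flannel, initializing with that matching range should avoid the overlap:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16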
Kind regards,
Stephen