Hi,
I hope I am not off topic here.
I thought that communication between pods running on different nodes would work out of the box, but unfortunately, for some reason it doesn't.
I have a very basic on-premises cluster of 3 Ubuntu hosts running Kubernetes (1 control-plane node and 2 worker nodes). On this cluster I installed Calico networking with the most basic installation and no custom options (using Helm to install the operator; I am not showing that here for simplicity). Communication between pods running on different nodes works as expected over the default NIC added by Calico.
However, pods running on different nodes cannot communicate with each other over the NIC added by Multus, while pods on the same node can.
As shown in the quick-start guide, I installed the thick version (I also tried the thin one) and added the following resources:
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
      }
    }'
EOF
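For reference, Multus itself was installed from the quick-start daemonset manifest (thick variant), and the commands below are roughly how I checked that the definition is actually picked up. The manifest path is quoted from memory, so double-check it against the Multus repo:

# Multus install (thick variant), as per the quick-start guide - path from memory
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml

# confirm the NetworkAttachmentDefinition exists (net-attach-def is the CRD short name)
kubectl get network-attachment-definitions
kubectl describe net-attach-def macvlan-conf

# once the pods below are running, the k8s.v1.cni.cncf.io annotations list the attached networks
kubectl get pod samplepod1 -o yaml | grep -A15 'k8s.v1.cni.cncf.io'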
And the pods:
# This first pod is only there to avoid samplepod1 and samplepod2 getting the same IP
# (host-local allocates per node, so both would otherwise receive the first address of the range).
# After all 3 pods were running and had their IPs assigned, I deleted samplepoddummy.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepoddummy
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  nodeSelector:
    kubernetes.io/hostname: "worker-01"
  containers:
  - name: samplepoddummy
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod1
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  nodeSelector:
    kubernetes.io/hostname: "worker-01"
  containers:
  - name: samplepod1
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod2
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  nodeSelector:
    kubernetes.io/hostname: "worker-02"
  containers:
  - name: samplepod2
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
Trying to ping the Multus-assigned IP of samplepod2 from samplepod1 does not work (because they are on different nodes).
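Concretely, the check was roughly the following (192.168.2.226 is the net1 address of samplepod2 shown in the ip addr output below); the pings get no replies:

# ping samplepod2 (on worker-02) over its net1 address, from samplepod1 (on worker-01)
kubectl exec samplepod1 -- ping -c 3 192.168.2.226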
Am I missing anything to get this very simple setup working?
Here is the output of ip addr:
root@control-plane-01:~/whereabouts# kubectl exec samplepod1 -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1430 qdisc noqueue state UP
    link/ether 3a:ad:d0:b3:aa:56 brd ff:ff:ff:ff:ff:ff
    inet 10.244.171.6/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::38ad:d0ff:feb3:aa56/64 scope link
       valid_lft forever preferred_lft forever
5: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 72:9f:47:9d:01:91 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.227/28 brd 192.168.2.239 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::709f:47ff:fe9d:191/64 scope link
       valid_lft forever preferred_lft forever
root@control-plane-01:~/whereabouts# kubectl exec samplepod2 -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1430 qdisc noqueue state UP
    link/ether 82:b9:06:bf:77:bd brd ff:ff:ff:ff:ff:ff
    inet 10.244.37.196/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::80b9:6ff:febf:77bd/64 scope link
       valid_lft forever preferred_lft forever
5: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 62:43:d9:7d:72:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.226/28 brd 192.168.2.239 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::6043:d9ff:fe7d:72eb/64 scope link
       valid_lft forever preferred_lft forever
I also tried using Flannel networking, but hit the same issue, unfortunately. I also tried assigning the IPs using whereabouts, but that made no difference (the IPs were assigned uniquely across the cluster, of course, but communication between pods on different nodes still did not work).
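For completeness, the whereabouts attempt simply swapped the ipam section of the NetworkAttachmentDefinition for the whereabouts plugin. Reconstructed from memory, it looked roughly like this (the name macvlan-whereabouts is just a placeholder here, and the 192.168.2.224/28 range matches the addresses visible in the ip addr output above):

# rough reconstruction of the whereabouts variant, not the exact manifest I applied
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.224/28"
      }
    }'
EOF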
The last thing to mention (I don't know if this makes any difference) is that I am using Hetzner VPSs.
There is a corresponding GitHub issue, but it hasn't received any attention yet.