Cluster information:
Kubernetes version: 1.33.7
Cloud being used: bare-metal
Installation method: kubespray v2.29
Host OS: Rocky Linux 9
CNI and version: Calico v3.30.5
CRI and version: containerd v2.1.5
I am learning Kubernetes; my PoC is set up like this:
- All my nodes are VMs.
- A deployer VM, used to deploy the cluster.
- A 6-node cluster: 3 control-plane nodes and 3 workers.
- All nodes sit behind a firewall (OPNsense) that plays the role of external gateway.
- The firewall sees all the networks mentioned below.
- Each node has two NICs, eth0 and eth1, and I partitioned my network like this:
- eth0: 10.70.0.0/16, for cluster management; kubectl uses this network.
- eth0.80: VLAN 80, 10.80.0.0/16, used to publish services externally with load balancers such as kube-vip; this network carries the default gateway (the firewall).
- eth1.90: VLAN 90, 10.90.0.0/16, for pod communication with Calico/VXLAN.
- eth1.100: VLAN 100, 10.100.0.0/16, to be used for the Rook Ceph public network later.
- eth1.110: VLAN 110, 10.110.0.0/16, to be used for the Rook Ceph cluster network later.
- All cluster nodes have IPs on each network and can ping each other on every one.
As mentioned, only the external network (eth0.80) has a default gateway; the other networks have no routing for now.
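To make sure the plan holds together, I wrote a quick sanity check that none of these ranges collide with each other or with the pod/service CIDRs. The pod and service CIDRs below (10.233.64.0/18 and 10.233.0.0/18) are assumptions based on kubespray's defaults (`kube_pods_subnet` / `kube_service_addresses` in group_vars); adjust them if yours differ:

```python
import ipaddress

# The five node networks from my layout, plus kubespray's default pod and
# service CIDRs (assumptions -- check kube_pods_subnet and
# kube_service_addresses in your group_vars).
networks = {
    "mgmt (eth0)": "10.70.0.0/16",
    "external (eth0.80)": "10.80.0.0/16",
    "pod underlay (eth1.90)": "10.90.0.0/16",
    "ceph public (eth1.100)": "10.100.0.0/16",
    "ceph cluster (eth1.110)": "10.110.0.0/16",
    "pod CIDR (kubespray default)": "10.233.64.0/18",
    "service CIDR (kubespray default)": "10.233.0.0/18",
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in networks.items()}
names = list(nets)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
overlaps = [(a, b) for a, b in pairs if nets[a].overlaps(nets[b])]
print("overlaps:", overlaps)  # prints "overlaps: []" when all ranges are disjoint
```

In my case everything is disjoint, so at least the addressing plan itself is consistent.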
The cluster was deployed and it seems to be working, but…
- Is my network topology correct and complete?
- Do pods need to contact the Kubernetes API? If so, should I configure routing between the management network and the pod network?
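From what I have read so far, pods reach the API server through the built-in `kubernetes` ClusterIP Service rather than over the management network directly, and that Service takes the first host address of the service CIDR. A small sketch of how I computed the VIP I would expect (again assuming kubespray's default service CIDR 10.233.0.0/18):

```python
import ipaddress

# Kubespray's default service CIDR (assumption -- kube_service_addresses
# in the inventory group_vars).
service_cidr = ipaddress.ip_network("10.233.0.0/18")

# The built-in `kubernetes` Service is assigned the first host address of
# the service range; pods talk to the API via this VIP.
api_cluster_ip = next(service_cidr.hosts())
print(api_cluster_ip)  # 10.233.0.1
```

I am not sure whether that means the management network still needs a route to the pod network, which is really the heart of my question.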
I'm really having a hard time understanding the networking part of Kubernetes…
Regards.