Cluster information:
Kubernetes version: 1..3.7
Cloud being used: bare metal
Installation method: kubespray
Host OS: Rocky Linux 9
CNI and version: Calico v3.30.5 (VXLAN); kube-proxy in IPVS mode, later switched to nftables, with strictARP enabled
CRI and version: containerd v2.1.5
IP forwarding was enabled
rp_filter was disabled
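For reference, this is roughly how those two settings are applied persistently on the nodes (a sketch; the drop-in file name is just an example):

```shell
# Enable IPv4 forwarding and disable strict reverse-path filtering,
# persisted via a sysctl drop-in so it survives reboots
cat <<'EOF' | sudo tee /etc/sysctl.d/99-k8s-routing.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
EOF
sudo sysctl --system
```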
This is the story:
I am learning Kubernetes, but when building my POCs I try to stay close to a real-life deployment.
For example, my POC has:
- A firewall (OPNsense) that covers the whole platform; it is the entry point of the POC.
- 1 VM used as a deployer (Ansible) machine.
- 3 VMs as control plane nodes.
- 3 VMs as workers, which also form the Rook-Ceph cluster.
- Several networks:
  - A management network, where admins access the platform with ssh and kubectl. In my mind this could also be reachable from the LAN.
  - A pod-to-pod network, used by Calico VXLAN and IPVS.
  - A Ceph public network and a Ceph cluster network for Rook-Ceph.
  - An external network used to publish pod services, where kube-vip and the ingress reside to expose applications.
My problem is a routing problem, and I am by no means a network engineer.
If I put the default gateway on the mgmt network, pod services become unreachable because of asymmetric routing: traffic arrives on the external interface but the replies leave via the mgmt gateway.
If I put the default gateway on the external network, the nodes become unreachable for the same reason, with the directions reversed.
I tried policy-based routing (PBR) to solve this. It worked for a while, then I started seeing weird behaviors.
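For context, a typical PBR setup on a node looks like this (a minimal sketch with made-up subnets, gateways, and interface names, not my exact configuration): replies sourced from the external address are forced back out through the external gateway, while the main table keeps its default route on the mgmt side.

```shell
# Hypothetical addressing:
#   mgmt     10.0.10.0/24 (gw 10.0.10.1, default route in the main table)
#   external 10.0.20.0/24 (gw 10.0.20.1, interface eth1)

# Dedicated routing table for the external network
echo "100 external" | sudo tee -a /etc/iproute2/rt_tables

# Default route for that table via the external gateway
sudo ip route add default via 10.0.20.1 dev eth1 table external

# Packets sourced from the external subnet use that table,
# so replies leave through the same interface they arrived on
sudo ip rule add from 10.0.20.0/24 lookup external
```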
Without going into too much detail, my questions are:
- Is trying to do this overkill, i.e. wanting to separate the mgmt flow from the external flow? I know that putting the default gateway on the external network is much simpler, and that is what I am using now, but I would like to know the best practice.
- If the best practice is to separate the routing of the mgmt and external networks, what is the best way to do it?
Regards.