Container has net.ipv4.ip_forward enabled, but it is disabled when run

I’ve noticed when working with WireGuard and OpenVPN that containers which have net.ipv4.ip_forward enabled by default end up with it disabled once they are run in the cluster.

To be more specific: I noticed ip_forward was disabled in an OpenVPN container I was using, so I created a Dockerfile with a ‘FROM ’ line for that image, added ‘RUN sysctl net.ipv4.ip_forward’, and ran ‘docker build .’; the build output showed net.ipv4.ip_forward was enabled.
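Concretely, the check looked something like this (the base image here is a placeholder; substitute whatever OpenVPN image is actually in use):

# Build-time check of the image's default ip_forward setting; the
# placeholder base image stands in for the real OpenVPN image.
FROM your-openvpn-image
RUN sysctl net.ipv4.ip_forward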

So, if it is enabled by default, why is it getting disabled? After researching, I think Kubernetes is disabling it.

With WireGuard and OpenVPN, once the pod is running I can usually exec in and enable ip_forward, or run a command at container startup to do it. But I wonder … is the intended way to use taints and tolerations to target a particular node, allow the ‘net.ipv4.ip_forward’ sysctl on that node via the kubelet config, and then request it in the deployment’s securityContext? And if all that is done, will it no longer be disabled by default, so there is no need for an init container or whatnot to re-enable it?

I think yes, this is the right idea, but I haven’t been able to get it to work. Could someone help with the right technique for working with those allowedUnsafeSysctls? Also, can someone confirm it really is Kubernetes that’s disabling ip_forward even though it came enabled in the container?
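For reference, here is roughly what I have been attempting, pieced together from the sysctls documentation. Treat it as a sketch rather than a known-good config; the file path below is the kubeadm default and may differ on other setups.

# /var/lib/kubelet/config.yaml on the target node (the kubeadm-managed
# KubeletConfiguration): whitelist the unsafe sysctl, then restart the kubelet.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
- net.ipv4.ip_forward

# Pod template in the deployment: request the sysctl at the pod level.
# net.ipv4.ip_forward is namespaced but not on the safe list, hence the
# kubelet whitelist above.
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_forward
      value: "1"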

Cluster information:

Kubernetes version: v1.25.3
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: CentOS Stream 9
CNI and version: Calico v3.23.3
CRI and version: containerd://1.6.8

Using a privileged init container works to set ip_forward, but I can’t help but feel like it’s being disabled for a reason and that there is a better way to be doing this.

  initContainers:
  # Privileged init container that turns ip_forward back on before the
  # main container starts.
  - name: sysctl
    image: busybox:1.29
    command:
    - /bin/sh
    args:
    - -c
    - sysctl -w net.ipv4.ip_forward=1
    securityContext:
      privileged: true
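And for the node-targeting half of the idea, this is the taint/toleration pairing I had in mind; the taint key and value are placeholders I made up, not anything Kubernetes prescribes:

# Taint the node whose kubelet allows the unsafe sysctl, e.g.:
#   kubectl taint nodes <node-name> sysctls=unsafe:NoSchedule
# ...then tolerate it in the deployment's pod template:
spec:
  tolerations:
  - key: sysctls
    operator: Equal
    value: unsafe
    effect: NoSchedule

If the allowedUnsafeSysctls route above is the supported way to do this, I’d much rather use it than keep the privileged init container.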