VXLAN encapsulation


I have deployed a Kubernetes cluster with MicroK8s v1.20, which ships with Calico v3.13.2.
All my nodes are on the same physical subnet, so I would like to avoid the encapsulation overhead of VXLAN.

I set `CALICO_IPV4POOL_VXLAN` to `Never` in `/var/snap/microk8s/current/args/cni-network/cni.yaml`, then restarted all calico-node pods:

```shell
kubectl delete pod -n kube-system -l k8s-app=calico-node
```

and also restarted the daemonset:

```shell
kubectl rollout restart daemonset calico-node -n kube-system
```
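After a restart like the above, one way to confirm that the new value actually reached the running pods is to read the environment variable back out of the pod spec. This is a sketch; the container name `calico-node` matches the standard Calico manifest, but verify it against your deployment:

```shell
# Print the CALICO_IPV4POOL_VXLAN value seen by each calico-node container
# (container name "calico-node" is the usual one in the Calico manifest)
kubectl get pods -n kube-system -l k8s-app=calico-node \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[?(@.name=="calico-node")].env[?(@.name=="CALICO_IPV4POOL_VXLAN")].value}{"\n"}{end}'
```

If the output still shows the old value, the daemonset spec itself was not updated by the change to `cni.yaml`.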

If I look at the description of the calico-node pods and of the daemonset, I see:

Is this the correct way to set this parameter? How can I check which encapsulation mode is actually in use, and trace the packet flow in my network?

My ultimate goal is to improve pod-to-pod networking performance.
Unfortunately, even after changing the VXLAN setting, ping times between pods are still 2-3x those of bare metal.
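For reference, the kind of measurement behind those numbers can be reproduced roughly as follows. The pod name `pinger`, the `busybox` image, and the target pod are illustrative placeholders, not part of the original setup:

```shell
# Start a throwaway pod to ping from (names/images are illustrative)
kubectl run pinger --image=busybox --restart=Never -- sleep 3600

# Grab another pod's IP (substitute a real pod name for <target-pod>)
POD_IP=$(kubectl get pod <target-pod> -o jsonpath='{.status.podIP}')

# Measure pod-to-pod round-trip time and compare against node-to-node ping
kubectl exec pinger -- ping -c 10 "$POD_IP"
```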

Any pointers for addressing this issue, and for improving networking in general, are highly appreciated. Thank you.