What could be the reason for this periodic drop in throughput in my application when deployed on Kubernetes?

I have multiple microservices talking to each other over the network. When these microservices are deployed on Kubernetes, the application experiences a periodic drop in throughput during load testing, in which I pin one of the microservices to a single CPU core and saturate it to 100%. Note that all the pods are on the same node. The time series throughput plot is as follows:

[Time series throughput plot of the above setup, load test run for 10 min]
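
For reference, a minimal sketch of how such pinning can be done on the host with taskset (the binary name `my-service` and the core index are placeholders, not my exact setup):

```bash
# Start the service pinned to a single CPU core (core 2 here as a placeholder).
taskset -c 2 ./my-service &

# Verify the effective CPU affinity of the running process.
taskset -cp "$(pgrep -f my-service)"
```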

I have tried 3 setups:

  1. Running the microservices on bare metal, communicating over localhost
  2. Running the microservices in different pods with host networking, all pods on the same node
  3. Running the microservices in different pods without host networking, all pods on the same node (a minimal spec sketch of the difference between setups 2 and 3 follows this list)
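
Since setups 2 and 3 differ only in the `hostNetwork` flag, here is a minimal pod spec sketch of that difference (pod name, node name, and image are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: service-a            # placeholder name
spec:
  hostNetwork: true          # setup 2; omit this line (defaults to false) for setup 3
  nodeName: my-node          # placeholder: keeps all pods on the same node
  containers:
  - name: service-a
    image: my-registry/service-a:latest   # placeholder image
EOF
```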

The throughput is highest in the first case. In the second case it is about 95% of the first, which is acceptable. But in the third case I see a periodic drop in throughput every few seconds.

What could be the reason for this? Is some queue filling up, or is it a configuration issue?
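
In case it helps narrow this down, these are the host-side counters that can be watched while the test runs (a sketch assuming standard Linux tooling and Calico's `cali*` veth naming; `eth0` is a placeholder for the real NIC):

```bash
# Per-CPU softirq backlog drops (second column is the drop counter).
cat /proc/net/softnet_stat

# Conntrack table pressure; only the pod-network case (setup 3) goes through NAT.
conntrack -S
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

# Drops on the physical NIC and on the pods' Calico veth interfaces.
ethtool -S eth0 | grep -i drop
ip -s link show | grep -A 5 'cali'
```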

Note: The microservices are simple client-server applications built in C++ with cpprestsdk, using Redis as the database. Their container images use Ubuntu as the base image.

Cluster information:

Kubernetes version: 1.26.3
Cloud being used: Bare-Metal
Installation method: kubeadm
Host OS: Ubuntu 20.04
CNI: Calico