I’m new to Kubernetes, and I’m trying to verify that performance scales roughly linearly as I add pods to a deployment, as I would expect. My performance metric is simply the number of req/sec the deployment can handle, measured with both wrk and Locust. I’m testing against the plain nginx image, so as to exclude factors like a database; for now I just want to see whether, e.g., 3 pods can handle roughly 3x the requests a single pod can.
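For reference, the test loop looks roughly like this (the deployment name, replica count, and wrk flags are just illustrative, not my exact setup):

```shell
# Scale the deployment to N replicas, then re-run the same load test.
# "nginx" is only an example name for the deployment.
kubectl scale deployment nginx --replicas=3
kubectl rollout status deployment nginx

# 4 threads, 100 open connections, 30 seconds against the service endpoint
wrk -t4 -c100 -d30s http://<service-ip>/
```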
The problem is that I get roughly the same results regardless of whether the deployment has 1 pod or more. It’s odd because when I run the tests, the logs show that traffic is load balanced across the pods as expected, so it’s not that all the requests go to a single pod. Any ideas what the bottleneck (or the problem in general) could be?
The cluster has 3 nodes from Hetzner Cloud with 4 cores and 16GB of RAM each, deployed with Rancher 2.2.1 (though I don’t think that makes any difference).
Thanks in advance!