Performance of deployment stays the same regardless of the number of pods

Hi all!

I’m new to Kubernetes, and I’m currently trying to verify the nearly linear increase in performance that I would expect as I add pods to a deployment. As the performance metric I use the simple number of req/sec that the deployment can handle, measured with both wrk and Locust. For testing I’m using the plain nginx image, so as to exclude factors like a database, since for now I just want to see whether e.g. 3 pods can handle 3x the requests a single pod can handle.
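For reference, the req/sec figure that tools like wrk and Locust report boils down to completed requests divided by wall-clock time. A minimal stdlib sketch of that measurement (the URL, duration, and thread count are placeholders, and real tools are far more careful about timing and connection reuse):

```python
# Rough sketch of what a load generator measures: how many requests
# complete in a fixed window, divided by the window length.
import threading
import time
import urllib.request


def measure_rps(url, duration=5.0, workers=8):
    """Hit `url` from `workers` threads for ~`duration` seconds
    and return the achieved requests per second."""
    done = 0
    lock = threading.Lock()
    stop = time.monotonic() + duration

    def worker():
        nonlocal done
        while time.monotonic() < stop:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    resp.read()
            except OSError:
                continue  # count only successful requests
            with lock:
                done += 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done / duration
```

If the generator (or its network link) saturates before the pods do, this number stops growing no matter how many pods are behind the Service.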

The problem is that the results I get are roughly the same whether the deployment has 1 pod or more. It’s weird, because when I run the tests I can see from the logs that the traffic is load balanced across the pods as expected, so it’s not that all the requests go to a single pod or something like that. Any ideas what the bottleneck, or the problem in general, could be?

The cluster has 3 nodes on Hetzner Cloud, each with 4 cores and 16 GB of RAM, and I deployed it with Rancher 2.2.1 (I don’t think that makes any difference, though).

Thanks in advance!

You are hitting nginx and it serves a static file, then?

You are probably not making enough requests to overload one nginx pod. Nginx is very efficient at serving static files. Also, even if you try to overload it, your network may not be fast enough to do so.

Have you checked metrics when running with 1 pod? If you can’t overload 1 pod, you won’t be able to overload 3 :-D.

I’d try either something that is not so efficient, or a hosted load generator like flood.io, etc.

In any case, unless your real-world scenario is serving static files from nginx (not sure if it is), this load test won’t help you. Nginx scales very well in that case; it’s difficult to overload and won’t tell you anything about your app or anything else :slight_smile:

Hi, thanks for your reply. I see about nginx. However, I am now testing with the example Guestbook application, so it’s not just a static file, and the behavior is similar: not much difference in the req/sec as I add pods. :frowning:

The requests per second will only change if the pod is overloaded.

Like, if one pod can handle 20 req/second and you send 10 requests per second, adding more pods won’t show any increase, because you are still sending only 10 requests per second (even though the deployment is now capable of serving 60 req/second).

Try to overload one first, IMHO :slight_smile:
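The point above can be put in one line: observed throughput is the minimum of the offered load and the total capacity. A tiny model with illustrative numbers (20 req/s per pod is just the example figure from above, not a real nginx limit):

```python
def observed_rps(offered_rps, pods, per_pod_capacity):
    """Observed throughput saturates at pods * per_pod_capacity;
    below that, it simply equals the offered load."""
    return min(offered_rps, pods * per_pod_capacity)


# Sending 10 req/s when one pod can handle 20: adding pods changes nothing.
print(observed_rps(10, 1, 20))  # 10
print(observed_rps(10, 3, 20))  # still 10

# Only once the load exceeds one pod's capacity does scaling show up.
print(observed_rps(50, 1, 20))  # 20 (single pod saturated)
print(observed_rps(50, 3, 20))  # 50
```

So the scaling effect is only visible once the load generator pushes past a single pod’s capacity.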

Yeah, that makes sense. I have now run tests simulating many more users with Locust, and with both static and dynamic content I could see that latency and failures increase as the number of users grows, unless I scale up the pods. Thanks for your help!

Great! :slight_smile: