Question about network traffic patterns (characteristics) in K8S applications

Hi everyone (●’◡’●)

Question: I am curious about the traffic patterns between pods (for example, the characteristics of a TCP connection from a service in container A to another service in container B). Will long-lived or short-lived TCP connections dominate pod-to-pod communication?

Related Question: I find it very hard to find such studies, and I also cannot find any open traffic datasets measured from real deployed K8S applications. o((>ω< ))o

What I can find for now are open-source microservices demo applications with a load generator.

My Experiment: I have deployed the Google Online Boutique microservices demo in my K8S cluster with 1 master and 2 workers. It comes with a load generator, so I have measured its traffic patterns. I have the following findings:

  • The frontend service uses one long-lived TCP connection to communicate with each backend service. This connection is initiated when I start a new replica of the frontend, and it was never closed (during my short experiment).
  • The same pattern also shows up between the CartService and its database (Redis).
  • On the other hand, the CheckoutService always opens 9 new short-lived TCP connections when the load generator triggers a checkout action on the frontend. (There is no direct communication between the load generator, i.e., the user, and the CheckoutService.)

I am wondering: is this a typical pattern in K8S applications? Will real-world K8S applications show the same pattern?

Any comments would be greatly appreciated (●’◡’●)

Hi Kaiyu,

I don’t think there is any specific network traffic pattern on Kubernetes.
It purely depends on the workload you are putting on it.

If the workload is a web server with a DB backend, then you are most likely going to have long-lived connections from the web server layer to the database layer.

Kubernetes, pods, and containers aren’t really imposing any specific traffic pattern. At the end of the day, it’s still running the same processes you would run outside of Kubernetes.

I think this question is more about microservices vs non-microservices.

The code you are using has a microservice architecture:

It looks like the CheckoutService queries a number of other services in order to process a checkout.
I am assuming these are short-lived TCP connections, since they are most likely HTTP requests to those services.

I think microservices will most likely have:

  • A long-lived connection between a service and its corresponding database.
  • Multiple short-lived connections (probably HTTP) between services while a request is being processed.

Kind regards,

Hi Stephen,

Thanks for your reply! I recently measured the traffic pattern of another application, the Sock-Shop-Demo. It shows exactly the pattern you described:

  • Each database maintains a long-lived TCP connection.
  • Requests between the back-end services use short-lived TCP connections.

Then I have a follow-up question. Since the back-end services communicate with each other frequently, is there a specific reason they do not just maintain one long-lived TCP connection to each other?

For example, in the Google-Boutique-Demo, although the CheckoutService creates many short-lived TCP connections to send requests, the frontend service actually uses only one long-lived TCP connection to send requests, no matter how many users it serves. So it seems possible to achieve this.

Cheers ~( ̄▽ ̄)~*

Hi Kaiyu,

Just highlighting again that nothing is enforced by Kubernetes.
This totally depends on the implementation of whatever framework/protocol you are using.

If communication is HTTP-based, then there will typically be short-lived HTTP requests between components.
I haven’t used something like gRPC, but I think it uses long-lived connections.

Kind regards,

Hi Stephen,

Thanks for your response. I think my curiosity should now be narrowed down to “why do some microservices in K8s prefer short-lived TCP connections over long-lived TCP connections?” It seems to be a question about the design philosophy of containerized applications.

BTW, your guess about gRPC is not quite accurate. Based on my measurements, all connections in the Google-Boutique-Demo are carried by gRPC, but short-lived TCP connections are still the majority.