How does load balancing work

Hi there,

I tried to read the docs as much as I could, but since I'm new to k8s I don't understand some of the words and phrases, so I have some questions and some how-tos. I'm sorry if these amateur questions bother you :(

  1. Does k8s automatically manage load balancing? Suppose I have one master and two worker nodes, I write a simple docker-compose.yml with the contents below and run kompose convert, and the average load of the master is 8, the first worker is 9, and the second worker is 3.
  2. If I then run kubectl apply -k ., does this run only on the master? Or would it run on the node that has the most free resources?
  3. Now suppose I have two master and two worker nodes. Does the answer change?
version: "3"
services:
  test:
    image: nginx

Services are described here: Service | Kubernetes

If you use LoadBalancer as the service type, I think that is managed by the Cloud Controller Manager (CCM) provided by the cloud platform you're on. They typically spin up a cloud load balancer.
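For reference, a LoadBalancer service is just declared in a manifest; the CCM is what reacts to it. A minimal sketch (the name, selector label, and port are assumptions chosen to match the nginx compose example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  # Without a CCM installed, this Service will sit with EXTERNAL-IP <pending>.
  type: LoadBalancer
  selector:
    io.kompose.service: test
  ports:
    - port: 80        # port the load balancer exposes
      targetPort: 80  # port the nginx container listens on
```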


Thanks my friend.

You said "I think that is managed by the Cloud Controller Manager (CCM) provided by the cloud platform you're on", but I'm not on any cloud platform. I'm currently on VirtualBox on my laptop, and I'll soon be on bare-metal servers.

So since I'm on my own server, does your answer change? Or in this case does the cloud platform mean my server itself?

It’s further explained here what a CCM is: Cloud Controller Manager | Kubernetes

You probably don’t have one installed.

You can roll your own solution; perhaps following something like an operator pattern and watching for new load-balancer-type services; there is further documentation with boilerplate here.

OR you can just completely ignore load-balancer-type services and use an ingress controller like nginx or Traefik. If you go the Traefik path, it gives you an entirely additional custom resource to configure name-based virtual hosts and path routing in general.
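To illustrate the ingress route, with an nginx ingress controller installed, a standard Ingress resource handles host and path routing. A sketch, assuming a Service named test on port 80 and a made-up hostname:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  ingressClassName: nginx  # assumes the nginx ingress controller is installed
  rules:
    - host: test.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test   # assumed Service name
                port:
                  number: 80
```

The ingress controller itself is usually exposed once (via NodePort or a host port on bare metal), and then fans traffic out to any number of backend services.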

Just to illustrate what a CCM looks like, here are a few from some cloud providers.
AWS
GCP
Azure
Linode
DigitalOcean
Vultr


Thanks, I think what I meant is the Kubernetes Scheduler | Kubernetes that is mentioned there.

Given the explanations I gave, am I right that what I mean is the Scheduler?

Got it, you’re trying to figure out where the test pod ends up being scheduled. Just a side note, if you need to control what node a pod runs on, check out this doc.
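As a quick illustration of controlling placement, a nodeSelector pins a pod to nodes carrying a given label. A sketch; the hostname label value is an assumption, so check yours with kubectl get nodes --show-labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  nodeSelector:
    # kubernetes.io/hostname is a well-known node label;
    # "worker-1" is a hypothetical node name.
    kubernetes.io/hostname: worker-1
  containers:
    - name: test
      image: nginx
```

Without a nodeSelector (or affinity rules), the scheduler is free to pick any node that passes its filtering and scoring steps.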

Regarding the scheduling, I can’t find an answer on how to directly determine where a pod will end up, but I found this explanation.

If you're just consuming a cluster, it seems the practical approach is knowing how to control node selection and otherwise trusting that the Kubernetes scheduler is making good choices.

You can determine the distribution from the list of pods with kubectl get pods -A -o wide though. So if something looks fishy about the scheduling, it’s at least observable.

Notes From Using kompose

These are just notes I took to show what the k8s YAML ends up being.

Ran kompose convert in Docker Container

[localhost]$ git clone https://github.com/kubernetes/kompose.git
Cloning into 'kompose'...
...
Receiving objects: 100% (19219/19219), 24.13 MiB | 1.76 MiB/s, done.
Resolving deltas: 100% (10360/10360), done.

[localhost]$ cd kompose/

[localhost]$ docker build -t kompose-test:latest . 
[localhost]$ docker run --rm -it kompose-test:latest sh

/ # vi docker-compose.yaml
/ # cat docker-compose.yaml
version: "3"
services:
  test:
    image: nginx

/ # kompose convert
INFO Kubernetes file "test-deployment.yaml" created 


/ # cat test-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: test
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
        - image: nginx
          name: test
          resources: {}
      restartPolicy: Always
status: {}

Thanks, I read both articles and I’ve learned much from them.

Well, you should likely consider looking at an ingress controller for load balancing within the cluster, but you can use Netris to automatically create load balancers external to the cluster and distribute traffic across multiple ingresses, or multiple pods.

The load balancer can be configured via a CRD.