Hi,
When I create a LoadBalancer Service that serves pods spread across workers, does that Service exist on those workers as well?
Behind each endpoint is a pod, right? I mean Service -> Endpoints -> Pod.
If I have 4 pods, then the Endpoints object must have 4 IPs.
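To make the question concrete, here is a sketch of what such an Endpoints object might look like (the names mirror the Service below; the pod IPs are made up for illustration and are filled in by Kubernetes from the ready pods matching the Service's selector):

```yaml
# Hypothetical Endpoints object for a Service selecting 4 ready pods.
# The addresses below are illustrative only.
kind: Endpoints
apiVersion: v1
metadata:
  name: hello-world
  namespace: example
subsets:
  - addresses:
      - ip: 10.244.1.10
      - ip: 10.244.1.11
      - ip: 10.244.2.12
      - ip: 10.244.3.13
    ports:
      - name: http
        port: 8080
        protocol: TCP
```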
Load balancers in Kubernetes are not a simple subject. It depends on the cloud environment, how the specific Service is configured, how the nodes are configured, and more.
If you have a specific question, we can try to answer, but if you just want to learn how it all works, I would ask that you start with one of the many recorded talks, from myself or many other people, which you can find on YouTube.
Just to guide me correctly:
LoadBalancer and Ingress services run outside the Kubernetes cluster (outside the nodes); they run in the cloud or on-premises, right?
For example:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: hello-world
  namespace: example
  annotations:
    service.beta.kubernetes.io/brightbox-load-balancer-healthcheck-request: /
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: web
```
Running `kubectl apply -f` on the YAML above creates a LoadBalancer Service.
Should I use a cloud controller, and will that create the load balancer outside the nodes?
Will the Service run on the control plane?
Likewise, an ingress controller creates the Ingress. Where does that run, outside the nodes?
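Assuming the manifest above is saved as `service.yaml`, a minimal session might look like this (a sketch; the exact output columns depend on your kubectl version and environment):

```shell
# Apply the Service manifest (assumes it is saved as service.yaml).
kubectl apply -f service.yaml

# On a cloud provider with a cloud-controller-manager, the Service's
# EXTERNAL-IP shows <pending> until the provider provisions a load
# balancer and reports its address back to the API server.
kubectl get service hello-world -n example --watch
```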
> LoadBalancer and Ingress services run outside the Kubernetes cluster (outside the nodes)
Not necessarily. It depends on which environment you are in and how it is configured. I’m not trying to be difficult, but the answer really is “it depends”.
If you use Google Cloud, the default Service LB controller is to use cloud load-balancers, with nothing special on the nodes.
If you use Kubernetes on-prem, there are different LB implementations depending on how your network is managed. For example, some use BGP, which requires an on-node agent.
If you use Google Cloud, the default Ingress controller is to use cloud load-balancers, with nothing special on the nodes.
But you can CHOOSE to run nginx or some other proxy in your cluster, which is configured differently.
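For the in-cluster proxy case, a sketch of an Ingress that such a controller would serve (this assumes an nginx-style ingress controller is installed in the cluster; the class name and hostname are assumptions for illustration):

```yaml
# Hypothetical Ingress handled by an in-cluster nginx controller.
# The controller's proxy pods run on the nodes themselves; traffic
# reaches them through their own Service (often type LoadBalancer
# or NodePort), then gets routed to the backend Service below.
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-world
  namespace: example
spec:
  ingressClassName: nginx   # assumes the ingress-nginx controller
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80
```

In this configuration the load balancing happens inside the cluster, on the nodes, which is exactly why the answer to "does it run outside the nodes?" is "it depends".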
I think you’re looking for a single answer, but that’s not how this works. In order to work in so many different environments, Kubernetes has a lot of flexibility at this very low level.
Again, I would point you toward recorded talks, mine or others, which go into the details. There's also LITERALLY a book on this: Networking and Kubernetes.
Tim