Ingress Controller - inside or outside K8s cluster?

Hello all,

I would like to know what you are doing regarding the Ingress Controller.

It’s possible to run them outside the cluster or inside the cluster, and each approach has its pros and cons.
For example, running them as Pods in the cluster eases deployment and scalability. However, it consumes worker node resources.

What are you doing in your production environments? Are you using dedicated servers/VMs for the ingress controller (like 2 or 3 hosts with VRRP or something like that)?

This topic is more about discussing the pros and cons of each solution, as needs differ, rather than imposing a single solution.

Regards all !

On cloud providers with an integrated load balancing option, like the ALB ingress controller on AWS (and probably something similar on Google), I think offloading that to the provider makes tons of sense (you can avoid having SSL certificates in the cluster, etc.).

I have done that, although I used ALB/ELB with a NodePort Service (instead of an Ingress, because the ALB ingress controller was not ready, and before that because Service annotations for type LoadBalancer were a mess; just using NodePort and handling the load balancer from Terraform was easier).
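In case it helps picture that setup, here is a rough sketch of the idea (names and ports are made up, not my actual config): the controller is exposed via a NodePort Service, and the ALB/ELB managed from Terraform just targets that port on the worker nodes.

```yaml
# Hypothetical sketch: expose an in-cluster ingress controller on a fixed
# NodePort, and point an externally managed ALB/ELB (e.g. created from
# Terraform) at that port on every worker node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx        # matches the controller Pods (placeholder label)
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080          # the ALB/ELB target group points here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```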

For providers where you don’t have a load balancer service, in-cluster makes more sense to me, as you benefit from Kubernetes features (like scaling, availability, etc.). And that (I’m using Contour) is working quite well in the clusters :slight_smile:
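Just to illustrate the “Kubernetes features” point (purely a sketch, with placeholder names and image): running the controller as an ordinary Deployment gives you replicas and rescheduling for free, and anti-affinity spreads it across nodes.

```yaml
# Hypothetical sketch: an in-cluster ingress controller as a plain Deployment,
# relying on Kubernetes for replication and for spreading Pods across nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
spec:
  replicas: 3                      # scale like any other workload
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      affinity:
        podAntiAffinity:           # keep replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ingress-controller
              topologyKey: kubernetes.io/hostname
      containers:
        - name: controller
          image: example/ingress-controller:latest   # placeholder image
          ports:
            - containerPort: 80
            - containerPort: 443
```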

Thank you very much.

Indeed, offloading it to the cloud provider seems to me the best solution if you use Google/Azure or whatever managed K8s provider.

OK, I see. For now I’m still using the default NGINX Ingress Controller. We’ll probably switch to another IC.

So, if I understand correctly, you have Contour running on all your worker nodes, right? What is your entry point for them?
I mean, are you redirecting all the traffic to a specific worker node, or do you have a load balancer in front of them that splits the traffic?

Regards


pcasis:

So, if I understand correctly, you have Contour running on all your worker nodes, right? What is your entry point for them?

More or less. I have Contour running on some worker nodes, not all.

pcasis:
I mean, are you redirecting all the traffic to a specific worker node, or do you have a load balancer in front of them that splits the traffic?

Oh, that is the best thing: I’m using MetalLB (BGP mode, with ECMP for load balancing across instances; it just talks to the routers), so there is no need for a “special” worker nor an outside load balancer :).

And I’m using the Contour Service with externalTrafficPolicy set to Local, so the source IP is preserved.
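For reference, this is roughly what that Service looks like (a sketch with assumed names and ports, not the exact manifest): MetalLB assigns and announces the external IP, and the Local traffic policy keeps the client source IP.

```yaml
# Hypothetical sketch: a LoadBalancer Service in front of the ingress
# controller's pods (e.g. Contour/Envoy). MetalLB announces the external IP
# over BGP, and externalTrafficPolicy: Local preserves the client source IP.
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes running the pods answer; source IP kept
  selector:
    app: envoy                   # placeholder label for the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```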


So you have multiple Contour instances that are reachable through MetalLB.

I’m missing the point of using MetalLB rather than a “standard” load balancer…
Could you explain a little bit? :smiley:


The point of MetalLB is routing traffic into the cluster. How to solve that problem depends on your network, and MetalLB is a nice solution if you can use it on your network.

And as it runs in your cluster, you don’t need to manage something external, nor worry about making the external load balancer HA and all that.
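If it helps, this is roughly what the MetalLB BGP configuration looks like in the (older) ConfigMap-based format; the router address, ASNs and address range below are placeholders, not my real values.

```yaml
# Rough sketch of MetalLB in BGP mode (legacy ConfigMap format).
# MetalLB peers with the routers and announces the Service IPs; with ECMP
# on the routers, traffic is spread across the nodes running the pods.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1     # placeholder router address
      peer-asn: 64512            # placeholder router ASN
      my-asn: 64513              # placeholder cluster ASN
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 203.0.113.0/24           # placeholder pool for Service external IPs
```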


Ok I see.

With MetalLB, traffic is routed directly into the cluster (as you said), whereas with an external load balancer, packets reach the cluster through the LB (so the source IP is changed by HAProxy, NGINX, or whatever sits in front).

I get it now.

Yeah, but a key point, I think, is that it’s a way, from within the Kubernetes cluster, to route traffic into the cluster. That is not a trivial problem.

I think the problem is properly explained here: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

I would highlight that, IMHO, and not so much the extra hop.

Thank you for the link.

The “Using a self-provisioned edge” method would be perfect for us. However, there is still the issue of the source IP not being preserved…

What do you mean? I shared how I set it up so it is preserved.

Am I missing something?


My bad,
I read this:

*This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore **not recommended** to use it despite its apparent simplicity.*

as part of the “Using a self-provisioned edge” section.

I don’t know what link you’re referring to. Did I miss it?

In any case, I’m not sure what “edge” means in the context of that article. But as I described, with MetalLB you can preserve the source IP of the client :slight_smile:


The ingress controller’s Service defines the external load balancer. E.g. on AWS, creating that Service with type: LoadBalancer creates an ELB for you. I believe the equivalent happens on GCP/GKE as well.


It is the K8s Service that acquires the static IP address, etc. A bit oddly, IMO, the TLS configuration is on the Ingress object and not the Service object. But c’est la vie.
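To illustrate that split (all names below are made up): the certificate reference lives on the Ingress object as an in-cluster Secret, while the external load balancer / IP comes from the controller’s Service.

```yaml
# Hypothetical illustration: TLS is configured on the Ingress object,
# while the external load balancer / IP comes from the controller's Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls   # certificate stored as a Secret in-cluster
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app       # the app's ClusterIP Service (placeholder)
                port:
                  number: 80
```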

As a practical matter, when you deploy an ingress controller, e.g. NGINX, it includes a Deployment for the nginx daemon itself, which runs in Pods inside the cluster. I.e. an ingress controller always runs inside the cluster. It seems like an external load balancer is something else, not an Ingress per se.

Took a look. Same goes for Contour:
https://raw.githubusercontent.com/projectcontour/contour/release-1.0/examples/render/contour.yaml

So it just comes down to how one pod (the LB) reaches another (your app): directly or via a Service. If directly, it has to talk to the Kubernetes API to maintain the current list of Pod endpoints, which is what makes things like session affinity possible. Otherwise, it’s probably best avoided without a strong justification.

I know this is an old thread. Just clarifying this stuff for my own benefit… :-)