I would like to know what you are doing regarding the Ingress Controller.
It’s possible to run them outside the cluster or inside the cluster. Each case has its pros and cons.
For example, running them as Pods in the cluster eases deployment and scalability. However, it consumes worker node resources.
What are you doing in your production env? Are you using dedicated servers/VMs for the ingress controller (like 2 or 3 hosts with VRRP or something like that)?
This topic is more about discussing the pros and cons of each solution, as needs are different, rather than imposing a single one.
In cloud providers with an integrated load balancing option, like the AWS ALB Ingress Controller (and probably something similar on Google), I think offloading that to the provider makes tons of sense (you can avoid having SSL certificates in the cluster, etc.).
I have done that, although I used ALB/ELB with a NodePort service (instead of an Ingress, because the ALB Ingress Controller was not ready yet, and before that because service annotations for type LoadBalancer were a mess; just using NodePort and handling the load balancer from Terraform was easier).
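For illustration, a minimal sketch of that pattern (names, labels, and ports here are made up, not the actual setup): a NodePort Service pins the ingress pods to fixed ports on every node, and an ELB managed outside Kubernetes (e.g. from Terraform) simply targets those node ports.

```yaml
# Hypothetical NodePort Service for the ingress controller pods.
# An ELB created separately (e.g. via Terraform) would forward traffic
# to ports 30080/30443 on every worker node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nodeport
  namespace: ingress-system
spec:
  type: NodePort
  selector:
    app: ingress-controller   # assumed label on the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```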
For providers where you don't have a load balancer service, in-cluster makes more sense to me, as you benefit from Kubernetes features (like scaling, availability, etc.). And that (I'm using Contour) is working quite well in the clusters.
Indeed, offloading it to the cloud provider seems to me the best solution if you use Google/Azure or any other managed K8s provider.
Ok, I see. For now I'm still using the default NGINX Ingress Controller. We'll probably switch to another IC.
So, if I understand correctly, you have Contour running on all your worker nodes, right? What is your entrypoint for them?
I mean, are you redirecting all the traffic to a specific worker node, or do you have a LoadBalancer in front of them that splits the traffic?
More or less. I have Contour running on some worker nodes, not all.
Oh, that is the best thing: I'm using MetalLB (BGP mode, with ECMP for load balancing across instances; it just talks to the routers), so there's no need for a "special" worker nor an outside load balancer :).
And I'm using the Contour service with externalTrafficPolicy set to Local, so the source IP is preserved.
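As a sketch (the Service name, namespace, labels, and ports below are assumptions, not the actual setup), the relevant parts are type: LoadBalancer, which MetalLB picks up and announces over BGP, plus externalTrafficPolicy: Local, which makes kube-proxy route only to pods on the receiving node and so avoids the SNAT hop that would rewrite the client's source IP:

```yaml
# Sketch of a Service for an in-cluster ingress controller such as Contour.
# MetalLB assigns and announces the external IP; externalTrafficPolicy: Local
# keeps traffic on the node that received it, preserving the source IP.
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: projectcontour
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: contour            # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```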
The point of MetalLB is routing traffic into the cluster. How to solve that problem depends on your network, and MetalLB is a nice solution if you can use it in yours.
And as it runs in your cluster, you don't need to manage anything externally, nor worry about making the external load balancer HA and all that stuff.
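For reference, older MetalLB releases were configured with a ConfigMap along these lines (the peer address, ASNs, and address pool are placeholders; newer releases use CRDs such as IPAddressPool and BGPPeer instead):

```yaml
# Sketch of a legacy MetalLB BGP configuration (ConfigMap style).
# MetalLB peers with the router and announces the assigned service IPs;
# with multiple speaker nodes, the router load-balances via ECMP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    peers:
    - peer-address: 10.0.0.1      # placeholder router address
      peer-asn: 64500             # placeholder router ASN
      my-asn: 64501               # placeholder MetalLB ASN
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.0.2.0/24              # placeholder pool for LoadBalancer IPs
```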
With MetalLB, traffic is routed directly into the cluster (as you said), whereas with an external LoadBalancer, packets reach the cluster through the LB (so the source IP is changed by HAProxy, NGINX, or whatever sits in front).
*This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore **not recommended** to use it despite its apparent simplicity.*
as part of the "Using a self-provisioned edge" section.
In any case, I'm not sure what "edge" means in the context of that article. But as I described, with MetalLB you can preserve the source IP of the client.
The Ingress controller's Service defines the external load balancer. E.g. on AWS, creating that Service with type: LoadBalancer creates an ELB for you. I believe the equivalent happens on GCP/GKE as well.
It is the K8s Service that acquires the static IP address, etc. A bit oddly, imo, the TLS configuration is on the Ingress object and not the Service object. But, c'est la vie.
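To make that split concrete, here is a minimal sketch (the hostnames, Secret name, and backend Service are made up): the controller's Service carries the LoadBalancer/IP side, while TLS is declared on the Ingress object.

```yaml
# Sketch: on a cloud provider, this Service provisions the external LB
# (an ELB on AWS) and holds the external IP...
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx     # assumed controller pod label
  ports:
    - name: https
      port: 443
      targetPort: 443
---
# ...while the TLS configuration lives on the Ingress object, which
# references a Secret holding the certificate and key.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  tls:
    - hosts:
        - app.example.com          # hypothetical hostname
      secretName: app-example-tls  # hypothetical TLS Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app          # hypothetical backend Service
                port:
                  number: 80
```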
As a practical matter, when you install an Ingress controller, e.g. nginx, it includes a Deployment for the nginx daemon itself, which runs in Pods inside the cluster. I.e. the Ingress controller always runs inside the cluster. An external load balancer is something else, not an Ingress per se.
So it just comes down to how one Pod (the LB) reaches another (your app): directly or via a Service. If directly, it has to talk to the K8s API to maintain the current list of endpoints for the Pods; this is necessary to maintain session affinity. Otherwise, it's probably best avoided without a strong justification.
I know this is an old thread. Just clarifying this stuff for my own benefit … :-)