NGINX Ingress Controller + front F5 Load Balancer

Hi, I'm relatively new to Kubernetes. I have picked up a fair amount of knowledge from reading 5 or 6 books on Kubernetes, but I have never built a cluster before. We plan to build an on-premises K8s cluster, and I have a question about the NGINX Ingress Controller.

In our current architecture we have an F5 load balancer fronting a cluster of nginx web servers, which sit in front of our app servers. We rely on the nginx web servers to handle user authentication, and the nginx servers are also configured as reverse proxies to all the backend app services. We'd like to keep the same security design by letting nginx handle user authentication.

Our app servers consist of Node.js, Tomcat, and Spring Boot servers.

The NGINX Ingress Controller seems to play two roles: first as a reverse proxy that routes traffic to different services, and second as a load balancer.

Does that mean I no longer need a front-end load balancer like the F5 once I have an NGINX Ingress Controller? If so, I'm wondering about the security impact. Currently the F5 sits on the public side and all the nginx servers are in the DMZ. Is it safe to leave the nginx server, acting as the ingress controller, exposed to the public?

My second question is about HA of the nginx server itself as the ingress controller. Can I configure an HA NGINX Ingress Controller?

Or should I still put the F5 in front of the NGINX Ingress Controller?

Thanks for any insight!

Hi!

Basically, you need to direct traffic to your Kubernetes cluster. How you do it really depends on your setup, but it is not a bad idea to have the F5 hold the public IPs and direct traffic (to pods or to the ingress controller, as you prefer).

If you want the F5 to route directly to pods, you will need your services to be of type NodePort. Then the F5 needs to know which node port routes to each app.
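For example, a minimal NodePort Service for one app might look like this (the name, labels, and port numbers below are just placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical app name
spec:
  type: NodePort
  selector:
    app: my-app               # must match the labels on your deployment's pods
  ports:
  - port: 80                  # port inside the cluster
    targetPort: 8080          # container port
    nodePort: 30080           # port opened on every node; the F5 pool members would point here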

If you want the F5 to route to your ingress, then the ingress controller's service is of type NodePort; the F5 routes to it, and the ingress routes to the pods.
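For example, if you deploy the controller with the stable/nginx-ingress Helm chart, I believe something like "--set controller.service.type=NodePort" exposes it on a node port (the value name is from memory, so double-check it against the chart).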

To be more precise, the ingress routes to a service (which is probably of type ClusterIP): the ingress sends traffic to that service IP, and kube-proxy does the load balancing. So, answering your question, the nginx ingress is not acting so much as a load balancer by itself, but it does that job in conjunction with kube-proxy.
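As a concrete sketch of that chain (all names here are hypothetical), the Ingress backend points at a ClusterIP Service, and kube-proxy spreads the traffic over that Service's pods:

apiVersion: v1
kind: Service
metadata:
  name: my-app                # referenced by the Ingress backend's serviceName
spec:
  type: ClusterIP             # internal-only virtual IP; kube-proxy balances across the pods
  selector:
    app: my-app               # must match the pod labels
  ports:
  - port: 80                  # what the Ingress backend's servicePort refers to
    targetPort: 8080          # container port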

It is safe to expose the ingress to the public, as long as you limit the ports available and you can actually do it (i.e. route traffic to it). That part is not trivial, and an F5, MetalLB, or something else might come in handy. I can search for a nice link explaining this problem later if you want, just let me know.

Regarding your second question: yes, your nginx ingress can have multiple replicas like any deployment. If traffic is routed correctly and failures are handled, you can simply route to several replicas in an HA setup.
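For example, assuming the controller runs as a Deployment named nginx-ingress-controller (the actual name depends on how you installed it), "kubectl scale deployment nginx-ingress-controller --replicas=3" gives you three replicas behind the same NodePort.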

And regarding your third question, I think it depends. But it is not a bad idea at all to do it :slight_smile:

Hope it helps,

Rodrigo

Rata, I’m not sure if you are still active on this discussion board but I hope so.

I have a similar setup, but a bit simpler. I want to use an on-prem F5 appliance to load balance a GCP/Kubernetes cluster that serves up web services (might be nginx, but could be something else). However, in my GCP environment, no public IPs can be used.

Once I create a GKE Deployment with my web servers: 1. What do I need to deploy on the GKE side? 2. What do I need to do to allow my network team to set up the F5 to reach my cluster?

I have tried this:

  • create a GKE cluster
  • create an nginx deployment
  • expose nginx on port 8080
  • deploy the nginx ingress controller with type=internal (external IPs not allowed)
  • create an ingress resource that uses the nginx ingress controller

What do you think about this deployment? Is there another/better way?

Lastly, and most important: how do I enable communication from the F5 to the ingress controller when it has a private IP instead of a public IP? I found a YouTube video about a BIG-IP controller that is installed on GKE. Is this required? If so, do you have a good how-to deployment guide?

Thanks in advance to anyone that is able to contribute!

The "lastly" part is the most important: if you don't have public IPs, you can't connect directly. You might need to use a VPN or something. That is the trickiest thing.

Also, do you really want to proxy traffic from on-prem to GKE, and not have public IPs on your cluster? I'd really question that, as it is an unusual requirement, fragile and with many disadvantages.

And if you do something simpler (like just using GCP load balancers instead of the on-prem LB, etc.), everything becomes much easier.

Well, the private IPs are within the same /8 network as the on-prem networks. A GCP IP of 10.128.0.0 should be reachable from an on-prem 10.10.0.0 provided the proper routes are in place. What I am prevented from doing is using a public IP that may be accessible from the public net. Of course it's easy enough to block all outside traffic, but the organization flat out blocks the use of public IPs to ensure mistakes don't occur. I agree, if I could use public IPs, things would be so much easier.

Oh, are the private IPs from GCP routable from your on-prem network or not? From this message I understand they are, but from the previous one it seems they aren't (you asked: "how do I enable communication from the F5 to the ingress controller that has a private IP instead of a public IP?").

If they are routable, doesn't Google offer a (private) load balancer with a private IP? You could route to that, if that is possible.

Another option (not very nice): route to the k8s nodes on the NodePort, or have some servers (outside the cluster) with fixed IPs doing iptables forwarding.
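As a rough sketch of that last idea (the node IP and port here are made up), the forwarding box would DNAT incoming traffic to a node's NodePort with something like "iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.128.0.5:30080", plus masquerading and IP forwarding so the replies come back through the same box.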

That's right, we have a dedicated interconnect between on-prem and GCP. I'm very new to GCP and trying to learn as I go, so apologies for redundant questions. In an on-prem world, I create my web servers and then create a VIP with pools that define all of the endpoints, then make the required firewall changes to allow traffic from client to VIP.

In the GCP/on-prem world, if I'm understanding it properly, I need to create the nginx cluster (done) and then expose the deployment. This is where the more I read, the more confused I seem to get. I really need to dumb this down to the most basic deployment. As I understand it, I need to follow these steps to expose the nginx service and the deployment, i.e. all the pods, through a single port or IP:

  1. expose the deployment on port 80 and/or 443
  2. create an ingress controller - this is where you define the type of load balancer
  3. create an ingress resource - this has no effect until an ingress controller exists; it defines where you want traffic to go from a URL perspective (~/index.html, ~/images/test.html, etc.)

For number 1, I would do this by executing "kubectl expose deployment test-app --port=80".

For number 2, I have read that Helm is the best option for creating an ingress controller, so I do this by executing "helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.service.annotations.cloud.google.com/load-balancer-type=NodePort".

For number 3, I use a yaml file to define the rules for the inbound traffic that is handled by the ingress controller, applied with "kubectl apply -f ingress-resource.yaml". Sample yaml below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: test-app
          servicePort: 80
Questions:

  1. Are the steps above accurate?
  2. When creating the ingress controller's service, the options are:
    • ClusterIP: this exposes a cluster-internal IP for the pods and cannot be used to route external traffic directly - correct?
    • NodePort: this opens a port on every node and forwards it to the pods. Is that port reachable both internally and externally, so that a non-GCP load balancer can reach it?
    • LoadBalancer: this creates a cluster IP for the pods and an external IP for external access. I can leverage annotations to assign a private IP as the EXTERNAL-IP (see the sketch after the example output below). Can I use this option to have an on-prem load balancer proxy client traffic to the GCP EXTERNAL-IP (a private IP)? Or, since the traffic is only between our internal RFC 1918 networks, can I skip the EXTERNAL-IP and target the CLUSTER-IP? I want to think the answer is no, because the CLUSTER-IP is for GKE-internal routing only.

NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
hello-app   NodePort   10.8.14.233   <none>        80:30030/TCP   8m4s
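For reference, a sketch of the internal LoadBalancer option mentioned above might look like this, assuming GKE's cloud.google.com/load-balancer-type annotation and placeholder names/ports; the resulting EXTERNAL-IP would come from the subnet's private range and is what the on-prem F5 would target:

apiVersion: v1
kind: Service
metadata:
  name: hello-app-internal      # hypothetical name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # ask GKE for an internal (RFC 1918) LB IP
spec:
  type: LoadBalancer
  selector:
    app: hello-app              # assumed pod labels
  ports:
  - port: 80
    targetPort: 8080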