How to ingress correctly via nginx-ingress-controller and some questions

I’m new to k8s and I’m having slight trouble setting up a local cluster on virtual machines and making it accessible from at least my host machine.

setup.

  • I have a local cluster consisting of three Debian virtual machines, with 1 master and 2 workers. I created a virtual interface with CIDR 10.0.0.0/16 and set the DHCP domain to abcd.com.
  • When creating the cluster I set clusterDNS: abcd.com, podCIDR: 69.96.0.0/16 and serviceCIDR: 69.97.0.0/16 (see the kubeadm config sketch after this list). I am using the Cilium CNI and this CoreDNS Corefile:
.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes abcd.com in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
  • I deployed an nginx-pod in the default namespace and created a service nginx-svc of type NodePort that exposes nginx-pod on port 80. I created an ingress rule that forwards abcd.com to nginx-svc (see the Ingress sketch after this list).
  • I then installed the ingress-nginx-controller.
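
For reference, the networking part of a kubeadm cluster-config.yaml with these values would look roughly like this (a sketch, not my exact file):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: abcd.com           # cluster DNS domain (matches "kubernetes abcd.com ..." in the Corefile)
  podSubnet: 69.96.0.0/16       # podCIDR
  serviceSubnet: 69.97.0.0/16   # serviceCIDR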
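
And a sketch of the Ingress rule (reconstructed from the describe output below; pathType is assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - host: abcd.com
      http:
        paths:
          - path: /
            pathType: Prefix        # assumed; Exact would also match a plain "/"
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80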

logs.

  • ingress:
ayush@ip-10-0-191-144:~$ kubectl describe ingress/example
Name:             example
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  abcd.com
                 /   nginx-svc:80 (69.96.1.111:80)
Annotations:     <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    30m (x5 over 11h)  nginx-ingress-controller  Scheduled for sync
  • services:
ayush@ip-10-0-191-144:~$ kubectl get svc -A
NAMESPACE       NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP      69.97.0.1       <none>        443/TCP                      2d14h
default         nginx-svc                            NodePort       69.97.216.124   <none>        80:30744/TCP                 26h
ingress-nginx   ingress-nginx-controller             LoadBalancer   69.97.88.241    <pending>     80:31565/TCP,443:32581/TCP   2d2h
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      69.97.200.46    <none>        443/TCP                      2d2h
kube-system     cilium-agent                         ClusterIP      None            <none>        9964/TCP                     2d14h
kube-system     hubble-metrics                       ClusterIP      None            <none>        9965/TCP                     2d14h
kube-system     hubble-peer                          ClusterIP      69.97.190.83    <none>        443/TCP                      2d14h
kube-system     hubble-relay                         ClusterIP      69.97.188.92    <none>        80/TCP                       2d14h
kube-system     hubble-ui                            ClusterIP      69.97.9.207     <none>        80/TCP                       2d14h
kube-system     kube-dns                             ClusterIP      69.97.0.10      <none>        53/UDP,53/TCP,9153/TCP       2d14h
  • pods:
ayush@ip-10-0-191-144:~$ kubectl get pods -A -o wide
NAMESPACE       NAME                                        READY   STATUS    RESTARTS       AGE     IP             NODE              NOMINATED NODE   READINESS GATES
default         dnsutils                                    1/1     Running   1 (11h ago)    11h     69.96.2.47     ip-10-0-94-54     <none>           <none>
default         nginx-pod                                   1/1     Running   4 (11h ago)    26h     69.96.1.111    ip-10-0-38-41     <none>           <none>
ingress-nginx   ingress-nginx-controller-55474d95c5-f4kfx   1/1     Running   4 (11h ago)    2d3h    69.96.1.80     ip-10-0-38-41     <none>           <none>
kube-system     cilium-428bt                                1/1     Running   7 (11h ago)    2d14h   10.0.38.41     ip-10-0-38-41     <none>           <none>
kube-system     cilium-8hpbb                                1/1     Running   4 (11h ago)    2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     cilium-cxgnc                                1/1     Running   6 (11h ago)    2d14h   10.0.94.54     ip-10-0-94-54     <none>           <none>
kube-system     cilium-operator-6cdc4568cb-gg58q            1/1     Running   39 (11h ago)   2d14h   10.0.38.41     ip-10-0-38-41     <none>           <none>
kube-system     cilium-operator-6cdc4568cb-k7nsd            1/1     Running   4 (11h ago)    2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     coredns-76f75df574-45f7b                    1/1     Running   9 (11h ago)    2d14h   69.96.0.43     ip-10-0-191-144   <none>           <none>
kube-system     coredns-76f75df574-77m4h                    1/1     Running   9 (11h ago)    2d14h   69.96.0.221    ip-10-0-191-144   <none>           <none>
kube-system     etcd-ip-10-0-191-144                        1/1     Running   11 (11h ago)   2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     hubble-relay-d8b6b55c9-st57x                1/1     Running   9 (11h ago)    2d11h   69.96.2.197    ip-10-0-94-54     <none>           <none>
kube-system     hubble-ui-6548d56557-b6f49                  2/2     Running   9 (11h ago)    2d11h   69.96.2.93     ip-10-0-94-54     <none>           <none>
kube-system     kube-apiserver-ip-10-0-191-144              1/1     Running   11 (11h ago)   2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     kube-controller-manager-ip-10-0-191-144     1/1     Running   11 (11h ago)   2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     kube-proxy-7gc2j                            1/1     Running   4 (11h ago)    2d14h   10.0.94.54     ip-10-0-94-54     <none>           <none>
kube-system     kube-proxy-xt8nc                            1/1     Running   4 (11h ago)    2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>
kube-system     kube-proxy-xzjjj                            1/1     Running   4 (11h ago)    2d14h   10.0.38.41     ip-10-0-38-41     <none>           <none>
kube-system     kube-scheduler-ip-10-0-191-144              1/1     Running   11 (11h ago)   2d14h   10.0.191.144   ip-10-0-191-144   <none>           <none>

questions.

  1. Even though the ingress-nginx-controller shows Running, its service doesn’t get an external IP and is stuck on <pending>. I’m not sure why! (You can see it above in the service output.)
  2. From any node’s IP, I’m able to access nginx-svc by doing curl -H 'Host: abcd.com' http://10.0.191.144:31565, but how can I make it so that I can reach nginx-svc via any node’s external IP alone? (i.e. 10.0.191.144 [my master node IP] should route me to nginx-svc or nginx-pod)

Can I switch the ingress-nginx-controller port 31565 to 80 & 443, or some other port, such that I can use a node’s external IP and still reach the service defined in the rule?

  3. Is it also possible to reach nginx-pod by the domain name abcd.com, or better yet something different such as pqrs.com, externally? Given that I own pqrs.com and it has an A record pointing to the external load balancer IP or one of the node IPs for the ingress-nginx-controller?

My final goal would be to set an A record on multiple domains that I own pointing to an external load balancer that balances between the IPs of all nodes (or just of the ingress-nginx-controller service), and then have the ingress-nginx-controller route requests to the appropriate service through the ingress rules. Is it possible to do so? I’ll hopefully deploy this on AWS (I want to avoid their load balancer service in this setup if possible and instead use a dedicated machine for it).

  4. Can I make the “dhcp domain” and the “cluster dns” domain separate? For example, in my interface settings I’d set abcd.com, and when creating the cluster I’d set pqrs.com?

Thank you!

Cluster information:

Kubernetes version: v1.29.3
Cloud being used: bare-metal (virtual-machine-manager)
Installation method: kubeadm init --config cluster-config.yaml
Host OS: Linux host 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
CNI and version: cilium v1.15.1
CRI and version: containerd containerd.io 1.6.28 ae07eda36dd25f8a1b98dfbf587313b99c0190bb

Oh god I remember how much time I spent on this as well.

  1. It’s not working because you need something that gives your ingress controller an external IP address, such as MetalLB: [ingress-nginx/docs/deploy/baremetal.md at main · kubernetes/ingress-nginx · GitHub]

I’m still new to k8s so I can’t give you a full guide, but if you deploy both the ingress controller and MetalLB with Helm and set the right configurations
(always do a “helm show values repo/app > values.yaml” and then a “helm upgrade --install app repo/app --values values.yaml” to keep a backup, or even better use Argo CD; a rough sketch follows below),
the ingress controller will magically get an external IP address assigned from the pool configured in MetalLB, and thus every ingress created through the ingress controller will be accessible at that address. (The ingress controller routes the traffic to the correct ingress resource based on the host and path.)
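
Something along these lines (the repo URLs and chart names are the upstream defaults, double-check them against each project’s docs):

helm repo add metallb https://metallb.github.io/metallb
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# keep a backup of the default values before editing them
helm show values metallb/metallb > metallb-values.yaml
helm show values ingress-nginx/ingress-nginx > ingress-nginx-values.yaml

helm upgrade --install metallb metallb/metallb -n metallb-system --create-namespace --values metallb-values.yaml
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace --values ingress-nginx-values.yaml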

The IP addresses of the MetalLB pool must be “normal” IP addresses similar to the ones your VMs are using, such as 192.168.1.x or, in your case, addresses from your virtual interface’s 10.0.0.0/16 range.
(Out of curiosity, are you using Proxmox to create your home lab? In that case remember to check the firewall rules!)
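
With recent MetalLB versions the pool is configured through CRDs; a minimal sketch, assuming you carve out a slice of your 10.0.0.0/16 network that the DHCP server will never hand out (the exact range is up to you):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.240.1-10.0.240.50   # placeholder range, keep it outside your DHCP scope
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool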

  3. So the full route, if it is open to the web, will be:
    pqrs.com → router_public_ip_address → router_internal_ip (→ reverse_proxy_internal_ip) → ingress_external_ip_address → k8s_service_ip → k8s_pod_ip

Remember that you need to manually add an A record for your domain name → pointing to your router’s external IP address, and in your router you need to manually add a forward → to your ingress external IP for the ports needed (if it’s HTTP and HTTPS you will need both port 80 and 443).
Remember to set the correct host inside the ingress configuration: it must be the one defined in your A record, i.e. the one the user is trying to access, such as pqrs.com.
And of course set the correct service name and port in the ingress configuration.

  4. You can have any domain or subdomain that you want connected to your ingress, such as abcd.com or web.abcd.com, provided that you have all the “links” described in point 3.
    By the way, I didn’t change the default configuration of the Corefile in k8s for the ingress to work.

  2. If you just want to access your service by connecting to any worker node’s IP (I’m not sure it works with the master as well), maybe just to test things out, you only need a NodePort (see the sketch after this list):
    → worker_ip:nodePort → k8s_service_ip:port → k8s_pod_ip:targetPort
    [ClusterIP vs NodePort vs LoadBalancer: Key Differences & When to Use]
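
A minimal NodePort Service sketch to illustrate that chain (the selector and ports are placeholders, match them to your pod):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx          # must match the labels on the pod
  ports:
    - port: 80          # k8s_service_ip:port
      targetPort: 80    # k8s_pod_ip:targetPort
      nodePort: 30744   # worker_ip:nodePort, must be in 30000-32767; omit it to let k8s pick one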

If something does not work try to check each step of the flow:

  • Check the pod: is the pod reachable on localhost if I do a “kubectl exec podname -- curl http://localhost:80”?
  • Check the service: is the service IP reachable from a pod in the same namespace if I do a “kubectl exec otherpodname -- curl http://service_ip:80”?
  • Check the worker node: if I change the service from a ClusterIP to a NodePort, can I access it from a machine in the same network as the worker node with “curl http://worker_ip:nodePort”?
  • Check the ingress resource: if I set a custom DNS resolution that matches domainName → ingressExternalIP, e.g. pqrs.com → 10.0.x.x, can I access it with “curl http://pqrs.com”?
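
Concretely, with the names from your outputs, those checks could look something like this (the ingress external IP is a placeholder until MetalLB assigns one, and this assumes the pod images ship curl):

# 1. pod answers on localhost inside its own container
kubectl exec nginx-pod -- curl -s http://localhost:80

# 2. service ClusterIP reachable from another pod in the same namespace
kubectl exec dnsutils -- curl -s http://69.97.216.124:80

# 3. NodePort reachable from a machine on the nodes' network
curl http://10.0.191.144:30744

# 4. ingress reachable once the host name resolves to the ingress external IP
#    (--resolve pins abcd.com to that IP for this request only; 10.0.240.1 is a placeholder)
curl --resolve abcd.com:80:10.0.240.1 http://abcd.com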

In addition, what do you mean by “I created an ingress rule that had a rule to forward abcd.com to nginx-svc”? The ingress resource automatically creates the forwarding rule towards the backend service.
Remember that if you want any external client to access the ingress while looking for the address abcd.com, you need to make that external client aware of the correct IP address: either by changing the DNS resolution for THAT external client (if you have a DHCP server with a DNS resolver such as pfSense, you can just create an A record in there that matches the abcd.com domain with the external IP address of your k8s ingress; it’s not too complex if you are inside a lab environment), or by changing the DNS records of your public domain (by following the full chain described at point 3).

Hope it helps.
Leo

Thank you so much Leo! I was able to install MetalLB and now I understand what I was missing and how it fits together. I got everything working and I’m able to access the nginx-pod from my host machine on the domain tuvw.com. To make it work locally (since my nodes are simply Debian virtual machines), I had to emulate A records via /etc/hosts on my host machine: I appended the line <external_nginx_ingress_ip> tuvw.com random.tuvw.com so that my browser points to <external_nginx_ingress_ip> when navigating to those domains (see below).
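
In other words, something like this on the host machine (the placeholder is whatever external IP MetalLB assigned to the ingress-nginx-controller service):

# fake the A records locally by appending them to /etc/hosts
echo "<external_nginx_ingress_ip> tuvw.com random.tuvw.com" | sudo tee -a /etc/hosts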

I also deployed another httpd-pod → created an httpd-svc → created a new ingress (with random.tuvw.com as the host). This also works from my host, and now I have two services:

  • [ tuvw.com ] → [ <external_nginx_ingress_ip> → nginx-ingress-controller → nginx-svc → nginx-pod ]
  • [ random.tuvw.com ] → [ <external_nginx_ingress_ip> → nginx-ingress-controller → httpd-svc → httpd-pod ]

The first [ ] block represents my host Debian machine and the second [ ] represents my virtual machines.

I do have a few follow-up questions, Leo, if I may:

  1. In MetalLB’s IPAddressPool, which is used for the nginx-ingress-controller: do I have to set this to the same range as my virtual network CIDR (10.0.0.0/16), or could it be anything random (11.0.24.53, 10.0.52.31, etc.), in both the bare-metal scenario and on a cloud provider like AWS?
  2. How would I approach something similar to MetalLB on a cloud provider like AWS? Do I have to use their aws-cloud-controller-manager to provision an Elastic Load Balancer? What if I want to avoid using the cloud provider’s external load balancer, and are there alternatives to MetalLB that are compatible with cloud providers?

Earlier this month, I tried a similar cluster on AWS with aws-cloud-controller-manager to automatically provision LoadBalancers for me, but it seemed to error a lot in random ways (EC2 nodes not getting a providerID even though I followed every step mentioned in the deployment process for their cloud controller, etc.), and hence I’m seeking alternatives, since I was unable to successfully set up a cluster with their controller.

(Out of curiosity, are you using Proxmox to create your home lab? In that case remember to check the firewall rules!)

I’m using QEMU/KVM to spin up 3 Debian virtual machines, and I used virsh net-create to create the virtual network interface used by my nodes.