Overlapping ports with hostNetwork: true

Hi,

We want to run multiple ingress controllers on a specific node with hostNetwork: true. Is there any way to use overlapping ports on the same node with hostNetwork: true?

Regards,
Jebin J

How would that work? If you have 2 programs using the same port, which one receives the incoming traffic?

No.

Unless you use some socket option (SO_REUSEPORT, for example) to allow it, but I'm not sure what you would expect from that.

Here is our requirement.

In our case the ingress will run on two nodes labeled as edge, and only the edge nodes have external connectivity.

These ingress pods use the host network to avoid additional packet forwarding to the ingress pod: packets received on the edge node are handed directly to the ingress pod.

We have one more requirement: running a separate ingress pod for each external network. In this case port 443 is needed by both ingresses running on the same node, but the ingress (nginx) will bind only to its external IP, not to 0.0.0.0.

How can we achieve this? Is there any alternative solution?

Sorry, I don't follow. You have some ingress you want to use hostNetwork with. And what other thing is using the same port? And is it using the same IP too?

We want to run 2 ingress controllers on the same node (not replicas) for network segregation.

Oh, then as long as you bind each one to the correct IP (if they use different IP addresses and the ingress supports that), it might work.

You could maybe use the downward API to expose the host IP value? If that works, it might be possible, provided the ingress supports binding to specific addresses (I don't use ingress, so I don't really know about it).
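For example, a minimal sketch of what I mean with the downward API (the HOST_IP variable name is just an example, and note it only exposes one address, so with several external interfaces you may still have to pass the address explicitly):

env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP   # node IP as seen by the kubelet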

In our case the ingress has the capability to bind to a specific external IP.

Hi,

Below are the sample IPs on the edge node. Consider that eth2 & eth3 have external access, and eth1 is the overlay network:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:51:e4:a3 brd ff:ff:ff:ff:ff:ff
inet 172.16.2.20/24 brd 172.16.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe51:e4a3/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:57:c4:eb brd ff:ff:ff:ff:ff:ff
inet 172.16.4.20/24 brd 172.16.4.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe57:c4eb/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:08:82:f0 brd ff:ff:ff:ff:ff:ff
inet 172.16.3.20/24 brd 172.16.3.255 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe08:82f0/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:22:a0:15 brd ff:ff:ff:ff:ff:ff
inet 72.16.5.20/24 brd 192.168.10.255 scope global noprefixroute dynamic eth0
valid_lft 61472sec preferred_lft 61472sec
inet6 fe80::f816:3eff:fe22:a015/64 scope link
valid_lft forever preferred_lft forever

The ingress has an option (externalIPs) to pass a specific IP to bind to. Here I can pass 172.16.3.20 for one deployment and 72.16.5.20 for the other deployment.

The ingress (nginx) will try to bind ports 80 & 443 on the node using addresses 172.16.3.20 & 72.16.5.20. This requires hostNetwork: true; please correct me if I'm wrong.
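For context, externalIPs is a regular Service field; a rough sketch of how we pass it (the names here are illustrative, not our exact manifests):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-net1
spec:
  selector:
    app: ingress-nginx-net1    # pods of the first controller
  externalIPs:
  - 172.16.3.20                # external IP for this network
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443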

Below is our DaemonSet configuration. When I tried it locally, the first deployment succeeded, but for the second deployment the pod itself is not being created.

ports:
- containerPort: 80
  hostPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  hostPort: 443
  name: https
  protocol: TCP

hostNetwork: true

Regards,
Jebin J

Sorry, I don't follow what the problem is.

Using hostNetwork you see all the host interfaces, right? Can't you bind to the IP you want? Or does it fail in some way?

Hi,

The pod is in Pending state and I'm getting the log below:

Warning FailedScheduling 38s (x25 over 63s) default-scheduler 0/9 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taints that the pod didn't tolerate, 7 node(s) didn't match node selector.

Oh, so maybe the scheduler fails to allocate using hostPort? Makes sense, though, as the scheduler doesn't know which IP you will bind to.

What if you specify different hostPorts but use hostNetwork and bind to IP1:80 on one and IP2:80 on the other? Would that work around the scheduler not knowing about that? Have you tried it?


Thanks, I'll try this.

Just another general suggestion to get around having to use hostPort: you could use MetalLB for your external IPs and update the DaemonSet with a node restriction to strictly run on your network ingress nodes. You could then run your ingress controller with a similar node restriction or affinity so that they only run on the nodes with an instance of the MetalLB speaker.

In that scenario your LoadBalancer Services can bind to your external IPs/ports without having to expose anything with host networking, and the overhead would be very small since they'd be running on the same set of hosts. A rough sketch of the idea follows.
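(Names, labels and the pool below are illustrative: the ingress DaemonSet gets a nodeSelector for the edge nodes, and each controller is exposed through its own LoadBalancer Service that asks MetalLB for an IP from the right pool.)

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-net1
  annotations:
    metallb.universe.tf/address-pool: net1-pool   # pool for this external network
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.3.20                     # request a specific IP from that pool
  selector:
    app: ingress-nginx-net1
  ports:
  - name: https
    port: 443
    targetPort: 443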

Hi,

I have some doubts about MetalLB, could you please clarify?
In our case we have multiple external IPs from different networks, and each external network is isolated. In this case, how does MetalLB route the traffic to the outside? Do we need any specific router/switch outside the cluster?

How will packets be routed from MetalLB to the ingress controller?

You can assign it multiple IP pools. By default in layer 2 mode it will ARP/NDP announce across all interfaces and 'bind' where it can. If the ingress pod and the MetalLB speaker are on the same node, it'll act closer to a virtual IP on the box than a load balancer.
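For example, two pools using the (older) ConfigMap-based configuration; the addresses are just the ones from your output, and newer MetalLB releases express the same thing with IPAddressPool/L2Advertisement resources instead:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: net1-pool
      protocol: layer2
      addresses:
      - 172.16.3.20/32
    - name: net2-pool
      protocol: layer2
      addresses:
      - 72.16.5.20/32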

If you are using hostNetwork, you don't need hostPort at all. You're right that the scheduler doesn't know how to multiplex hostPorts based on IPs. Once you go to hostNetwork you are in control, you don't need the scheduler to help unless you are competing with others who use hostPort.
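So, roughly, the pod spec could look like this (a sketch; the ports/hostPort entries are simply dropped, and the image/label names are placeholders):

spec:
  template:
    spec:
      hostNetwork: true
      nodeSelector:
        node-type: edge                           # illustrative label for your edge nodes
      containers:
      - name: nginx-ingress
        image: your-ingress-controller-image     # placeholder
        # no hostPort entries: the scheduler no longer rejects the second pod,
        # and each controller binds 172.16.3.20:443 or 72.16.5.20:443 itself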
