Nginx ingress controller cannot reach services on all nodes

Cluster information:

Kubernetes version: 1.27.1
Cloud being used: bare-metal
Installation method: manual
Host OS: Debian 11
CNI and version:
CRI and version:

I am stuck and can't get any further.

I've set up a Kubernetes environment many times, each time with different settings, but I cannot get it to work.

Scenario:

  • I have two bare-metal servers, each running Debian 11
  • Server A is the k8s master, server B is a worker node
  • I have an nginx pod running successfully on the master and another on the worker node
  • I have a service for each pod: nginx-a-svc and nginx-b-svc (ClusterIP)
  • I installed the nginx-ingress-controller via Helm

When I deploy an Ingress, it can only reach the service on the master, not the service on the worker node.

I have no idea where to start solving the problem.

When I use port-forward, the service seems to work.

Thanks for any help!

Hey ChrizK,

What exactly is your goal? Do you want the load balancer to distribute your incoming requests over both nginx-a-svc and nginx-b-svc? In that case, why did you create two services? You could instead create one service and place multiple pods behind it.
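As a sketch of that one-service idea (the label `app: nginx`, the service name, and the port numbers here are assumptions, not taken from your setup — both of your pods would need to carry the shared label):

```yaml
# One ClusterIP Service selecting both nginx pods via a shared label.
# "app: nginx", "nginx-svc", and the ports are assumed values.
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

The Ingress would then route to `nginx-svc` alone, and kube-proxy load-balances across the endpoints on both nodes.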

I'm just guessing that this is what you are trying to do. Maybe you could explain in more depth what your end goal is. I think it would also be helpful if you posted some of your configuration files here, especially the one for the Ingress.
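In the meantime, a few checks that often narrow this kind of thing down (the namespace and deployment names for the controller are assumptions based on a default Helm install — adjust to your setup):

```shell
# Where are the pods running, and do both services have endpoints?
# (-o wide shows the node and pod IP for each pod)
kubectl get pods -o wide
kubectl get endpoints nginx-a-svc nginx-b-svc

# From inside the ingress controller pod, try to reach the worker-side
# service directly. A timeout here usually points at cross-node pod
# networking (the CNI), not at the Ingress itself.
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- \
  curl -sS --max-time 5 http://nginx-b-svc.default.svc.cluster.local
```

Since you left the "CNI and version" field empty: if no CNI plugin is installed, or it isn't healthy on both nodes, pod-to-pod traffic between nodes won't work, which would match exactly the symptom you describe (port-forward works because it tunnels through the API server and bypasses the CNI data path).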

I hope this helps you!

Mike