Traefik ingress not working

Hi, I just created my first reverse proxy ever, hehehe.
I'm using the default configuration from the site, with a DaemonSet.

I edited my /etc/hosts to point the host name at the cluster.

When I try to access http://traefik-ui.minikube/ in my browser,
I get "connection refused".

On my server, there is no service listening on port 80.



First, check the services in the kube-system namespace:

kubectl get svc -n kube-system

You should see something similar to this (this is example output from my cluster):

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
kube-dns                     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP                 107m
traefik-ingress-controller   NodePort    10.109.232.24   <none>        80:31308/TCP,8080:30815/TCP   2s

We see that the traefik-ingress-controller service is exposed on every node at port 31308 (the port number will be different in your cluster), so the external IP is the IP of any node of our cluster. You should now be able to reach Traefik on port 80 of your Minikube cluster by requesting the NodePort 31308:

$ curl $(minikube ip):31308

404 page not found

Note: We expect a 404 response here, as we haven't yet given Traefik any configuration.
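If you don't want to read the port off the table by hand, you can look the NodePort up with kubectl's jsonpath output (this sketch assumes the service name traefik-ingress-controller from the output above):

```shell
# Grab the NodePort that maps to the service's port 80
# (service name taken from the `kubectl get svc` output above).
NODEPORT=$(kubectl get svc traefik-ingress-controller -n kube-system \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')

# Then hit Traefik through any node, e.g. the minikube node:
curl "$(minikube ip):${NODEPORT}"
```

This is handy in scripts, since the NodePort is assigned randomly from the 30000-32767 range unless you pin it in the Service spec.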

The last step would be to create a Service and an Ingress that expose the Traefik Web UI. From here on you can simply follow the official Traefik documentation:

kubectl apply -f
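If you prefer not to fetch a manifest, here is a minimal sketch of what that Service and Ingress can look like (the names, namespace and label selector are assumptions based on the Traefik examples; adjust them to match your DaemonSet):

```yaml
# Service exposing the Traefik dashboard (port 8080 on the controller pods).
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb    # assumed pod label; match your DaemonSet
  ports:
    - name: web
      port: 80
      targetPort: 8080
---
# Ingress routing traefik-ui.minikube to that Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: traefik-web-ui
                port:
                  number: 80
```

Apply it with kubectl apply -f and Traefik will pick up the Ingress rule on its own.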

Now let's set up an entry in our /etc/hosts file to route traefik-ui.minikube to our cluster.

In production you would want to set up real DNS entries. You can get the IP address of your minikube instance by running minikube ip:

echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts

We should now be able to visit traefik-ui.minikube:<NODEPORT> in the browser and view the Traefik web UI. In your case, if you want to reach your server on plain TCP 80, you need to put a load balancer in front (there are many choices; HAProxy and NGINX are the most popular) that redirects that traffic to traefik-ui.minikube:<NODEPORT>. On a single-node cluster this doesn't make much sense, but on a cluster built with kubeadm (using multiple nodes) you can see the benefit of it.

An example of setting up an NGINX LB in a multi-node cluster would be:

cat /etc/nginx/nginx.conf

load_module /usr/lib64/nginx/modules/ngx_stream_module.so;

events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        # one entry per worker node, pointing at the Traefik NodePort
        server worker-0:31308;
        server worker-1:31308;
        server worker-2:31308;
    }

    server {
        listen yourpublicserverip:80;
        proxy_pass stream_backend;
    }
}

Note: Replace yourpublicserverip and worker-{0,1,2} with your values

As a result, any visitor to yourpublicserverip will end up at your ingress, which will then route the traffic to the appropriate microservice :wink:
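To check the whole chain from outside (assuming the host names used above; yourpublicserverip is still a placeholder), you can curl the load balancer directly and set the Host header that the Ingress rule routes on:

```shell
# Hit the NGINX load balancer on port 80; the Host header tells
# Traefik which Ingress rule to match.
curl -H "Host: traefik-ui.minikube" http://yourpublicserverip/
```

If everything is wired up, this returns the Traefik dashboard HTML instead of the earlier "connection refused".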
