Run
kubectl get svc -n kube-system
You should see output similar to this (taken from my cluster):
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
kube-dns                     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP                 107m
traefik-ingress-controller   NodePort    10.105.27.208   <none>        80:31308/TCP,8080:30815/TCP   2s
We see that the traefik-ingress-controller service is exposed on every node at port 31308 (the port number will differ in your cluster), so the IP of any node in the cluster serves as the external IP. You should now be able to reach Traefik's port 80 on your Minikube cluster by requesting port 31308:
$ curl $(minikube ip):31308
404 page not found
Note: We expect a 404 response here because we haven't yet given Traefik any configuration.
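If you'd rather not read the NodePort off the table by hand, you can extract it from the PORT(S) column. On a live cluster, kubectl can report it directly (assuming the port-80 entry is listed first) with kubectl get svc traefik-ingress-controller -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'; the sketch below instead parses the example output string from above, so it runs without a cluster:

```shell
# Extract the NodePort mapped to service port 80 from a PORT(S) string
# like the kubectl output above (value hardcoded from the example).
ports='80:31308/TCP,8080:30815/TCP'
nodeport=$(echo "$ports" | tr ',' '\n' | grep '^80:' | cut -d: -f2 | cut -d/ -f1)
echo "$nodeport"   # prints 31308
```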
The last step is to create a Service and an Ingress that expose the Traefik Web UI. From here on you can simply follow the official Traefik documentation:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Now let's set up an entry in our /etc/hosts file to route traefik-ui.minikube to our cluster.
In production you would want to set up real DNS entries. You can get the IP address of your minikube instance by running minikube ip:
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
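Note that tee -a appends unconditionally, so running the command twice leaves duplicate lines in /etc/hosts. A minimal sketch of an idempotent variant (using a temp file and a fixed IP as stand-ins for /etc/hosts and $(minikube ip), so it is safe to run anywhere):

```shell
# Append the hosts entry only if it is not already present.
HOSTS_FILE=$(mktemp)             # stand-in for /etc/hosts
IP=192.168.99.100                # stand-in for $(minikube ip)
add_entry() {
  grep -q 'traefik-ui\.minikube' "$HOSTS_FILE" || \
    echo "$IP traefik-ui.minikube" >> "$HOSTS_FILE"
}
add_entry
add_entry                        # second call is a no-op
cat "$HOSTS_FILE"                # the entry appears exactly once
```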
We should now be able to visit traefik-ui.minikube:<NODEPORT> in the browser and view the Traefik web UI. If you want visitors to reach your server on the standard port (TCP 80), you need to put a load balancer in front (there are many choices; HAProxy and NGINX are the most popular) that redirects this traffic to traefik-ui.minikube:<NODEPORT>. With just a single-node cluster this doesn't make much sense, but if you had a multi-node cluster built with kubeadm, you would see the benefit of it.
An example of setting up an NGINX load balancer in front of a multi-node cluster would be:
cat /etc/nginx/nginx.conf

load_module '/usr/lib64/nginx/modules/ngx_stream_module.so';

events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        # server <IP_ADDRESS_OF_K8S_NODE>:<TRAEFIK_NODEPORT>;
        server worker-0.hostname.com:31380;
        server worker-1.hostname.com:31380;
        server worker-2.hostname.com:31380;
    }
    server {
        listen yourpublicserverip:80;
        proxy_pass stream_backend;
    }
}
Note: Replace yourpublicserverip and worker-{0,1,2}.hostname.com with your own values.
As a result, any visitor to yourpublicserverip will end up at your ingress, which will then route the traffic to the appropriate microservice.
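By default, NGINX spreads new connections across the servers in an upstream block round-robin, so successive visitors land on successive workers. A minimal shell sketch of that selection logic, using the placeholder worker hostnames from the config above:

```shell
# Round-robin selection over the three upstream servers, mirroring
# what nginx does by default for the stream_backend block above.
pick_backend() {
  case $(( $1 % 3 )) in
    0) echo "worker-0.hostname.com:31380" ;;
    1) echo "worker-1.hostname.com:31380" ;;
    2) echo "worker-2.hostname.com:31380" ;;
  esac
}
for i in 0 1 2 3 4 5; do
  echo "connection $i -> $(pick_backend "$i")"
done
```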