How to expose a non-HTTP cluster service with Traefik on a custom port

I am new to microk8s (coming from the Docker world) and enabled the Traefik ingress controller for microk8s. Using this controller I was able to successfully expose my services via HTTP and HTTPS to the rest of my network. Now I want to expose a non-HTTP service (Redis in this case) on port 6379, and I can’t seem to find out how to do this.

For example, in a docker-compose.yaml I used the following labels to expose port 22 of my GitLab:

- "traefik.tcp.routers.gitlab_ssh.rule=HostSNI(`*`)"
- "traefik.tcp.routers.gitlab_ssh.entrypoints=ssh"
- "traefik.tcp.routers.gitlab_ssh.service=service_gitlab_ssh"
- "traefik.tcp.services.service_gitlab_ssh.loadbalancer.server.port=22"

so I would expect something similar to work with Traefik in microk8s.

For standard HTTP ingress I used this YAML (here for CyberChef):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  generation: 5
  name: cyberchef-ingress
  namespace: default
spec:
  ingressClassName: public
  rules:
    - host: cyberchef.microk8s.home.somewhere
      http:
        paths:
          - backend:
              service:
                name: cyberchef-svc
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - cyberchef.microk8s.home.somewhere
      secretName: wildcard-tls
status:
  loadBalancer:
    ingress:
      - ip: 127.0.0.1

which works perfectly.

But of course I cannot just replace port 80 with 6379 when doing this for Redis, since Redis does not speak HTTP.

The Traefik documentation (Routing Configuration for Traefik CRD) talks about the CRD traefik.containo.us/v1alpha1, but I don’t seem to have this in my cluster.

How do I, in general, expose a service on port X to the outside world with microk8s? Please bear in mind that I am a beginner here!

kubectl -o yaml get svc micro-redis-master
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: micro-redis
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-06-09T13:05:10Z"
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: micro-redis
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    helm.sh/chart: redis-16.11.3
  name: micro-redis-master
  namespace: default
  resourceVersion: "31115746"
  selfLink: /api/v1/namespaces/default/services/micro-redis-master
  uid: 8361413b-3e03-4c97-a854-20215042cba1
spec:
  clusterIP: 10.152.183.178
  clusterIPs:
  - 10.152.183.178
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp-redis
    port: 6379
    protocol: TCP
    targetPort: redis
  selector:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: micro-redis
    app.kubernetes.io/name: redis
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
 kubectl get svc
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP    210d
whoami                      ClusterIP   10.152.183.147   <none>        80/TCP     209d
cyberchef-svc               ClusterIP   10.152.183.145   <none>        80/TCP     209d
kuard                       ClusterIP   10.152.183.21    <none>        8080/TCP   201d
mysql-1638205545-headless   ClusterIP   None             <none>        3306/TCP   191d
mysql-1638205545            ClusterIP   10.152.183.29    <none>        3306/TCP   191d
grafana                     ClusterIP   10.152.183.121   <none>        3000/TCP   133d
micro-redis-headless        ClusterIP   None             <none>        6379/TCP   52m
micro-redis-replicas        ClusterIP   10.152.183.216   <none>        6379/TCP   52m
micro-redis-master          ClusterIP   10.152.183.178   <none>        6379/TCP   52m
kubectl get -o yaml endpoints redis
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"redis","namespace":"default"},"subsets":[{"addresses":[{"ip":"192.168.0.93"}],"ports":[{"name":"redis","port":6379}]}]}
  creationTimestamp: "2022-06-09T13:14:14Z"
  name: redis
  namespace: default
  resourceVersion: "31116961"
  selfLink: /api/v1/namespaces/default/endpoints/redis
  uid: f366b549-3eea-4489-babd-c715ac8124cd
subsets:
- addresses:
  - ip: 192.168.0.93
  ports:
  - name: redis
    port: 6379
    protocol: TCP

Hi cyberschlumpf:

Ingress can only expose HTTP and HTTPS connections; see Ingress | Kubernetes

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.

And later on the same page:

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

Best regards,

Xavi

OK, understood!

Could you kindly provide a most simple example of using NodePort to expose port 6379 on the public IP of the host? The LoadBalancer variant doesn’t work for me, as the dynamic MAC advertising (IIRC) was a problem on the ESXi host that my microk8s Ubuntu VM runs on. Thanks a lot in advance!

Hi cyberschlumpf:

By default, NodePort :

allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).

This range can be configured, but that’s not something you would do unless you have a reason to.

You can specify a particular port, but:

If you want a specific port number, you can specify a value in the nodePort field. (…) You also have to use a valid port number, one that’s inside the range configured for NodePort use.
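For illustration, a minimal NodePort Service for your Redis master could look like the sketch below. The selector labels are taken from your micro-redis-master output; the Service name and the nodePort value are arbitrary picks on my side (the port just has to fall within the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: micro-redis-nodeport   # hypothetical name
  namespace: default
spec:
  type: NodePort
  selector:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: micro-redis
    app.kubernetes.io/name: redis
  ports:
  - name: tcp-redis
    port: 6379          # port inside the cluster (ClusterIP)
    targetPort: redis   # named container port, as in your existing service
    nodePort: 30379     # example value within the default 30000-32767 range
```

After applying this, Redis should be reachable on &lt;node-ip&gt;:30379 from outside the cluster.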

Regarding the public IP of the node:

if you want to specify particular IP(s) to proxy the port, you can set the --nodeport-addresses flag for kube-proxy or the equivalent nodePortAddresses field of the kube-proxy configuration file to particular IP block(s).

And:

The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
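If you only want certain node IPs to answer NodePort traffic, the corresponding kube-proxy configuration fragment would look roughly like this. The CIDR is an example based on the 192.168.0.x addresses in your output; note that in microk8s, kube-proxy flags are (as far as I know) set in the snap’s args files rather than via a standalone config file:

```yaml
# KubeProxyConfiguration fragment -- cluster-level, not part of a Service
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:
- 192.168.0.0/24   # example block; only node IPs in this range serve NodePorts
```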

About the issue with LB, you may deploy your LB and configure your worker nodes as targets (statically):

Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes’ IPs directly.

Best regards,

Xavi

I’m not sure how I need to provide those command-line options within the YAML definition file for my services. Could you kindly provide a simple example which shows how to use NodePort/nodePortAddresses as YAML? Is this something I need to specify within the service definition?

Hi:

I am sorry, but I am not able to provide a simple copy & paste solution; I am afraid that there is no simple copy & paste solution.

On the bright side, the Kubernetes documentation is excellent; as you may have noticed, my replies contain links to the official K8s site or quotes from the linked pages with information relevant to your scenario (or what I guess is relevant, at least). I would recommend going to the source and reading the linked pages (or at least the designated sections), as they may provide context or information relevant to you that I may have overlooked.

Changing the range of ports that the Kubernetes cluster uses to expose services of type NodePort can’t be done from the Service definition (each user might set a different range of ports!), so, although the port range can be configured, it’s a cluster-wide modification (I am not sure if it can be changed after the cluster has been deployed).

This port range can be configured in the kube-proxy configuration; kube-proxy is one of the Kubernetes components running on each node… As I’ve said, I don’t even know if it can be changed once the cluster is up and running…

I’ve Googled a little bit about how to deploy Redis on Kubernetes, but everything I’ve found deploys the Redis database internally, consumed from pods deployed in the same cluster as the database.

It may be possible to configure the Kubernetes cluster to play nice with “public-Redis”, but at this point I would reconsider using Kubernetes for exposing Redis “publicly” and go for an alternative approach (Docker, maybe Docker Swarm or good old VMs).

Depending on your requirements, it may even be simpler to deploy the application using the Redis database inside the cluster.

Either way, it’s not going to be a simple copy & paste solution.

Best regards,

Xavi

Thanks, I’ll give that a try. I still don’t fully understand how Traefik is integrated into microk8s at the moment. Apparently, what’s not present by default are the Traefik Kubernetes CRDs, so the traefik.containo.us/… APIs are not available to be used. I am not sure if those are optional or are present in a non-microk8s cluster. So, for example, what I cannot define are objects of type IngressRouteTCP, which look pretty much like what I did with Docker.

I copy/pasted the CRDs from the traefik webpage and applied them and they are now available.
I also added this one here and it applied successfully:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
  namespace: default

spec:
  entryPoints:
    - redis

  routes:
  - match: HostSNI(`*`)
    services:
    - name: redis
      port: 6379

So all I need now is a way to define an “entrypoint”, and there does not seem to be a Kubernetes object for this. All I found are “Endpoints”, and those seem to be a different thing.

It may be possible to configure the Kubernetes cluster to play nice with “public-Redis”, but at this point I would reconsider using Kubernetes for exposing Redis “publicly” and go for an alternative approach (Docker, maybe Docker Swarm or good old VMs).

Depending on your requirements, it may even be simpler to deploy the application using the Redis database inside the cluster.

This is right now not really an option for multiple reasons:

a) I’d like to play around to mimic real-life scenarios. If you have an established application running on system X, you often don’t want to change it completely just because k8s is available. Think, for example, of augmenting an existing application with a log collection solution.

b) In my case, I tried to provide a Redis database for a PCP (Performance Co-Pilot suite) instance running on one of my servers, to save all the measurements. There is no use in running the whole PCP within the k8s cluster, as I want to run it on the system I would like to take the measurements from and just store the data elsewhere (in Redis).

c) Redis is used as an example only. Think of anything I’d like to expose directly. For example, to allow git push via SSH into a GitLab, you would need to expose port 22 to the outside of the k8s cluster (this is what I did in the Docker environment via the extracts given in my initial post).
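For comparison, the GitLab SSH case from the compose labels in my initial post would translate to roughly this IngressRouteTCP (a sketch only: it assumes a Traefik entrypoint named ssh has been defined and that a gitlab-ssh Service exposing port 22 exists, both hypothetical here):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: gitlab-ssh
  namespace: default
spec:
  entryPoints:
  - ssh                  # must match an entrypoint in Traefik's static config
  routes:
  - match: HostSNI(`*`)
    services:
    - name: gitlab-ssh   # hypothetical Service exposing port 22
      port: 22
```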

Thanks!

What I found is this:

ubuntu@ubuntu-server:~/export/ingress.networking.k8s.io$ kubectl get pods -n traefik
NAME                               READY   STATUS    RESTARTS       AGE
traefik-ingress-controller-5zd7v   1/1     Running   17 (11d ago)   181d
ubuntu@ubuntu-server:~/export/ingress.networking.k8s.io$ kubectl describe pods -n traefik traefik-ingress-controller-5zd7v
Name:         traefik-ingress-controller-5zd7v
Namespace:    traefik
Priority:     0
Node:         ubuntu-server/192.168.0.93
Start Time:   Fri, 10 Dec 2021 15:32:32 +0000
Labels:       controller-revision-hash=5b7db6ccc6
              k8s-app=traefik-ingress-lb
              name=traefik-ingress-lb
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           192.168.0.93
IPs:
  IP:           192.168.0.93
Controlled By:  DaemonSet/traefik-ingress-controller
Containers:
  traefik-ingress-lb:
    Container ID:  containerd://ab6f2f381e581f77c8d3731a2700ca566d20970e18dead7a644e3c0578469f98
    Image:         traefik:2.3
    Image ID:      docker.io/library/traefik@sha256:0181e35c5af98f7f30fb391f91a6dbd281a90d7cf971e9909e26afd4ea923251
    Port:          8080/TCP
    Host Port:     8080/TCP
    Args:
      --providers.kubernetesingress=true
      --providers.kubernetesingress.ingressendpoint.ip=127.0.0.1
      --log=true
      --log.level=INFO
      --accesslog=true
      --accesslog.filepath=/dev/stdout
      --accesslog.format=json
      --entrypoints.web.address=:8080
      --entrypoints.websecure.address=:8443
    State:          Running
      Started:      Sun, 29 May 2022 13:28:08 +0000
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Thu, 19 May 2022 14:11:36 +0000
      Finished:     Sun, 29 May 2022 13:27:41 +0000
    Ready:          True
    Restart Count:  17
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2w2dd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-2w2dd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>

So it’s clear that two entrypoints (web, websecure) are defined as startup arguments for the Traefik pod (--entrypoints.web.address=:8080, --entrypoints.websecure.address=:8443). The question is: where is this defined? How can I change this? All I did to enable Traefik in microk8s was “microk8s enable traefik” and magic happened… How can I influence the configuration of Traefik itself? Also, I don’t get why I can access my services via the standard ports 80/443 instead of ports 8080 and 8443.

I solved my own problem. Here is my writeup:


Redis expose with traefik

“Install” the CRDs:

Copy/paste the CRD definitions from the Traefik documentation, write them to a yaml file and apply via

kubectl apply -f <file>

It should now have generated the necessary entries:

ubuntu@ubuntu-server:~$ kubectl get crd | grep traefik
ingressroutes.traefik.containo.us                     2022-06-09T14:01:32Z
ingressroutetcps.traefik.containo.us                  2022-06-09T14:01:33Z
ingressrouteudps.traefik.containo.us                  2022-06-09T14:01:33Z
middlewares.traefik.containo.us                       2022-06-09T14:01:33Z
middlewaretcps.traefik.containo.us                    2022-06-09T14:01:33Z
serverstransports.traefik.containo.us                 2022-06-09T14:01:33Z
tlsoptions.traefik.containo.us                        2022-06-09T14:01:33Z
tlsstores.traefik.containo.us                         2022-06-09T14:01:33Z
traefikservices.traefik.containo.us                   2022-06-09T14:01:33Z

Install redis via helm

(could be any service of course that you want to expose to the outside)

microk8s helm repo add bitnami https://charts.bitnami.com/bitnami
microk8s helm3 install micro-redis bitnami/redis

(wait…)

this will create 3 services for Redis listening on port 6379:

kubectl get svc
...
micro-redis-headless        ClusterIP      None             <none>        6379/TCP         23h
micro-redis-replicas        ClusterIP      10.152.183.216   <none>        6379/TCP         23h
micro-redis-master          ClusterIP      10.152.183.178   <none>        6379/TCP         23h
...

We need the master service for external interaction

**Configuring traefik**

If traefik is not enabled yet, install it via

microk8s enable traefik

All the relevant stuff is in namespace traefik, e.g. kubectl get -n traefik pods (ingress, services, daemonset, …)

First we need to create another entrypoint for Traefik on port 6379. For this we need to edit the DaemonSet:

kubectl edit -n traefik daemonset traefik-ingress-controller

and add the following lines to spec → template → spec → containers → args:

args:

...
        - --providers.kubernetescrd=true
        - --providers.kubernetescrd.allowCrossNamespace=true
        - --entrypoints.redis.address=:6379
...

We DEFINE the entrypoint simply by naming it redis above. We could have also named it foobar; there is no direct correlation to the Redis services we run. It’s JUST the port.

Save

→ This will create a new entrypoint in Traefik for port 6379 and lets Traefik talk to the cluster via the kubernetescrd provider (at least that’s what I understood…)
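One thing to double-check (an assumption on my side, based on the 8080/8443 host ports visible in the pod description earlier): depending on how the DaemonSet publishes ports, you may also need a matching ports entry so that traffic arriving on the node’s port 6379 actually reaches the Traefik container:

```yaml
# hypothetical addition to spec -> template -> spec -> containers -> ports
ports:
- name: redis
  containerPort: 6379
  hostPort: 6379
  protocol: TCP
```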

Create an IngressRouteTCP object for our traffic:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
  namespace: default
spec:
  entryPoints:
  - redis
  routes:
  - match: HostSNI(`*`)
    services:
    - name: micro-redis-master
      port: 6379

Note here: entryPoints → redis (we just defined that), services → micro-redis-master (our service, listening on port 6379).
Also note that we need the match: HostSNI(`*`) for this to work.

Not sure if the following was autogenerated by some failed attempt of mine.

Apparently another service will also come up!

traefik-redis               LoadBalancer   10.152.183.154   <pending>     6379:30263/TCP   4h57m

which I didn’t write:

ubuntu@ubuntu-server:~$ kubectl get -o yaml svc traefik-redis
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-redis","namespace":"default"},"spec":{"ports":[{"name":"redis","port":6379,"protocol":"TCP","targetPort":6379}],"selector":{"app":"traefik"},"type":"LoadBalancer"}}
  creationTimestamp: "2022-06-10T07:57:24Z"
  name: traefik-redis
  namespace: default
  resourceVersion: "31246489"
  selfLink: /api/v1/namespaces/default/services/traefik-redis
  uid: e754e107-443a-4a91-b4c3-1c2ee3172d3d
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.152.183.154
  clusterIPs:
  - 10.152.183.154
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: redis
    nodePort: 30263
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: traefik
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

Watch the log of the traefik pod via

kubectl logs -n traefik traefik-ingress-controller-rqgr5

I have lots of errors in the log:

E0610 12:52:49.094091       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:53:46.907435       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:54:45.593132       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:55:32.514583       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:56:19.071260       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:57:01.190803       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" serviceName=whoamiudp namespace=default providerName=kubernetescrd ingress=ingressrouteudp.crd servicePort=8080
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" providerName=kubernetescrd ingress=ingressrouteudp.crd serviceName=whoamiudp servicePort=8080 namespace=default
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" providerName=kubernetescrd ingress=ingressrouteudp.crd namespace=default serviceName=whoamiudp servicePort=8080
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" serviceName=whoamiudp servicePort=8080 providerName=kubernetescrd ingress=ingressrouteudp.crd namespace=default
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" providerName=kubernetescrd ingress=ingressrouteudp.crd serviceName=whoamiudp servicePort=8080 namespace=default
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" namespace=default serviceName=whoamiudp servicePort=8080 providerName=kubernetescrd ingress=ingressrouteudp.crd
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" serviceName=whoamiudp servicePort=8080 providerName=kubernetescrd ingress=ingressrouteudp.crd namespace=default
time="2022-06-10T12:57:13Z" level=error msg="Cannot create service: service not found" ingress=ingressrouteudp.crd namespace=default serviceName=whoamiudp servicePort=8080 providerName=kubernetescrd
E0610 12:57:35.114419       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
E0610 12:58:34.697126       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)

I presume they are from the “resources” yaml file (also from the Traefik documentation) which I had applied earlier on as well.

I had to manually find and remove all those resources via kubectl delete
e.g.

kubectl delete ingressrouteudp ingressrouteudp.crd

It was here that I spotted an error saying a “service redis” was not found. This pointed to the IngressRouteTCP definition, where I initially only had “redis” as the service name and not “micro-redis-master”.