Two services on the same deployment and port (URL hostname routing on a default Laravel application)

Hello,

I have a question about a practice I’m doing to learn all the concepts about Kubernetes,

I have a Laravel container responding to two hostnames, let’s say

app.domain.com and api.domain.com

Laravel handles this by matching the request hostname against the URLs you define in APP_URL and API_URL in your .env file (dotenv). All requests reach the container through port 80, so there is only a single port listening in the container, and the hostname alone determines the kind of request (either web or API).

This works well locally, and it seems to be default Laravel behaviour AFAIK.
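For reference, the hostname-related part of my .env looks roughly like this (the exact values here are placeholders):

APP_URL=http://app.domain.com
API_URL=http://api.domain.com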

So I’ve deployed this container in a K8s “deployment” with containerPort=80.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: app
    tier: backend
    track: latest
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      tier: backend
      track: latest
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: app
        tier: backend
        track: latest
    spec:
      containers:
        - image: (omitted)
          imagePullPolicy: IfNotPresent
          name: app
          resources: {}
          ports:
            - name: http
              containerPort: 80

Then I created two Services: one named app, exposing port 80 and targeting the container’s port 80, and another named api, exposing port 90 and also targeting the container’s port 80. I think it’s easier to see in the YAML.

This is the first Service, serving the requests for app.domain.com (see the ingress below):

apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: default
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80        # Port exposed by the Service
      targetPort: 80  # Port on the container
  selector:
    app: app
    tier: backend
    track: latest
  type: NodePort

And the second one, serving the requests for api.domain.com:90 (see the ingress below):

apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: default
spec:
  ports:
    - name: http
      protocol: TCP
      port: 90        # Port exposed by the Service
      targetPort: 80  # Port on the container
  selector:
    app: app
    tier: backend
    track: latest
  type: NodePort

I can access both of them fine from the internal cluster network and see the different responses depending on the hostname used. I successfully set up a pod (dnsutils), edited its internal /etc/hosts, and managed to connect to both

app.domain.com

and

api.domain.com:90

and got a successful response in both cases. Not only a response: the returned content matched the website or the API responses I expected, so all good so far.
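Roughly, that in-cluster test looked like this (a sketch from memory; the exact commands may have differed a bit, it assumes the pod is simply named dnsutils, and the IPs are the Service cluster IPs shown in the “k get svc” output further down):

$ kubectl exec -it dnsutils -- sh
# echo "10.96.58.210  app.domain.com" >> /etc/hosts
# echo "10.96.109.126 api.domain.com" >> /etc/hosts
# curl http://app.domain.com        (returns the web content)
# curl http://api.domain.com:90     (returns the API content)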

Then I created two Ingresses, one for each Service, each pointing to its Service using the hostname and the servicePort defined above:

Ingress for the app:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-backend
  namespace: default
  labels:
    app: app
    tier: backend
    track: latest
    site: web
  annotations:
    nginx.ingress.kubernetes.io/limit-connections: "8"

spec:
  rules:
  - host: app.domain.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 80

Ingress for the api:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-backend-api
  namespace: default
  labels:
    app: app
    tier: backend
    track: latest  
    site: api
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, DELETE, PATCH, HEAD, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://app.domain.com"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/limit-connections: "8"

spec:
  rules:
  - host: api.domain.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 90

Then I started testing this from my host machine. My cluster IP (the minikube VM) is 192.168.64.2, and I can resolve both hostnames through it:

$ nslookup app.domain.com 192.168.64.2
Server:         192.168.64.2
Address:        192.168.64.2#53

Non-authoritative answer:
Name:   app.domain.com
Address: 192.168.64.2

$ nslookup api.domain.com 192.168.64.2
Server:         192.168.64.2
Address:        192.168.64.2#53

Non-authoritative answer:
Name:   api.domain.com
Address: 192.168.64.2

And here comes the thing: I CAN access the app service on port 80 directly from my host machine with this setup, but I cannot reach the api service on port 90; the connection is refused every single time.

I’m trying with

curl http://app.domain.com (OK)

and

curl http://api.domain.com:90 (REFUSED)

I also ran telnet against each of the hosts and ports, and against the cluster IP, to determine whether the port is open to the outside, and I got this:

telnet 192.168.64.2 80 (OK)

telnet 192.168.64.2 90 (REFUSED)

So port 90 is NOT accessible, but I don’t know how to change this in the ingress or elsewhere in the cluster configuration. I am using minikube on macOS, and I have enabled both the “ingress” and “ingress-dns” addons with the default configuration.

So, my question is: how can I access the api:90 service from outside the cluster?
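(As a side note, I suppose the api Service should still be reachable from the host directly through its NodePort, 30745 in the “k get svc” output further down, with something like the untested sketch below, but I’d prefer the traffic to go through the ingress on the standard ports.)

# untested sketch: hit the api Service via its NodePort on the minikube IP,
# passing the Host header so Laravel can match the API hostname
$ curl -H "Host: api.domain.com" http://192.168.64.2:30745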

I’ve also tried, unsuccessfully, to make both services accessible from a SINGLE ingress with both hosts defined in it, like so:

(i.e. modify the first ingress to use this spec section instead, and remove the second ingress):

spec:
  rules:
  - host: app.domain.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 80
  - host: api.domain.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 90

but this didn’t work either. :frowning:

Cluster information:

Kubernetes version:

minikube v1.6.2 on Darwin 10.15.3
Automatically selected the ‘hyperkit’ driver (alternates: [virtualbox])
Creating hyperkit VM (CPUs=2, Memory=8192MB, Disk=20000MB) …
Preparing Kubernetes v1.17.0 on Docker ‘19.03.5’ …
Pulling images …
Launching Kubernetes …
Waiting for cluster to come online …
Done! kubectl is now configured to use “minikube”
ingress was successfully enabled
ingress-dns was successfully enabled

Cloud being used: bare-metal (local to the host machine)

Installation method: brew install minikube

Host OS: macOS Catalina

CNI and version: ? (I don’t know how to get this)
CRI and version: ? (I don’t know how to get this)

I think this could also be useful. This is what I get from the different kubectl outputs (aliased to “k” here):

$ k get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
app       1/1     1            1           17h
mysql     1/1     1            1           17h

$ k get pods
NAME                    READY   STATUS    RESTARTS   AGE
app-7b9b86878f-sl8kw    1/1     Running   0          17h
mysql-5f96bbccd-mw4rf   1/1     Running   0          17h
redis-master-0          1/1     Running   0          17h

$ k get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
api              NodePort    10.96.109.126   <none>        90:30745/TCP   14h
app              NodePort    10.96.58.210    <none>        80:30744/TCP   15h
mysql            ClusterIP   10.96.188.125   <none>        3306/TCP       17h
redis-headless   ClusterIP   None            <none>        6379/TCP       17h
redis-master     ClusterIP   10.96.136.174   <none>        6379/TCP       17h
redis-slave      ClusterIP   10.96.158.150   <none>        6379/TCP       17h

$ k get ep
NAME             ENDPOINTS         AGE
api              172.17.0.10:80    14h
app              172.17.0.10:80    15h
mysql            172.17.0.8:3306   17h
redis-headless   172.17.0.7:6379   17h
redis-master     172.17.0.7:6379   17h
redis-slave      <none>            17h

$ k get ingress
NAME                      HOSTS            ADDRESS        PORTS   AGE
app-ingress-backend       app.domain.com   192.168.64.2   80      14h
app-ingress-backend-api   api.domain.com   192.168.64.2   80      12h

You should be able to do this with a single ingress resource but specify multiple paths.
See the docs here for an example of setting a path
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/

The idea would be /app goes to the app container and /api goes to the api container, or something similar.
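Something along these lines might be a starting point (an untested sketch reusing the service names and ports from your manifests; depending on how the app expects the paths, you may also need the nginx.ingress.kubernetes.io/rewrite-target annotation):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-backend
  namespace: default
spec:
  rules:
  - http:
      paths:
      # /app goes to the app Service, /api to the api Service
      - path: /app
        backend:
          serviceName: app
          servicePort: 80
      - path: /api
        backend:
          serviceName: api
          servicePort: 90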

Kind regards,
Stephen

Thanks for your reply!

I finally got it working, at least this part, and I did it by:

a) moving to a single Service and a single Ingress. Everything goes through port 80, and the container is the only place where the two flows are handled.

b) removing all the host: entries in the Ingress, and that was the BIG one: let all the traffic flow to the container, where it is properly resolved by matching the URL hostname to either a normal WEB request or an API request.

This worked for me, and it’s running fine now.

Here is the merged ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-backend
  namespace: default
  labels:
    app: app
    tier: backend
    track: latest
    site: web
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, DELETE, PATCH, HEAD, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://app.domain.com"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/limit-connections: "8"
spec:
  rules:
  # so, NO "host:" entries here!
  - http:
      paths:
      - backend:
          serviceName: app
          servicePort: 80

And then remove the second Service (the port 90 one) and we’re done, if I recall correctly.
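If I recall, removing it was just something like:

$ kubectl delete service api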

Hope it helps anyone else. :slight_smile: