Split Kubernetes configuration file

Hi everyone,

I’m using an ingress controller on my cluster, and I deploy to the cluster with GitHub Actions.
My only question is: how can I split the ingress configuration across multiple Kubernetes deployment configurations? Right now I’m replicating the Kubernetes ingress configuration in every GitHub repository.

Thanks in advance

Cluster information:

Kubernetes version: 1.19.3
Cloud being used: DigitalOcean public cloud
Installation method: automated
Host OS: Linux
CNI and version:
CRI and version:


You shouldn’t have to replicate it - if you’re using a shared ingress controller, each app should only need its specific ingress config.

Then I’m probably doing something wrong, because every time a repository deploys, the ingress configuration is reset to what that repository defines (so, only one ingress path). I’m replicating the following configuration in each repository’s deployment YAML:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-ingress-nginx-controller
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    service.beta.kubernetes.io/do-loadbalancer-protocol: https
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  tls:
  - hosts:
    - ***.***.org
    secretName: chain-change-tls
  rules:
  - host: ***.***.org
    http:
      paths:
      - backend:
          serviceName: photos-front-service
          servicePort: 34000

I have no idea how GitLab’s deploys work, but that should be okay.
Are they all in the same namespace and stomping on each other?

It’s GitHub Actions:

The workaround I’ve applied is to replicate every host and path rule in each repository that deploys to the Kubernetes cluster on DigitalOcean, in order to avoid overrides. (If I only define the host and path that the current repository needs, then when that repository deploys, the other ingress definitions already applied to the cluster are overridden/erased.)

By namespace, do you mean something in the Kubernetes cluster, or is it a GitHub concept?

This depends on how your ingress controller works, but you should only need to define the hosts/paths for each deploy.

It is the job of the ingress controller to either provision a load-balancer per Ingress instance (cloud IaaS style) or to merge Ingress instances into a single config (nginx style). It’s possible to merge into N configs, maybe per-namespace, but that’s not what most controllers do today.

Which controller are you using?
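
If you’re not sure, a quick way to check is to list the controller’s pods and any IngressClass objects. This assumes the common ingress-nginx labels, so adjust the label selector and namespace to your install:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
kubectl get ingressclass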

Not sure about the controller because it’s the one provided by DigitalOcean Kubernetes. But how can I split the hosts/paths for each deployment action?

If I understand your need, simply defining multiple Ingress resources should do what you need, but I have no knowledge of how the DO ingress controller works.

Can you please share an example of defining multiple Ingress resources? I’ll give it a try on DO.

What I mean is literally 2 different Ingress resources. If the controller merges them (as in the nginx impl) you can use the same hostname on both. If the controller provisions a different IP for each Ingress, then the same hostname cannot work.

Is it a requirement that the different instances use the same hostname (e.g. example.com/foo and example.com/bar) or do they use different hosts (foo.example.com, bar.example.com) ?

Looking at the YAML you posted it’s specifying nginx, so you SHOULD be able to merge. But it’s also specifying DO annotations, so I really do not know. The simple case to try is something like:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 8080

I tried as you suggested, but the ingress configuration was again updated to include only the host defined in the last run. The GitHub action showed that the ingress was configured:
[screenshot of the GitHub Actions output]

I defined it as:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-ingress-nginx-controller
spec:
  tls:
  - hosts:
    - descobrir.org
    secretName: chain-change-tls
  rules:
  - host: descobrir.org
    http:
      paths:
      - backend:
          serviceName: descobrir-web-service
          servicePort: 36000

Any ideas?

The email-to-discuss interface is poor, and it looks like some of my responses got corrupted.

Can you show both YAMLs you loaded? Are they the same host or different hosts? What I can’t tell is if your ingress controller is broken or whether it is doing what you said, which is not the same as doing what you meant.

It looks like you are using the nginx controller? That’s pretty well debugged at this point, so it doesn’t seem likely that such a fundamental bug is still hiding. You may want to check its logs, too.
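
To check the logs, something like this should work; the namespace and deployment name here are assumptions based on the name in your YAML, so adjust them to whatever kubectl get deploy --all-namespaces shows:

kubectl logs --namespace ingress-nginx deployment/nginx-ingress-ingress-nginx-controller --tail=100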

For example: if you do this, it should give you 2 hosts:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - backend:
          serviceName: foo-service
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bar
spec:
  rules:
  - host: bar.example.com
    http:
      paths:
      - backend:
          serviceName: bar-service
          servicePort: 8080

But if you do this, it should give you a single host with 2 paths:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bar
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: bar-service
          servicePort: 8080

From your example, do you mean that there should be a separate ingress config? I see one with the Ingress metadata name foo and another named bar: should a new Ingress exist for each service?

In general I would say no, one Ingress per-service is NOT what you want, if those services are related. But I understood from your earlier messages you wanted that?

Here’s what Ingress guarantees: There will be an IP address listening for the specified hosts and paths.

Ingress does not guarantee that any 2 hostnames will get different IPs or the same IP - that is left to the implementation. It does not guarantee that 2 ingresses which use the same hostname will get the same IP - that is left to the implementation.

If you know your implementation, you can make inferences about how it will react.
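
A concrete way to see what your controller actually did is to list the Ingress resources and their assigned addresses:

kubectl get ingress --all-namespaces -o wide
kubectl describe ingress <name>

If two Ingresses report the same address, your controller is merging them behind one IP.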

Maybe we should go back to the beginning.

Show me the YAML you are using with hosts and paths and namespaces specified.

First, thank you for your patience.

What I’m trying to do is automate the deployment of a container to DigitalOcean Kubernetes. The deployment itself works correctly; it’s just the ingress configuration that I can’t split.
For the first GitHub repository, I have the following YAML:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-ingress-nginx-controller
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    service.beta.kubernetes.io/do-loadbalancer-protocol: https
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  tls:
  - hosts:
    - photos-api.descobrir.org
    secretName: chain-change-tls
  rules:
  - host: photos-api.descobrir.org
    http:
      paths:
      - backend:
          serviceName: photos-api-service
          servicePort: 32000

And for a second repository on GitHub, I have something similar:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-ingress-nginx-controller
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    service.beta.kubernetes.io/do-loadbalancer-protocol: https
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  tls:
  - hosts:
    - carregar.descobrir.org
    secretName: chain-change-tls
  rules:
  - host: carregar.descobrir.org
    http:
      paths:
      - backend:
          serviceName: photos-front-service
          servicePort: 34000

What is happening right now is: if I run the second repository’s deployment, the ingress config created by the first repository’s deployment is replaced by the second one’s.

Are these in the same Namespace? Both Ingresses have the same name, “nginx-ingress-ingress-nginx-controller”, which means you are literally telling the system to replace one with the other.

You can either rename them (I’d suggest a more meaningful name 🙂) so you have 2 distinct Ingress objects, or else create a single Ingress YAML that is independent of either repository.

Both ways should work on any Ingress implementation. On nginx, the 2 Ingress resources will get the same IP. On something like GCP or AWS they would get 2 different IPs, but they are using different hostnames, so it is OK.
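
To make that concrete, here is a sketch based on the two YAMLs you posted; only metadata.name really changes. (I’ve also dropped the service.beta.kubernetes.io/do-loadbalancer-* annotations here, since those are meant for the controller’s Service rather than an Ingress; double-check that against your DO setup.)

In the first repository:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: photos-api
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - photos-api.descobrir.org
    secretName: chain-change-tls
  rules:
  - host: photos-api.descobrir.org
    http:
      paths:
      - backend:
          serviceName: photos-api-service
          servicePort: 32000

And in the second repository, the same shape with its own name and host:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: photos-front
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - carregar.descobrir.org
    secretName: chain-change-tls
  rules:
  - host: carregar.descobrir.org
    http:
      paths:
      - backend:
          serviceName: photos-front-service
          servicePort: 34000

Since the names differ, kubectl apply from one repo no longer overwrites the object created by the other.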

Yes they are in the same namespace. I was trying to avoid creating multiple ingress resources.

So, in order to solve this, I have to create the ingress controller outside the repositories, but then how can I set the YAML per repository to use the already created ingress controller?

I was trying to avoid creating multiple ingress resources.

Why? You’re swimming against the stream.

There isn’t currently an “additive” resource, which I think is what you really want. The newer Gateway API (which is very ALPHA!!) actually factors more cleanly for this sort of pattern. Hopefully that will be GA some time in 2021.

In the mean time, you have to decide:

a) The Ingress resources are “vertical” to each service repo

OR

b) The Ingress resource is “horizontal” across all the service repos

The last alternative might be to lean into a “patch” model, where each repo carries a patch against an Ingress instance. Rather than kubectl apply you would use kubectl patch. You’d have to deal with the “empty” ingress resource as a separate thing, and things like removing a patch when the repo gets deleted would be your responsibility. It sounds “fun” (as in “a good way to waste time”) to me, but not something I’d really recommend.
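
A rough sketch of the patch model, just to illustrate; shared-ingress is a hypothetical pre-created Ingress, and the JSON-patch path assumes spec.rules already exists as an array:

kubectl patch ingress shared-ingress --type=json -p='[
  {"op": "add", "path": "/spec/rules/-", "value": {
    "host": "carregar.descobrir.org",
    "http": {"paths": [{"backend": {
      "serviceName": "photos-front-service",
      "servicePort": 34000}}]}}}
]'

Removing a rule when a repo goes away would need a matching remove op against the right array index, which is exactly the kind of bookkeeping that makes this fragile.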

Great explanation. I wasn’t aware that the current Ingress API isn’t designed to work the way the Gateway API will.

In that case, I will go with the vertical solution: 1 resource per repository because they are different containers.

Thank you so much for your great help