Ingress controller response header

Cluster information:

Kubernetes version:
Cloud being used: AWS
Installation method:
Host OS (the client): Linux Ubuntu 22.04

Hello,
I would like to use a Kubernetes ingress controller and, from an external client, be able to see whether, for example, serviceB has more errors than the current serviceA.
For this I added two Ingresses, where one of them (ingressB) has the following configuration:

metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Canary: true";
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"

From a client, I would then like to loop for a while and compute the error rate with a curl command against the ingress controller:

curl -iH "Host: hostOfApp" AWS_ENDPOINT:80;
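As a rough illustration of the loop I have in mind (AWS_ENDPOINT and hostOfApp are the placeholders from the command above, and the request count is arbitrary), something like:

total=100
errors=0
for i in $(seq "$total"); do
  # -w '%{http_code}' prints only the HTTP status code of each request
  code=$(curl -s -o /dev/null -w '%{http_code}' -H 'Host: hostOfApp' http://AWS_ENDPOINT:80/)
  [ "$code" -ge 400 ] && errors=$((errors + 1))
done
echo "errors: $errors / $total"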

I have tried to follow this example:
kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/
It does let me add custom headers to the response.
But I would like a dynamic header that lets the client tell whether ingressA or ingressB was used, and in the end serviceA or serviceB.

I do not get any Canary header in the responses that go through ingressB (serviceB).

Does anyone have an idea how to do this?

Hi,
It seems that nginx does not populate the header value correctly; it always returns the name of ingress A / service A. One option would be to return the upstream address so you can check where the request landed.

It seems there is a way to get some info after all…
The lines below should be included in the ingress-nginx ConfigMap:

data:
  allow-snippet-annotations: "true"
  http-snippet: |
    map $proxy_alternative_upstream_name $upstream_service_name {
      "~^\w+-(.*)-\d+?" "$proxy_alternative_upstream_name";
      default "$proxy_upstream_name";
    }
  ...

and the lines below in the Ingress resource:

  kind: Ingress
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
        more_set_headers 'upstream_addr: $upstream_addr';
        more_set_headers 'upstream_service_name: $upstream_service_name';
    ...

The upstream_service_name header will contain the name of the Kubernetes Service that received the request.
The upstream_addr header will contain the IP of the pod that sent the response.
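As a sanity check, you can confirm that the http-snippet map was actually rendered into the controller's nginx.conf; the deployment name and namespace below are assumptions based on a default Helm install, adjust them to your setup:

# kubectl -n ingress exec deploy/ingress-nginx-controller -- grep -A 3 proxy_alternative_upstream_name /etc/nginx/nginx.conf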

Actually, I am not sure this corresponds to what I am looking for.
Let me rephrase my concern:
the idea is to use the nginx.ingress.kubernetes.io/canary-weight annotation, so the dispatch is done by the Nginx Ingress Controller and is not managed by the client.
From the snippets it is not clear to me whether you tested this with the canary annotation.
From the documentation
ingress-nginx/docs/user-guide/nginx-configuration/annotations.md at main · kubernetes/ingress-nginx · GitHub
I see that:

Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the canary ingress definition.

From my understanding, you cannot use the following annotations at the same time:

Yes, this has been tested with canary:

apiVersion: v1
data:
  allow-backend-server-header: "true"
  allow-snippet-annotations: "true"
  http-snippet: |
    map $proxy_alternative_upstream_name $upstream_service_name {
      "~^\w+-(.*)-\d+?" "$proxy_alternative_upstream_name";
      default "$proxy_upstream_name";
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress
  name: ingress-nginx-controller
  namespace: ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers 'upstream_addr: $upstream_addr';
      more_set_headers 'upstream_service_name: $upstream_service_name';
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: s1-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: secure-ingress.com
    http:
      paths:
      - backend:
          service:
            name: service1
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - secure-ingress.com
    secretName: secure-ingress-san
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: s2-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: secure-ingress.com
    http:
      paths:
      - backend:
          service:
            name: service2
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - secure-ingress.com
    secretName: secure-ingress-san
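The two Ingress manifests are applied in the usual way, for example (the file name here is just an example):

# kubectl apply -f canary-ingresses.yaml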

Basic test

# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service2-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service2-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service2-80
# curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k  -sS -D - -o /dev/null | grep upstream_service_name
upstream_service_name: default-service1-80
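To quantify the split instead of eyeballing individual requests, the same check can be run in a loop and aggregated (a sketch using the same curl flags as above):

for i in $(seq 100); do
  curl --resolve secure-ingress.com:443:172.20.172.2 https://secure-ingress.com -k -sS -D - -o /dev/null | grep upstream_service_name
done | sort | uniq -c
# prints a count per upstream, roughly 50/50 between default-service1-80 and default-service2-80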

Noice! Thanks a lot @fox-md, it works.
For completeness: I had to edit the Ingress controller YAML that I had downloaded locally, because the ConfigMap change was not picked up when it was declared alongside the Deployment and Service, so I modified the default ConfigMap of the Ingress Nginx Controller directly.
I also had to delete the Ingress Controller pod after applying the new configuration snippet.
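If deleting the pod by hand is inconvenient, a rollout restart of the controller Deployment should achieve the same reload (namespace and Deployment name assumed from the ConfigMap shown earlier):

# kubectl -n ingress rollout restart deployment ingress-nginx-controller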