Downward API is not showing what it is supposed to show

Cluster information:

Kubernetes version:
Cloud being used: AWS
Installation method: kOps
Host OS: Ubuntu 22.04 LTS
CNI and version: NA
CRI and version: Containerd

I am facing a problem with the Downward API. I am trying to expose some node metadata to pods by using the Downward API in a Deployment spec. Here is how I reproduce it:

I create this nginx Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

This works and the pod comes up. Next, I try to expose the node name inside the pod, so I modify the pod template in the Deployment spec:

spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName

Now I apply this manifest.
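For example (the manifest filename here is illustrative):

$ kubectl apply -f nginx-deployment.yaml
$ kubectl describe pod -l app=nginx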

In the kubectl describe pod output, I see:

Containers:
  nginx:
    Container ID:   containerd://7e869f4e79f1b4ed22e66c3f3de3b66ee9826d56bddf388d8ea5aac631e2c9b1
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 07 Nov 2024 12:25:02 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      NODE_NAME:   (v1:spec.nodeName)

Note the NODE_NAME: (v1:spec.nodeName) line; it shows the field reference rather than an actual node name.

I also see in the deployment -o yaml output that apiVersion: v1 is being added to the fieldRef in the spec. Is that expected? kubectl explain tells me it is, but I don't see anything like that in the documentation or blog articles.
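For reference, a command along these lines shows that field (the explain path is written out here for illustration):

$ kubectl explain deployment.spec.template.spec.containers.env.valueFrom.fieldRef.apiVersion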

My end goal is to mutate pods with Kyverno to inject some node metadata and AZ information as environment variables. I am facing the same problem there, so I reproduced it above without involving Kyverno.
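For completeness, the Kyverno policy looks roughly like this (a simplified sketch; the policy and rule names are illustrative):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-node-name
spec:
  rules:
  - name: add-node-name-env
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          # the (name): "*" anchor applies the patch to every container in the pod
          - (name): "*"
            env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName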

Thanks,

(v1:spec.nodeName) is what kubectl describe shows because the node name is substituted “just in time” by the kubelet when the container starts; the API object stores only the fieldRef, not the resolved value. And yes, the apiVersion: v1 added to the fieldRef is expected: the API server defaults it to v1 when it is omitted.

From inside the pod, the variable resolves correctly:

$ k --context=diy get pod hostnames-78477fc558-gh8l4 -o yaml | grep -A5 env
  - env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName

$ k --context=diy describe pod hostnames-78477fc558-gh8l4 | grep -A1 Env
    Environment:
      NODE_NAME:   (v1:spec.nodeName)

$ k --context=diy exec -ti hostnames-78477fc558-gh8l4 -c serve -- sh -c 'echo $NODE_NAME'
kubernetes-minion-group-mtwx
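
The same applies to the Kyverno goal: at admission time, when Kyverno mutates the pod, the pod has not been scheduled yet, so the policy cannot resolve the node name itself. It can only inject the fieldRef, which the kubelet resolves when the container starts. To confirm a mutated pod carries the reference, something like this works (pod name is a placeholder):

$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].env}'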