Kubectl describe nodes "Non-terminated pods" displays inaccurate requests and limits

Cluster information:

Kubernetes version: 1.33.2
Cloud being used: Azure
Installation method: AKS
Host OS: Ubuntu 22.04
CNI and version: Azure CNI
CRI and version: containerd

I have a question about the output of kubectl describe nodes. Below is a snippet showing the CPU and memory requests and limits that the node reports for one pod:

Non-terminated Pods:          (17 in total)
  Namespace                   Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---
  newrelic-infrastructure     nri-bundle-newrelic-prometheus-agent-0              2 (25%)       4 (50%)     4Gi (14%)        4Gi (14%)      3d

And these are the actual requests and limits of the running container in the pod:

limits:
  cpu: "2"
  ephemeral-storage: 2Gi
  memory: 2Gi
requests:
  cpu: "1"
  ephemeral-storage: 2Gi
  memory: 2Gi

The pod also has an initContainer with identical requests and limits:

limits:
  cpu: "2"
  ephemeral-storage: 2Gi
  memory: 2Gi
requests:
  cpu: "1"
  ephemeral-storage: 2Gi
  memory: 2Gi

My question is: why does the node report this pod's requests and limits as the sum of the values for the init container and the running container?

If I reduce the init container's requests and limits, the values reported by kubectl describe nodes for this pod drop by the same amount.

It seems to me that the node output should only include the values for the running containers, not for init containers that have already completed. The heading says these are non-terminated pods, which implies to me that terminated containers inside a non-terminated pod should not be counted.
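For context, my understanding of how a pod's effective request is supposed to be computed, per the init-container documentation and assuming this is a plain init container rather than a sidecar (restartPolicy: Always), can be sketched as:

```python
# Minimal sketch (not Kubernetes source) of the documented rule for a pod's
# effective resource request, ignoring sidecar-style init containers,
# which are accounted differently:
#   effective = max(sum of app container requests, largest init container request)

def effective_request(app_requests, init_requests):
    """Effective pod-level request for one resource, per the documented rule."""
    return max(sum(app_requests), max(init_requests, default=0))

# Values from the pod above (CPU in whole cores, memory in Gi):
print(effective_request([1], [1]))  # CPU: 1, yet the node reports 2
print(effective_request([2], [2]))  # memory: 2, yet the node reports 4
```

Under that rule I would expect this pod to account for 1 CPU / 2Gi, not the summed 2 CPU / 4Gi that the node reports.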

Is this working as designed, or something that I should report as a bug?