Allocatable and Capacity resources like nvidia.com/gpu are always the same

Cluster information:
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.32.1
Kubernetes version: v1.32.1
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 22.04.5 LTS

When I ran “kubectl describe node <node-name>”, I noticed the following output:

Capacity:
  cpu:                128
  ephemeral-storage:  459850824Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             528009664Ki
  nvidia.com/gpu:     8
  pods:               110
Allocatable:
  cpu:                128
  ephemeral-storage:  423798518697
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             527907264Ki
  nvidia.com/gpu:     8
  pods:               110

Actually, 6 GPUs are already in use, as shown at the end of the describe node output:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                1850m (1%)  2 (1%)
  memory             368Mi (0%)  1364Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  nvidia.com/gpu     6           6
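
For reference, here is roughly how the per-pod GPU requests behind that “6” can be cross-checked (a sketch only; <node-name> is a placeholder for my node, and pods without a GPU request show up as <none>):

  # List every pod scheduled on the node together with its nvidia.com/gpu request
  kubectl get pods -A --field-selector spec.nodeName=<node-name> \
    -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,GPU_REQ:.spec.containers[*].resources.requests.nvidia\.com/gpu'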

My question is: why isn’t the Allocatable value for nvidia.com/gpu calculated as
Capacity GPUs minus Allocated GPUs?
In my case the allocatable nvidia.com/gpu would then be 2 (8 - 6 = 2).
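
To make the comparison concrete, this is a rough sketch of the two numbers I am looking at (custom-columns is just one way to read the node status fields; <node-name> is a placeholder):

  # Capacity and Allocatable as the kubelet reports them; on my node both show 8,
  # i.e. pod requests are not subtracted from Allocatable
  kubectl get node <node-name> \
    -o custom-columns='NAME:.metadata.name,GPU_CAPACITY:.status.capacity.nvidia\.com/gpu,GPU_ALLOCATABLE:.status.allocatable.nvidia\.com/gpu'

  # What I expected instead:
  #   "free" GPUs = Allocatable (8) - sum of pod requests on the node (6) = 2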

I have read the doc

https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/

But it didn’t answer my question.
