Why can't the I/O bandwidth of a Local Persistent Volume in Kubernetes be adjusted via io.max of cgroup v2?

Cluster information:

Kubernetes version: 1.20.11
Cloud being used: bare-metal
Host OS: Ubuntu 18.04.6 LTS

In the case of Docker containers, the I/O bandwidth of the volume can be throttled by changing the rbps and wbps values of io.max. A Local Persistent Volume also uses the node's device, so I'm curious why the device's I/O bandwidth is not controlled by the cgroup in that case.
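For context, io.max identifies the device by its MAJ:MIN pair (259:0 above, a typical NVMe major). A quick sketch of how to look those numbers up with stat, which prints them in hex; /dev/null is used below only because it exists on every Linux box — substitute the block device backing your PV (e.g. /dev/nvme0n1):

```shell
# stat prints the device numbers of a device node in hex:
#   %t = major, %T = minor
dev=/dev/null               # stand-in device; use your PV's disk here
maj=$(stat -c '%t' "$dev")  # hex major
min=$(stat -c '%T' "$dev")  # hex minor
# Convert to the decimal MAJ:MIN form that io.max expects:
printf '%d:%d\n' "0x$maj" "0x$min"   # /dev/null is 1:3 on Linux
```

For a real disk, `lsblk -o NAME,MAJ:MIN` gives the same pair directly.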

For Docker, I entered the following commands:

$ pwd
/sys/fs/cgroup/system.slice/docker-08733e29cf874cb159dc1e42f87286ecb38f6caedf523c784f276719e79c07ca.scope

$ sudo sh -c "echo '259:0 wbps=2097152' > io.max"

$ cat io.stat
259:0 rbytes=0 wbytes=763287830528 rios=0 wios=2471104 dbytes=0 dios=0

For Kubernetes, I entered the following commands:

$ pwd
/sys/fs/cgroup/kubepods/besteffort/pod9b5be446-332f-4f3b-8ca4-433cca64981b

$ sudo sh -c "echo '259:0 wbps=2097152' > io.max"

$ cat io.stat
(no output — io.stat is empty)
In both cases the io controller was enabled in cgroup.subtree_control.
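To be precise about what "enabled" means here: in cgroup v2 the io controller must appear in cgroup.subtree_control of every ancestor of the pod cgroup for io.max/io.stat to work at that level. The sketch below illustrates that check against a throwaway mock tree built with mktemp (the real paths under /sys/fs/cgroup vary per node); on the node itself you would run the same grep against the actual ancestors of the pod directory, e.g. /sys/fs/cgroup and /sys/fs/cgroup/kubepods:

```shell
# Mock two levels of a cgroup v2 hierarchy to show the check.
# Controllers are written as "+io" but read back as plain "io".
root=$(mktemp -d)
mkdir -p "$root/kubepods/besteffort"
printf 'cpu io memory\n' > "$root/cgroup.subtree_control"
printf 'io\n'            > "$root/kubepods/cgroup.subtree_control"

# The actual check: is "io" delegated at every level above the pod?
for d in "$root" "$root/kubepods"; do
  if grep -qw io "$d/cgroup.subtree_control"; then
    echo "io enabled in $d"
  else
    echo "io MISSING in $d"
  fi
done
```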