Why is my apiserver QPS so high when everything is OK?

My cluster’s apiserver seems to be handling millions of requests per second. What’s wrong?
promql:

topk(10, sum(irate(apiserver_request_total[5m])) without (component, job))

response:

{code="200", contentType="application/vnd.kubernetes.protobuf", resource="persistentvolumes", scope="cluster", verb="GET", version="v1"} 11731218.203982122
{code="200", contentType="application/vnd.kubernetes.protobuf", resource="persistentvolumeclaims", scope="namespace", verb="GET", version="v1"} 11639428.28118651
{code="200", contentType="application/json", group="argoproj.io", resource="applications", scope="namespace", verb="PATCH", version="v1alpha1"} 11479909.256607212
{code="200", contentType="application/json", resource="configmaps", scope="namespace", verb="GET", version="v1"} 9541565.014221862
{code="200", contentType="application/json", resource="pods", scope="namespace", verb="LIST", version="v1"} 9079504.063388867
{code="200", contentType="application/json", group="apps", resource="deployments", scope="namespace", verb="LIST", version="v1"} 8947495.73344169
{code="200", contentType="application/vnd.kubernetes.protobuf", group="coordination.k8s.io", resource="leases", scope="namespace", verb="GET", version="v1"} 6161645.875660301
{code="200", contentType="application/json", group="apps", resource="statefulsets", scope="namespace", verb="LIST", version="v1"} 5423756.281734149
{code="200", contentType="application/json", group="apps", resource="daemonsets", scope="namespace", verb="LIST", version="v1"} 5423471.422991798
{code="0", contentType="application/vnd.kubernetes.protobuf;stream=watch", resource="secrets", scope="namespace", verb="WATCH", version="v1"} 4798630.841121496

As you can see, the QPS of a single series is more than 10 million. It’s unbelievable!
There are only 21 nodes (3 of them are masters, which means 3 apiserver replicas) and 2000+ pods in my cluster.
And each apiserver replica uses no more than 1 CPU core and 4Gi of memory. They can't possibly be serving that many requests. Who can tell me why?
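For scale, `irate()` takes the two most recent samples inside the window and divides the counter delta by the time delta. Assuming a typical 15-second scrape interval (an assumption, check your own Prometheus `scrape_interval`), a minimal sketch of what a 10M+ QPS reading would imply:

```python
# Sketch of what irate() computes: per-second rate from the two most
# recent counter samples in the range window (counter resets ignored here).
def irate(prev_value, prev_ts, last_value, last_ts):
    """Per-second rate between the last two samples of a counter."""
    return (last_value - prev_value) / (last_ts - prev_ts)

scrape_interval = 15           # seconds -- assumed, not taken from the post
reported_qps = 11_731_218      # top series from the query above

# For irate() to report ~11.7M req/s, the counter would have to grow by
# roughly 176 million between two consecutive scrapes:
counter_growth = reported_qps * scrape_interval
print(f"counter must grow by ~{counter_growth:,} per scrape")

# By contrast, a counter that grows by 3,000 over one 15s scrape interval
# (a much more plausible load) yields only 200 req/s:
plausible = irate(1_000_000, 0, 1_003_000, scrape_interval)
print(plausible)  # 200.0
```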

$ kubectl get node | wc -l
21
$ kubectl get pod --all-namespaces | wc -l
2337
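A small caveat on the counts above: `kubectl get` prints a header row, so `wc -l` overcounts by one (`kubectl get node --no-headers | wc -l` avoids that). Either way, a rough upper bound on expected load for a cluster this size comes out orders of magnitude below 10 million QPS. A back-of-envelope sketch, where every per-client rate is an assumption rather than a measurement:

```python
# Rough upper bound on expected apiserver QPS for this cluster.
# All per-client rates below are assumed ballpark figures, not measurements.
nodes = 21 - 1    # `kubectl get node | wc -l` output minus the header row
pods = 2337 - 1   # likewise for pods

kubelet_qps_per_node = 5    # assumed: node status, lease renewals, pod syncs
per_pod_watch_churn = 0.1   # assumed: watch events and status updates per pod
controller_plane_qps = 100  # assumed: controllers, schedulers, operators

expected = (nodes * kubelet_qps_per_node
            + pods * per_pod_watch_churn
            + controller_plane_qps)
print(f"rough expected QPS: ~{expected:.0f}")  # hundreds, not millions
```

Even if these assumed rates are off by 10x, the result stays in the low thousands, nowhere near the 10M+ that the query reports.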

Cluster information:

Kubernetes version: v1.18.12
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 20.04.2 LTS
CNI and version: flannel 0.11.0
CRI and version: containerd 1.2.6

Can anyone help me?

I found the source code that records this metric at: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go#L495-L530
But it looks correct to me.