Why does a GKE node have a high load average with no pods?

I have noticed that a VM node in GKE shows a high OS load average even though no pods are scheduled on it.
These are the steps I took:
1.- Create a pool with 1 node in my GKE cluster:

gcloud container node-pools create testpool --cluster MYCLUSTER --num-nodes=1 --machine-type=n1-standard-1
NAME      MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
testpool  n1-standard-1  100           1.14.10-gke.36

2.- Drain the node and check its status:
kubectl drain --ignore-daemonsets gke-MYCLUSTER-testpool-a84f3036-16lr

kubectl get nodes
gke-MYCLUSTER-testpool-a84f3036-16lr     Ready,SchedulingDisabled   <none>   2m3s   v1.14.10-gke.36

3.- Restart the machine, wait a few minutes, and run top:
gcloud compute ssh gke-MYCLUSTER-testpool-a84f3036-16lr
sudo reboot

gcloud compute ssh gke-MYCLUSTER-testpool-a84f3036-16lr
top

top - 11:46:34 up 3 min,  1 user,  load average: 1.24, 0.98, 0.44
Tasks: 104 total,   1 running, 103 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.1 us,  1.0 sy,  0.0 ni, 95.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   3697.9 total,   2071.3 free,    492.8 used,   1133.9 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   2964.2 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1066 root      20   0  895804  99900  65136 S   2.1   2.6   0:04.28 kubelet
   1786 root      20   0  417288  74176  11660 S   2.1   2.0   0:03.13 ruby
   1009 root      20   0  812868  97168  26456 S   1.0   2.6   0:09.17 dockerd
      1 root      20   0   99184   6960   4920 S   0.0   0.2   0:02.25 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kthreadd
      3 root      20   0       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H
      5 root      20   0       0      0      0 I   0.0   0.0   0:00.43 kworker/u2:0
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
      7 root      20   0       0      0      0 S   0.0   0.0   0:00.08 ksoftirqd/0
      8 root      20   0       0      0      0 I   0.0   0.0   0:00.20 rcu_sched
      9 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_bh
     10 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 migration/0
     11 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 watchdog/0
     12 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/0
     13 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kdevtmpfs
     14 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 netns
     15 root      20   0       0      0      0 S   0.0   0.0   0:00.00 khungtaskd
     16 root      20   0       0      0      0 S   0.0   0.0   0:00.00 oom_reaper
     17 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 writeback

That is a load average of about 1.25 on a 1 vCPU / 3.75 GB RAM node, with no pods.
Why?
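For what it's worth, the Linux load average counts not only runnable tasks but also tasks in uninterruptible (D-state) sleep, typically blocked on I/O, so high load with mostly idle CPU can come from blocked processes. A small diagnostic sketch I could run on the node (assumes standard Linux /proc and procps tools, nothing GKE-specific):

```shell
#!/bin/sh
# 1/5/15-minute load averages, plus runnable/total task counts
cat /proc/loadavg

# Tasks currently runnable (R) or in uninterruptible sleep (D);
# D-state tasks raise the load average without consuming CPU time
ps -eo state,pid,comm | awk '$1 == "R" || $1 == "D"'
```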

The problem persists over time: an hour later the load average is still around 1 …
So is this a bug in the load-average computation of the GKE node Linux images, or is it a real problem that could hurt the performance of my pods?

For comparison, if I run top on my home Kubernetes cluster of 3 Debian VM nodes, which is actually running pods with Jira and Zammad, the load average is only 0.2 to 0.4.
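One caveat when comparing nodes this way: load average is meaningful relative to CPU count, since a load of 1.0 saturates a 1-vCPU node but is light on a 4-vCPU node. A quick sketch to normalize it (standard Linux tools only):

```shell
#!/bin/sh
# Normalized load = 1-minute load average divided by CPU count;
# values near or above 1.0 mean the node is saturated
cpus=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
awk -v l="$load1" -v c="$cpus" 'BEGIN { printf "load per CPU = %.2f\n", l / c }'
```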

Can anyone help me?

Thanks in advance.