Kubernetes: high memory usage by DaemonSet pod when using hostPath volume

Cluster information:

Kubernetes cluster on bare metal with 4 worker nodes and 1 master
Kubernetes version:
Cloud being used: bare-metal
Installation method:

I have consumer applications that read (no writes) a database of size ~4 GiB and perform some tasks. To make sure the same database is not duplicated across applications, I’ve stored it on every node machine of the k8s cluster.


I’ve used a DaemonSet that uses a “hostPath” volume. The DaemonSet pod extracts the database onto each node machine (/var/lib/DATABASE). For the health check of the DaemonSet pod, I’ve written a shell script that checks the modification time of the database file (using the date command).

For the database extraction, approximately 300 MiB of memory is required, and for the health check 50 MiB is more than sufficient. Hence I’ve set the memory request to 100 MiB and the memory limit to 1.5 GiB. When I run the DaemonSet, I observe that memory usage is high (~300 MiB) for the first 10 seconds (while the database is extracted) and after that it drops to ~30 MiB. The DaemonSet works fine, as per my expectation.
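The setup described above would correspond to a container spec fragment roughly like this (a sketch; the container name and image are placeholders, not from my actual manifest):

```yaml
# Fragment of the DaemonSet pod spec (names/images are placeholders).
containers:
  - name: database-extractor            # hypothetical name
    image: example/db-extractor:latest  # hypothetical image
    resources:
      requests:
        memory: "100Mi"    # enough for the steady-state health check
      limits:
        memory: "1536Mi"   # headroom for the ~300 MiB extraction phase
    volumeMounts:
      - name: database
        mountPath: /var/lib/DATABASE
volumes:
  - name: database
    hostPath:
      path: /var/lib/DATABASE
      type: DirectoryOrCreate
```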

Consumer Application

Now, the consumer application pods (written in Go) use the same “hostPath” volume (/var/lib/DATABASE) and read the database from that location. These consumer applications do not perform any write operations on the /var/lib/DATABASE directory.
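Since the consumers only ever read, their pods mount the volume read-only; a sketch of the relevant fragment of the consumer pod spec (names and images are placeholders):

```yaml
# Consumer pod fragment: same hostPath, mounted read-only.
containers:
  - name: consumer                  # hypothetical name
    image: example/consumer:latest  # hypothetical image
    volumeMounts:
      - name: database
        mountPath: /var/lib/DATABASE
        readOnly: true              # consumers only read the database
volumes:
  - name: database
    hostPath:
      path: /var/lib/DATABASE
      type: Directory
```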

However, when I deploy this consumer application on k8s, I see a huge increase in the memory usage of the DaemonSet pod, from 30 MiB to 1.5 GiB. The memory usage of the DaemonSet pods is almost the same as the memory limit. I am not able to understand this behaviour: why does the consumer application cause memory usage in the DaemonSet pod?

Any help/suggestions/troubleshooting steps would be of great help!!

Note: I’m using the `kubectl top` command to measure the memory (working-set-bytes).

I’ve found this link (Kubernetes: in-memory shared cache between pods), which says:

hostPath by itself poses a security risk, and when used, should be scoped to only the required file or directory, and mounted as ReadOnly. It also comes with the caveat of not knowing who will get “charged” for the memory, so every pod has to be provisioned to be able to absorb it, depending how it is written. It also might “leak” up to the root namespace and be charged to nobody but appear as “overhead”

However, I did not find any reference to this in the official k8s documentation. It would be helpful if someone could elaborate on it.

Following are the contents of the memory.stat file from the DaemonSet pod.

cat /sys/fs/cgroup/memory/memory.stat

cache 1562779648
rss 1916928
rss_huge 0
shmem 0
mapped_file 0
dirty 0
writeback 0
swap 0
pgpgin 96346371
pgpgout 95965640
pgfault 224070825
pgmajfault 0
inactive_anon 0
active_anon 581632
inactive_file 37675008
active_file 1522688000
unevictable 0
hierarchical_memory_limit 1610612736
hierarchical_memsw_limit 1610612736
total_cache 1562779648
total_rss 1916928
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 96346371
total_pgpgout 95965640
total_pgfault 224070825
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 581632
total_inactive_file 37675008
total_active_file 1522688000
total_unevictable 0
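One plausible reading of those numbers, consistent with the caveat quoted above: almost all of the charged memory is page cache (cache/active_file), not process memory (rss is only ~1.8 MiB). `kubectl top` reports the working set, which is roughly usage minus inactive_file, so active page cache produced by reading the shared file is still counted against whichever cgroup the kernel charged it to. A quick sanity check with the values from my memory.stat:

```shell
# Values copied from the memory.stat dump above.
cache=1562779648          # page cache: ~1.46 GiB
rss=1916928               # anonymous memory: ~1.8 MiB
inactive_file=37675008    # page cache the working set excludes

# usage is roughly cache + rss; the working set subtracts inactive_file
# but still counts active page cache.
usage=$((cache + rss))
working_set=$((usage - inactive_file))
echo "$working_set"       # 1527021568 bytes, ~1.42 GiB: almost all page cache
```

That matches the ~1.5 GiB that `kubectl top` shows for the DaemonSet pod, even though its own processes use only ~30 MiB.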