I have a question about whether this is “normal” for day-to-day usage.
I have 5 on-prem VM nodes (8 cores, 16 GB each) running Debian 12 and MicroK8s 1.28, and I am wondering why my memory always seems to be full. What I found is that, on average, every node (granted, I am using all 5 nodes as master nodes) shows this memory usage:
iqbal@k8s-node01:/var/snap/microk8s/current/args$ sudo systemd-cgtop | grep microk8s
system.slice/snap-microk8s-6683.mount - - 20.0K - -
system.slice/snap-microk8s-6750.mount - - 72.0K - -
system.slice/snap.microk8s.daemon-apiserver-kicker.service 2 - 20.7M - -
system.slice/snap.microk8s.daemon-cluster-agent.service 9 - 19.3M - -
system.slice/snap.microk8s.daemon-containerd.service 1102 - 1.0G - -
system.slice/snap.microk8s.daemon-k8s-dqlite.service 20 - 2.5G - -
system.slice/snap.microk8s.daemon-kubelite.service 27 - 1.8G - -
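For reference, this is roughly how I add up the ~5 GB per node figure; a minimal sketch assuming cgroup v2 (the Debian 12 default), that the snap units sit under system.slice, and run as root like the cgtop call above:

# Sum memory.current across all MicroK8s systemd units (cgroup v2 paths).
total=0
for f in /sys/fs/cgroup/system.slice/snap.microk8s.*/memory.current; do
  bytes=$(cat "$f")
  printf '%-55s %6d MiB\n' "$(basename "$(dirname "$f")")" $((bytes / 1024 / 1024))
  total=$((total + bytes))
done
printf '%-55s %6d MiB\n' TOTAL $((total / 1024 / 1024))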
As for what the cluster is managing: I am running a whole bunch of workloads, with 98 namespaces (we are implementing one service per namespace) and 112 CRDs.
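(For what it's worth, this is roughly how I counted them:)

microk8s kubectl get namespaces --no-headers | wc -l
microk8s kubectl get crds --no-headers | wc -l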
I am thinking: if every master node uses about 5 GB of memory on a 16 GB VM, isn't that a bit too much? Or is this expected? Reading more on GKE's sizing, it is more like 2.6 GB reserved on their 16 GB VMs (Plan GKE Standard node sizes | Google Kubernetes Engine (GKE) | Google Cloud).
It's like 1/3 of my memory is “just” for running Kubernetes?
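For reference, the 2.6 GB GKE figure seems to come from their kube-reserved memory tiers (if I read that sizing doc correctly): 25% of the first 4 GiB + 20% of the next 4 GiB + 10% of the next 8 GiB, so for a 16 GiB node that is 1.0 + 0.8 + 0.8 = 2.6 GiB reserved, versus the ~5 GB the MicroK8s daemons alone are using here.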