Hello there, fellow Kubernetes administrator here. MicroK8s caught my interest and I'm currently experimenting with it. During my experiments I noticed some strange behavior: as soon as the cluster is created, about 4 GiB of RAM is already in use, and as time goes on the cluster's memory usage keeps growing by roughly 0.25 GiB per hour, sometimes more. I'd like to isolate the problem but have been unable to do so.
Here’s some information about the setup used:
- 3× Raspberry Pi 4 (4 GB)
- Ubuntu 20.04.1, fresh install
- MicroK8s 1.19/stable
- HA enabled
I first noticed this behavior through Lens during my tests, but the strange increments in memory utilization are present even in a freshly installed cluster; I verified this with both Lens (Prometheus) and kubectl top (metrics-server). When I installed Rook/Ceph on the cluster, the problem became more apparent: the Ceph monitors slowly consumed more and more memory even though the OSDs are stopped and no pod is consuming any resources offered by Ceph.
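For reference, the per-node numbers can also be sampled straight from metrics-server instead of through Lens. Below is a minimal sketch of such a polling loop, assuming the kubernetes Python client is installed and a kubeconfig points at the cluster (illustrative only, not the exact tooling I used):

```python
# Minimal sketch: poll metrics-server for per-node memory usage over time.
# Assumes the `kubernetes` Python client and a kubeconfig pointing at the
# cluster; Lens and `kubectl top node` read the same metrics.k8s.io API.
import time
from kubernetes import client, config

config.load_kube_config()  # e.g. the kubeconfig exported by `microk8s config`
api = client.CustomObjectsApi()

while True:
    nodes = api.list_cluster_custom_object(
        group="metrics.k8s.io", version="v1beta1", plural="nodes"
    )
    for item in nodes["items"]:
        name = item["metadata"]["name"]
        mem = item["usage"]["memory"]  # as reported by metrics-server, e.g. "3812000Ki"
        print(f"{time.strftime('%H:%M:%S')} {name} {mem}")
    time.sleep(300)  # sample every 5 minutes
```

Left running for a few hours, this gives a plain-text record of per-node usage that is independent of Lens.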
At some point the cluster stops responding (no connection to the apiserver). The systemd services are still running on each node and the resources remain committed, but no Kubernetes service or internal component responds.
Logs are then flooded with:
apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}
E1125 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get "https://127.0.0.1:16443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded
and similar messages.
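For what it's worth, a small watchdog run from outside the cluster should make it possible to correlate the moment the apiserver stops answering with the memory curve. This is only a sketch, assuming the default MicroK8s apiserver port 16443 seen in the log above; NODE_IP is a placeholder and TLS verification is skipped just to keep it short:

```python
# Rough watchdog sketch: log when the apiserver stops responding.
# Assumes the MicroK8s apiserver listens on port 16443 (as in the log
# above). NODE_IP is a placeholder; TLS verification is disabled only
# to keep the example short. Any HTTP status (even 401) means the
# apiserver is still answering -- what matters is the timeout case.
import time
import requests
import urllib3

urllib3.disable_warnings()
NODE_IP = "10.0.0.10"  # placeholder: address of any cluster node

while True:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    try:
        r = requests.get(f"https://{NODE_IP}:16443/livez",
                         verify=False, timeout=10)
        print(f"{stamp} livez -> HTTP {r.status_code}")
    except requests.RequestException as exc:
        print(f"{stamp} apiserver unreachable: {exc}")
    time.sleep(60)
```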
Rebooting all the hosts restores the cluster to its previous state, but the problem persists. Has anybody else encountered this issue? Is it limited to RPi 4 + MicroK8s? Thank you.