Microk8s Alloy log spamming requires changes to fs.inotify.max_user_watches

Howdy all. I’m running Microk8s and trying to keep local development aligned with what will eventually be deployed. I’m using Alloy to scrape various metrics and whatnot, and it is dumping endless errors about too many files being watched. I have determined that I need to raise fs.inotify.max_user_watches to prevent this. I am using a Helm chart to set this sysctl via the pod security context, and it is of course blocked. So I followed the Microk8s instructions here.
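For context, those instructions essentially amount to allow-listing the sysctl on the kubelet and restarting. A rough sketch of the idea (the args path is the usual Microk8s snap location; adjust the sysctl name to whatever your chart requests):

# Allow the sysctl at the kubelet level, then restart Microk8s.
echo '--allowed-unsafe-sysctls=fs.inotify.max_user_watches' | sudo tee -a /var/snap/microk8s/current/args/kubelet
microk8s stop
microk8s start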

This now causes Microk8s to fail to start entirely, with the Linux logs showing: Failed to run kubelet. Failed to create kubelet. The sysctl "fs.inotify.max_user_watches" are not known to be namespaced. This is a hard failure.

Sooooo yeah. WTH am I doing wrong?

So my situation gets worse.

I scrolled down to the kubelet section and clicked on the link to the docs for kubelet, and there is my sysctls parameter right near the top… and it’s deprecated. So now, the solution that I have is a bad solution even if I could get it to work.

UPDATE: Ahh. I misread. Not deprecated. Manually editing the config file is the correct approach. I’m still stuck on the failed kubelet creation, though.
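For reference, the non-deprecated route in the kubelet docs is the KubeletConfiguration file rather than the command-line flag. Something like the snippet below, with the caveat that I have not confirmed how (or whether) Microk8s points its kubelet at a --config file, and the path here is purely hypothetical:

cat <<'EOF' | sudo tee /path/to/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
  - "fs.inotify.max_user_watches"
EOF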

So I have solved my problem. The Failed to create kubelet error was from Microk8s. For some reason, Microk8s was filtering configuration settings. I assume there is some security reason behind this behavior, but it means that Microk8s and Linux were out of alignment in their opinions. In this situation, fs.inotify.max_user_watches is considered unsafe or “namespaced” by Linux, but Microk8s does not recognize it as such, so if you try to change it through Microk8s’ configuration system, it blocks you. Very weird. Very annoying.

This means that I needed to edit the Linux config directly. If you are running a cluster, this is fine, since you are likely installing Microk8s on each node directly and can provision each node however you see fit. But when using Microk8s locally, you are usually using the microk8s install command, which creates and provisions its own node. It does not offer any way to change that node on creation; you can only make post-creation config changes as seen in the link I provided, and Microk8s blocks those.
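For the record, “editing the Linux config directly” just means setting the sysctl on the node itself, along these lines (the value is arbitrary; pick whatever makes Alloy stop complaining):

# Persist the new limit on the node and reload sysctl settings.
echo 'fs.inotify.max_user_watches=1048576' | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system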

So I solved it by sidestepping Microk8s entirely. I am using Multipass to create a node named microk8s-vm. Unfortunately, running microk8s install after that won’t install and provision Microk8s on this node, since Microk8s’ “am I installed?” check simply looks for a node named microk8s-vm; it doesn’t check whether anything is actually installed on it. Thus I need to install Microk8s at the Linux level.
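Creating that node looks roughly like this. The resource sizes are just my guess at something reasonable for local development, and note that older Multipass releases spell --memory as --mem:

node_name=microk8s-vm
multipass launch --name $node_name --cpus 2 --memory 4G --disk 40G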

multipass exec $node_name -- sudo snap install microk8s --classic --channel=1.27

Ignore the channel. Version 1.28+ uses a version of CoreDNS that has a bug involving UDP header size, which blocks external Internet access.
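If you want to see which channels are available before pinning one, snap will list them from inside the VM:

multipass exec $node_name -- snap info microk8s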

After this, running microk8s install makes the local Microk8s point itself at the Microk8s VM and send its commands there. It is not correctly configured, though, so it will require three commands to fix.
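You can see the misconfiguration for yourself at this point; until the config is fixed, checks like these will most likely fail with certificate or connection errors:

microk8s status
microk8s kubectl get nodes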

microk8s refresh-certs -e ca.crt

This generates new TLS certs for cluster access. It updates the Microk8s config, but it does not correctly update the local kubectl config, nor does it correctly update Microk8s’s own kubectl config. The paths below are the locations on Windows.

microk8s config > ~/.kube/config

This takes the valid microk8s config, which is a kubectl config, and simply writes it into your local kubectl config. If you are using the microk8s-aliased kubectl like me, you need to do the same for its config.

microk8s config > ~/AppData/Local/MicroK8s/config

I had originally tried to use kubectl config set to elegantly set only the values I wanted, but after Microk8s changed the config structure at some point in the last year, I gave up and just dumped the entire microk8s config into both files.
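For what it’s worth, the surgical approach looked roughly like this. The cluster name, port, and fields are my best guess at what microk8s config emits these days, so treat the property paths and placeholders as hypothetical:

kubectl config set clusters.microk8s-cluster.server https://<vm-ip>:16443
kubectl config set clusters.microk8s-cluster.certificate-authority-data <base64-encoded-ca-cert>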

So yeah. This problem is solved. Sadly, this did not solve the problem for which I was trying to set the watchers limit, which also seems to be a problem with Microk8s, but that’s another story.