Are Prometheus/AlertManager alerts firing on kube-system services normal with microk8s?

With the default configuration I got several alerts firing on kube-system components (KubeControllerManager, KubeProxy, KubeScheduler, and also KubeApi and Kubelet).

My cluster does seem to be healthy; everything works fine.

microk8s inspect gives good results:

Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-kubelite is running
Service snap.microk8s.daemon-k8s-dqlite is running
Service snap.microk8s.daemon-apiserver-kicker is running
Copy service arguments to the final report tarball

But it is true that if I check my Kubernetes resources under the kube-system namespace, I do not see any job called kube-scheduler, apiserver, kube-controller-manager, or anything else from that list.

Is it safe to just ignore and silence these alerts?

Update: my Prometheus PVC was full. I added some space and the two alerts that had a different start date (KubeApi and Kubelet) stopped. I still have these three remaining: KubeControllerManager, KubeProxy and KubeScheduler, and I still don’t see any jobs with those names in the cluster resources.

How was prometheus installed?

Did you use the prometheus-community charts or was it all configured via Thanos?

Did you do anything to enable discovery or was this all just defaults?

Prometheus was installed via the community chart, specifically kube-prometheus-stack. It comes with a bunch of discovery and alerting rules pre-configured.

I mostly use the defaults; the only thing I did was add PodMonitor and ServiceMonitor resources in my cluster to let it monitor my own services with a custom label.
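
Roughly like this, for the ServiceMonitor side (a sketch of what I applied; the names, the custom label and the release value are placeholders for my actual setup):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                      # placeholder name
  labels:
    release: kube-prometheus-stack  # must match the label the chart's Prometheus selects ServiceMonitors by (usually the Helm release name)
spec:
  selector:
    matchLabels:
      app: my-app                   # placeholder: the custom label on my own Service
  endpoints:
    - port: metrics                 # named port on the Service
      interval: 30s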

Installing kube-prometheus-stack in a fresh kind cluster locally, there are a few things it doesn’t seem to know how to talk to. Most problems I run into with Prometheus in k8s tend to involve some labeling tricks, but for the cluster components it might just be a matter of tinkering with your values:
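
The relevant parts are the per-component sections of the chart’s values.yaml, roughly this shape (field names and defaults differ between chart versions, so check yours):

kubeControllerManager:
  enabled: true
  endpoints: []     # IPs to scrape if the component isn't discoverable as a labeled pod
  service:
    enabled: true
    port: 10257
    targetPort: 10257
  serviceMonitor:
    enabled: true
    https: true
    insecureSkipVerify: true
# kubeScheduler (10259) and kubeProxy (10249) have the same shape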

Hmm, thanks, but I don’t see any values worth changing in these lists. What would you recommend tinkering with? They seem like pretty standard stuff.

The alerting rules are kinda simple too: they just check for job="kube-controller-manager", job="kube-proxy" and job="kube-scheduler".
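
For example, the controller-manager one boils down to something like this (going from the kubernetes-mixin rules bundled with the chart, so treat the details as approximate):

- alert: KubeControllerManagerDown
  expr: absent(up{job="kube-controller-manager"} == 1)
  for: 15m
  labels:
    severity: critical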

But no, microk8s doesn’t seem to create jobs with these names:

kubectl -n kube-system get jobs
No resources found in kube-system namespace.

That’s my question: is this normal? Should microk8s deploy these jobs, or should I disable the alerts with enabled: false in the values files you listed?
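
I.e. something like this in my values file (assuming the per-component enabled flags also gate the matching Down alerts, which I’d want to confirm for my chart version):

kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
kubeProxy:
  enabled: false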

As for pods in the kube-system namespace, all I see is calico and coredns. Again, is this normal?

I’m not sure that job="thing" translates to Job resources in the cluster; that job label is just the Prometheus scrape job name, and these control-plane components typically run as static pods.

I was thinking you would just set the endpoint IP to whatever the pod’s IP is, but I’m not so sure about that.
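
If that did work, the chart has an endpoints list per component for exactly this case (a sketch; the IP is made up, and I don’t know whether microk8s serves the metrics port there at all):

kubeControllerManager:
  endpoints:
    - 192.168.1.10   # made-up host IP where the component's metrics port would have to be reachable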

The prom stack deployment seems to create services in kube-system that select the components by label (I’ve sketched one of them below the listing):

% kubectl -n kube-system get svc
NAME                                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                        AGE
kube-dns                                        ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP         4d20h
kube-prometheus-stack-coredns                   ClusterIP   None         <none>        9153/TCP                       75m
kube-prometheus-stack-kube-controller-manager   ClusterIP   None         <none>        10257/TCP                      75m
kube-prometheus-stack-kube-etcd                 ClusterIP   None         <none>        2381/TCP                       75m
kube-prometheus-stack-kube-proxy                ClusterIP   None         <none>        10249/TCP                      75m
kube-prometheus-stack-kube-scheduler            ClusterIP   None         <none>        10259/TCP                      75m
kube-prometheus-stack-kubelet                   ClusterIP   None         <none>        10250/TCP,10255/TCP,4194/TCP   75m
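
Those are headless services that select the control-plane pods by label, roughly like this (selector and port name taken from what the chart generated for me, so they may differ per version):

apiVersion: v1
kind: Service
metadata:
  name: kube-prometheus-stack-kube-controller-manager
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 10257
      targetPort: 10257
  selector:
    component: kube-controller-manager   # no pod in microk8s' kube-system carries this label

Since the only kube-system pods on your cluster are calico and coredns, those selectors match nothing, the services get no endpoints, and the jobs show up as absent, which would explain the three remaining alerts.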

In the kind cluster I’m looking at, it seems to bind kube-controller-manager to localhost, and I don’t think that can be scraped from outside the node.

 % kubectl -n kube-system get pods -l component=kube-controller-manager -oyaml | yq '.items[]|.spec.containers[]|.command'
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kind
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --enable-hostpath-provisioner=true
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/16
- --use-service-account-credentials=true

This could be a matter of getting microk8s to expose the components. The documentation on this page has some minikube flags for the equivalent problem, though I wouldn’t apply that kind of change without understanding what it does:
