It seems a pod in my cluster is using the system:anonymous user instead of its service account to make calls to the Kubernetes API:
$ kubectl exec -it promtail-zs6r2 -c promtail -- /dev/curl-amd64 -ks "https://kubernetes:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0"
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
The service account is clearly linked to the pod:
$ kubectl get pod promtail-zs6r2 -o jsonpath='{.spec.serviceAccountName}'
promtail
$ kubectl auth can-i --as=system:serviceaccount:default:promtail list pod
yes
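(For reference: a bare curl sends no credentials at all, which is why the API server attributes the request to system:anonymous rather than the pod's service account. A minimal sketch of the same request made with the pod's mounted token, assuming the default service-account mount path:)

```shell
# Inside the container: the kubelet mounts the service-account credentials here by default.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")

# Pass the token as a Bearer header, and verify the API server's certificate with the
# mounted CA bundle instead of using -k. The request is then attributed to
# system:serviceaccount:default:promtail, which the RBAC check above already allows.
curl -s --cacert "$SA_DIR/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0"
```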
I am using minikube with Calico for the CNI.
Thanks for any help!
Cluster information:
Kubernetes version: v1.20.2
Cloud being used: bare-metal (minikube v1.18.1)
Installation method: minikube start --network-plugin cni
Host OS: VirtualBox
CNI and version: Calico (very recent version, not sure exactly which one)
CRI and version: Docker
Hi @rael. Great, thanks for your help. So I did what you suggested and it works fine (i.e. the curl succeeds). What puzzles me now is that Promtail still prints the following error messages when it starts:
E0401 17:00:54.581785 1 reflector.go:127] github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:451: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
E0401 17:00:55.787988 1 reflector.go:127] github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:451: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
I really don’t understand why, because clearly (1) the pod can access the Kubernetes API and (2) the pod has the necessary RBAC in place… Any idea?
I really don’t know what may be happening. If the same curl to https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0 with the service account token works from that container, the connection refused error doesn’t make much sense.
Can you increase the log level to see if something else is happening there?
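(In case it helps: Promtail’s verbosity can usually be raised with a `-log.level` flag, as with other Loki components; treat the exact flag name and DaemonSet name below as assumptions to adjust for your deployment.)

```shell
# Hypothetical: add "-log.level=debug" to the promtail container's args in the DaemonSet,
# then follow the logs of the affected pod to look for more detail around the failure.
kubectl edit daemonset promtail          # append "-log.level=debug" under the container args
kubectl logs -f promtail-zs6r2 -c promtail
```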
OK, I found out that if I disable Istio, Promtail doesn’t show any error messages. So I guess the problem lies somewhere at the level of the sidecar proxy. I will check its logs. Apologies, I didn’t realize Istio could be the source of the problem…
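To follow up on the sidecar theory, the proxy’s side of things can be inspected directly (assuming Istio’s standard `istio-proxy` sidecar container name):

```shell
# Logs of the Envoy sidecar that Istio injects into the promtail pod.
kubectl logs promtail-zs6r2 -c istio-proxy

# List the pod's containers to confirm the sidecar is actually present.
kubectl get pod promtail-zs6r2 -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
```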