Why does my pod query the Kubernetes API as system:anonymous instead of its service account?

Hi everyone,

It seems a pod in my cluster is using the system:anonymous user instead of its service account to make calls to the Kubernetes API:

$ kubectl exec -it promtail-zs6r2 -c promtail -- /dev/curl-amd64 -ks "https://kubernetes:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0"
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

The service account is clearly linked to the pod:

$ kubectl get pod promtail-zs6r2 -o jsonpath='{.spec.serviceAccountName}'
promtail
$ kubectl auth can-i --as=system:serviceaccount:default:promtail list pod
yes
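For completeness, the RBAC behind that `can-i` is roughly a ClusterRole bound to the service account. This is a sketch of what the Promtail manifests typically create, assuming the `promtail` service account lives in the `default` namespace as above; your actual manifests may differ:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail
rules:
  - apiGroups: [""]          # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: promtail
subjects:
  - kind: ServiceAccount
    name: promtail
    namespace: default
```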

I am using minikube with Calico for the CNI.

Thanks for any help!

Cluster information:

Kubernetes version: v1.20.2
Cloud being used: minikube v1.18.1
Installation method: minikube start --network-plugin cni
Host OS: VirtualBox
CNI and version: Calico (very recent version, not sure exactly which one)
CRI and version: Docker

With that curl you are not authenticating with the API server; you can check how to retrieve the bearer token here.

In your scenario, the service account token is probably mounted as a secret at /var/run/secrets/kubernetes.io/serviceaccount (check the pod spec).

So the curl should be something like:

export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) 
curl -ks  --header "Authorization: Bearer $TOKEN" "https://kubernetes:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0"
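A slightly more robust variant, if you'd rather verify TLS with the mounted CA bundle instead of using `-k`. This is a sketch assuming the default mount path and the env vars Kubernetes injects into every pod; the curl is guarded so it only runs when the token file actually exists:

```shell
# In-cluster API call using the mounted service account credentials.
# Falls back to the in-cluster DNS name if the env vars are not set.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
APISERVER="https://${KUBERNETES_SERVICE_HOST:-kubernetes.default.svc}:${KUBERNETES_SERVICE_PORT_HTTPS:-443}"
if [ -f "$SA_DIR/token" ]; then
  TOKEN=$(cat "$SA_DIR/token")
  # --cacert verifies the API server certificate instead of skipping checks with -k
  curl --cacert "$SA_DIR/ca.crt" \
       --header "Authorization: Bearer $TOKEN" \
       "$APISERVER/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500"
fi
```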

And if you need to execute it from your local computer:

TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='promtail')].data.token}" | base64 --decode)
# Note: the "kubernetes" hostname only resolves in-cluster; from your machine,
# use the API server address shown by `kubectl cluster-info`
curl -ks --header "Authorization: Bearer $TOKEN" "https://kubernetes:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0"

Hi @rael. Great, thanks for your help. So I did what you suggested and it works fine (i.e. the curl succeeds). Now what is puzzling to me is that Promtail gives the following error messages when it starts:

E0401 17:00:54.581785       1 reflector.go:127] github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:451: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
E0401 17:00:55.787988       1 reflector.go:127] github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:451: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused

I really don’t understand why, because clearly (1) the pod can access the Kubernetes API and (2) the pod has the necessary RBAC in place… Any idea?

Is 10.96.0.1 the Kubernetes service IP? A connection refused means nothing is listening at that IP:port, i.e. it is not where the Kubernetes API is listening.

If you are using the kubernetes Service ClusterIP, it is available in the KUBERNETES_SERVICE_HOST env var:

❯ k exec -ti <<pod>> -- env | grep KUBERNETES
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.50.6.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.50.6.1
KUBERNETES_SERVICE_HOST=10.50.6.1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.50.6.1:443

Hi @rael. Sorry, I should have mentioned: yes, 10.96.0.1 is the Kubernetes service IP:

$ k exec -it promtail-9l8sb -c promtail -- env | grep KUBERNETES
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1

I really don’t know what may be happening. If the same curl to https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0 using the service account token works from that container, the connection refused error doesn’t make much sense.

Can you increase the log level to see if something else is happening there?
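For Promtail specifically, raising the log level would typically be done via the container args in the DaemonSet. This is a sketch; `-log.level` is my assumption based on Promtail's usual flags, so check `promtail -help` for the exact name:

```yaml
# DaemonSet pod template, container section (fragment)
args:
  - -config.file=/etc/promtail/promtail.yaml
  - -log.level=debug
```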

OK, I found out that if I disable Istio, Promtail doesn’t show any error messages. So I guess the problem lies somewhere in the sidecar proxy. I will check its logs. Apologies, I didn’t realize Istio could be the source of the problem…

OK, I found a workaround, described here.

In essence, I need to add the following annotations to the pod:

  traffic.sidecar.istio.io/includeOutboundIPRanges: "*"
  traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.96.0.1/32

That tells Istio not to redirect traffic destined for 10.96.0.1 through the Envoy sidecar, so API server traffic bypasses the proxy entirely.
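For reference, on a DaemonSet like Promtail's these annotations belong on the pod template, not on the DaemonSet object itself. A sketch, using the names and the 10.96.0.1/32 range from this thread (the image and labels are placeholders, adjust to your manifests):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
spec:
  selector:
    matchLabels:
      app: promtail
  template:
    metadata:
      labels:
        app: promtail
      annotations:
        # Redirect all outbound traffic through Envoy...
        traffic.sidecar.istio.io/includeOutboundIPRanges: "*"
        # ...except traffic to the Kubernetes API service ClusterIP
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.96.0.1/32
    spec:
      serviceAccountName: promtail
      containers:
        - name: promtail
          image: grafana/promtail
```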