Error: the API server does not have TokenRequest endpoints enabled

Cluster information:

Kubernetes version: v1.18.14
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 20.04.1 LTS
CNI and version: Calico (calico-node pods appear in the logs below; version not recorded)
CRI and version: Docker version 19.03.13, build 4484c46d9d

What I want to set up

Basically, I want to enable Service Account Token Volume Projection. I need this because I want to set up an Istio configuration for trustworthy JWTs, following Istio’s blog.

What I did

I followed the official doc for Service Account Token Volume Projection.

I reconfigured my existing cluster (which was set up with kubeadm) by creating a YAML file, apiserver_config.yaml:

apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
    service-account-issuer: kubernetes.default.svc
    service-account-signing-key-file: /etc/kubernetes/pki/sa.key
    api-audiences: api,vault,factors
    feature-gates: TokenRequest=true,TokenRequestProjection=false
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.14
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Note that I added the line feature-gates: TokenRequest=true,TokenRequestProjection=false; removing it doesn’t change the result. (Per the feature-gates reference in the official docs, both gates default to true in v1.18, so TokenRequest=true is redundant and TokenRequestProjection=false would actually disable projection; since the behavior is identical with the line removed, the flag itself doesn’t seem to be the culprit.)
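
To see which flags the running API server actually picked up, a check along these lines should work (the pod name kube-apiserver-k8s-master is an assumption matching my control-plane node name):

kubectl -n kube-system get pod kube-apiserver-k8s-master -o jsonpath='{.spec.containers[0].command[*]}' | tr ' ' '\n' | grep -E 'feature-gates|service-account|api-audiences'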

Then I ran the following command to update the cluster:

kubeadm init --config apiserver_config.yaml --ignore-preflight-errors=all
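
As an aside, re-running a full kubeadm init against a live cluster is a blunt instrument; if I read the kubeadm phase docs correctly, regenerating just the API server manifest should also be possible with something like:

kubeadm init phase control-plane apiserver --config apiserver_config.yaml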

I can confirm that the generated /etc/kubernetes/manifests/kube-apiserver.yaml contains the arguments I set:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.59:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.59
    - --allow-privileged=true
    - --api-audiences=api,vault,factors
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --feature-gates=TokenRequest=true,TokenRequestProjection=false
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=kubernetes.default.svc
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.18.14
    imagePullPolicy: IfNotPresent
.....
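
A quick grep double-checks this on disk:

sudo grep -E 'service-account|api-audiences|feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml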

Then I ran a test pod, nginx, as mentioned in the doc:

kubectl create -f pod-projected-svc-token.yaml

where pod-projected-svc-token.yaml is

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
  serviceAccountName: default
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
          audience: vault
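
For reference, if the mount worked, the projected token could be inspected from inside the container. The cut/base64 pipeline below is a generic JWT-payload decode (it may warn about missing padding) and should show the vault audience and the issuer:

kubectl exec nginx -- cat /var/run/secrets/tokens/vault-token | cut -d. -f2 | base64 --decode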

Then nginx stays stuck in the ContainerCreating state, and I get this error in its Events (I looked it up in Lens):
MountVolume.SetUp failed for volume “vault-token” : failed to fetch token: the API server does not have TokenRequest endpoints enabled
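
As I understand it, this error comes from the kubelet calling the TokenRequest subresource (serviceaccounts/<name>/token) on the API server. The same call can be made by hand to take the kubelet out of the picture; something along these lines should return a signed token when the endpoint is enabled (the payload is a minimal TokenRequest, with the audience reused from the pod spec above):

cat <<'EOF' > tokenrequest.json
{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest", "spec": {"audiences": ["vault"]}}
EOF
kubectl create --raw /api/v1/namespaces/default/serviceaccounts/default/token -f tokenrequest.json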

What I have debugged

  • In Lens, I can see the pod kube-apiserver-k8s-master’s Command:
kube-apiserver --advertise-address=192.168.1.59 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

I’m not sure why it doesn’t contain the args I provided in apiserver_config.yaml, even though those args do appear in /etc/kubernetes/manifests/kube-apiserver.yaml. I also tried editing /etc/kubernetes/manifests/kube-apiserver.yaml directly, and I still don’t see the args in the Command.
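
One thing worth noting: Lens and Docker report the command the container was started with, so if the kubelet never actually recreated the container after the manifest changed, stale args would look exactly like this. The container’s start time can be compared against the manifest’s modification time; the name filter below is guessed from the usual k8s_ container naming, which matches the docker ps output further down:

docker inspect --format '{{.State.StartedAt}}' $(docker ps -q --filter name=k8s_kube-apiserver)
stat -c %y /etc/kubernetes/manifests/kube-apiserver.yaml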

I ran

docker ps --no-trunc | grep apiserver

and got:

ecf7ae8ba2e42a760a8700ffc0c051c7fd2fd4bf4326f96dff5951cc5b3a33d1 sha256:d4e7de4ee6a8e514e1beb10194d14e722b094108a399611bb1a1fa2478e272fb “kube-apiserver --advertise-address=192.168.1.59 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key” 2 hours ago Up 2 hours k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_319829aafda777d09c6434c43d25edd8_6
b2807e238e9020e44f11a4c3b32b2290b0b2f732912c6015fadd9ab8fb18690f k8s.gcr.io/pause:3.2 “/pause” 2 hours ago Up 2 hours k8s_POD_kube-apiserver-k8s-master_kube-system_319829aafda777d09c6434c43d25edd8_6

Same thing: it doesn’t have the args I provided.
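
The full argument list of the running container can also be dumped directly, avoiding any truncation by docker ps (container ID taken from the output above):

docker inspect --format '{{json .Args}}' ecf7ae8ba2e42a760a8700ffc0c051c7fd2fd4bf4326f96dff5951cc5b3a33d1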

  • I also confirmed that the kubeadm-config ConfigMap has the args:
apiServer:
  extraArgs:
    api-audiences: api,vault,factors
    authorization-mode: Node,RBAC
    feature-gates: TokenRequest=true,TokenRequestProjection=false
    service-account-issuer: kubernetes.default.svc
    service-account-signing-key-file: /etc/kubernetes/pki/sa.key
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.14
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
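
For anyone reproducing this, a ConfigMap like this can be dumped with:

kubectl -n kube-system get configmap kubeadm-config -o yaml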
  • I also tried deleting the pod kube-apiserver-k8s-master multiple times. It comes back automatically since it is a static pod, which is expected, but the nginx pod still shows the same error.
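
(In hindsight, I believe deleting a static pod through the API only recreates its mirror object; the kubelet does not restart the underlying container. A way to force a real restart, sketched here from the usual static-pod workflow, would be to move the manifest out of the manifests directory and back:)

sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# wait until the apiserver container disappears from `docker ps`
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/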

  • I looked into the logs of kube-apiserver-k8s-master; they contain:

Trace[1399864477]: [718.63684ms] [718.592277ms] Object deleted from database
I1226 17:23:27.736115       1 trace.go:116] Trace[106340958]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-12-26 17:23:26.811142494 +0000 UTC m=+9.355397383) (total time: 924.95379ms):
Trace[106340958]: [584.988642ms] [584.988642ms] initial value restored
Trace[106340958]: [714.736097ms] [129.747455ms] Transaction prepared
Trace[106340958]: [924.934133ms] [210.198036ms] Transaction committed
I1226 17:23:27.736308       1 trace.go:116] Trace[636115254]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-controller-manager-k8s-master.165452fd3968eb04,user-agent:kubelet/v1.18.14 (linux/amd64) kubernetes/89182bd,client:192.168.1.59 (started: 2020-12-26 17:23:26.796879389 +0000 UTC m=+9.341134276) (total time: 939.404906ms):
Trace[636115254]: [599.253392ms] [585.064529ms] About to apply patch
Trace[636115254]: [939.263245ms] [336.680597ms] Object stored in database
E1226 17:23:27.927825       1 customresource_handler.go:652] error building openapi models for tenants.minio.min.io: ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.console.properties.env.items.<array>.properties.valueFrom.properties.resourceFieldRef.properties.divisor has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.console.properties.resources.properties.limits.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.console.properties.resources.properties.requests.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.env.items.<array>.properties.valueFrom.properties.resourceFieldRef.properties.divisor has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.zones.items.<array>.properties.resources.properties.limits.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.zones.items.<array>.properties.resources.properties.requests.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.zones.items.<array>.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.limits.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.zones.items.<array>.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.requests.additionalProperties.schema has invalid property: anyOf
ERROR $root.definitions.io.min.minio.v1.Tenant.properties.spec.properties.zones.items.<array>.properties.volumeClaimTemplate.properties.status.properties.capacity.additionalProperties.schema has invalid property: anyOf
I1226 17:23:27.929512       1 client.go:361] parsed scheme: "endpoint"
I1226 17:23:27.929559       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1226 17:23:27.992260       1 client.go:361] parsed scheme: "endpoint"
I1226 17:23:27.992676       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
[... the client.go "parsed scheme: endpoint" / endpoint.go "ccResolverWrapper" pair repeats nine more times with the same addresses ...]
I1226 17:23:44.204397       1 controller.go:606] quota admission added evaluator for: endpoints
I1226 17:23:59.197989       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1226 17:23:59.268063       1 client.go:361] parsed scheme: "endpoint"
I1226 17:23:59.268573       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
[... the same client.go/endpoint.go pair repeats eight more times ...]
I1226 17:26:25.215475       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1226 17:29:44.528266       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1226 17:29:44.543766       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
E1226 17:29:47.240148       1 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"https://192.168.1.59:10250/containerLogs/kube-system/kube-apiserver-k8s-master/kube-apiserver?sinceTime=2020-12-26T17%3A26%3A26Z&timestamps=true", Err:(*net.OpError)(0xc0055e6e60)}
I1226 17:29:50.010735       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1226 17:29:50.080789       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1226 17:29:50.313209       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1226 17:30:04.835270       1 trace.go:116] Trace[491737637]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-12-26 17:30:02.901915029 +0000 UTC m=+405.446170024) (total time: 1.933241297s):
Trace[491737637]: [1.932993507s] [1.918220872s] Transaction committed
I1226 17:30:04.857741       1 trace.go:116] Trace[449146163]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.14 (linux/amd64) kubernetes/89182bd/leader-election,client:192.168.1.59 (started: 2020-12-26 17:30:02.886855744 +0000 UTC m=+405.431110730) (total time: 1.970656641s):
Trace[449146163]: [1.970339985s] [1.970304288s] About to write a response
I1226 17:30:04.882046       1 trace.go:116] Trace[2046478116]: "Patch" url:/api/v1/namespaces/kube-system/pods/calico-node-495bw/status,user-agent:kubelet/v1.18.14 (linux/amd64) kubernetes/89182bd,client:192.168.1.59 (started: 2020-12-26 17:30:02.900838882 +0000 UTC m=+405.445093821) (total time: 1.981128297s):
Trace[2046478116]: [1.934538613s] [1.919527052s] Object stored in database
I1226 17:30:04.884362       1 trace.go:116] Trace[1923304681]: "Get" url:/api/v1/namespaces/local-path-storage/endpoints/rancher.io-local-path,user-agent:local-path-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.126.9 (started: 2020-12-26 17:30:03.522451582 +0000 UTC m=+406.066706520) (total time: 1.361854364s):
Trace[1923304681]: [1.361734187s] [1.361710866s] About to write a response
I1226 17:30:04.892823       1 trace.go:116] Trace[1400658057]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.14 (linux/amd64) kubernetes/89182bd/leader-election,client:192.168.1.59 (started: 2020-12-26 17:30:03.750998702 +0000 UTC m=+406.295253641) (total time: 1.141696415s):
Trace[1400658057]: [1.141417937s] [1.141345972s] About to write a response

There are some errors here, but I’m not sure whether they are related to this issue.

Help needed

How can I pass these args to the API server? Or are they already being passed in but just not shown by Docker and Lens (maybe they are applied via /etc/kubernetes/manifests/kube-apiserver.yaml and therefore wouldn’t show up in docker ps -a)? How can I solve this issue?

I’ve been debugging this issue for two days without any progress. Your help is highly appreciated. Thanks!

Update: I finally have it resolved, although I never found the root cause. My solution was to run kubeadm reset and reinstall everything, and the cluster is back to normal.
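
For completeness, the reset route went roughly like this. It is destructive (it tears down the whole control plane, wiping certificates and etcd data), so take backups first:

sudo kubeadm reset -f
sudo kubeadm init --config apiserver_config.yaml
# then reinstall the CNI (Calico here) and re-join any worker nodes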