Unable to mount kube-api-access volume and the pod deployment fails


Cluster information:

Kubernetes version: v1.22.0
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 20.04.3 LTS
CNI and version: flannel:v0.14.0
CRI and version: docker.io:20.10.7



$ kubectl get daemonset.apps/kube-flannel-ds -n kube-system -o wide
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS     IMAGES                           SELECTOR
kube-flannel-ds   43        43        43      43           43          <none>          41d   kube-flannel   quay.io/coreos/flannel:v0.14.0   app=flannel

I get the following error when trying to deploy a pod:

Warning FailedMount 34s (x3 over 36s) kubelet, dell-cn-07 MountVolume.SetUp failed for volume "kube-api-access-blrvk" : object "default"/"kube-root-ca.crt" not registered

$ kubectl get sa
NAME                              SECRETS   AGE
default                           1         41d
nfs-subdir-external-provisioner   1         28d

$ kubectl get secret
NAME                                                    TYPE                                  DATA   AGE
default-token-9d7kp                                     kubernetes.io/service-account-token   3      41d
nfs-subdir-external-provisioner-token-lhpnx             kubernetes.io/service-account-token   3      28d
sh.helm.release.v1.nfs-subdir-external-provisioner.v1   helm.sh/release.v1                    1      28d

Sep 20 12:42:57 dell-cn-07 kubelet[9388]: E0920 12:42:57.511363    9388 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m podName:8326b4c3-6944-41f5-bc56-007496c6a2de nodeName:}" failed. No retries permitted until 2021-09-20 12:42:58.5113253 +0000 UTC m=+103940.463831506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ttn8m" (UniqueName: "kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m") pod "rrd-inventory-dell-cn-07--1-mn5td" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de") : object "default"/"kube-root-ca.crt" not registered
Sep 20 12:42:57 dell-cn-07 kubelet[9388]: I0920 12:42:57.905267    9388 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="16dbaead12732316ed1b567aa75f37cfb7741698a41295dbdceae4b641bc8428"
Sep 20 12:42:58 dell-cn-07 kubelet[9388]: E0920 12:42:58.518607    9388 projected.go:293] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Sep 20 12:42:58 dell-cn-07 kubelet[9388]: E0920 12:42:58.518669    9388 projected.go:199] Error preparing data for projected volume kube-api-access-ttn8m for pod default/rrd-inventory-dell-cn-07--1-mn5td: object "default"/"kube-root-ca.crt" not registered
Sep 20 12:42:58 dell-cn-07 kubelet[9388]: E0920 12:42:58.518796    9388 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m podName:8326b4c3-6944-41f5-bc56-007496c6a2de nodeName:}" failed. No retries permitted until 2021-09-20 12:43:00.518753528 +0000 UTC m=+103942.471259738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ttn8m" (UniqueName: "kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m") pod "rrd-inventory-dell-cn-07--1-mn5td" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de") : object "default"/"kube-root-ca.crt" not registered
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022140    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"uservol0\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol0\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022257    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"uservol1\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol1\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022317    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-sys\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022319    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol0" (OuterVolumeSpecName: "uservol0") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "uservol0". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022375    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"etcred\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-etcred\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022432    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"hugepages\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-hugepages\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022432    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-sys" (OuterVolumeSpecName: "sys") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022501    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"shm\" (UniqueName: \"kubernetes.io/empty-dir/8326b4c3-6944-41f5-bc56-007496c6a2de-shm\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022524    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-hugepages" (OuterVolumeSpecName: "hugepages") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "hugepages". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022481    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-etcred" (OuterVolumeSpecName: "etcred") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "etcred". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022584    9388 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttn8m\" (UniqueName: \"kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m\") pod \"8326b4c3-6944-41f5-bc56-007496c6a2de\" (UID: \"8326b4c3-6944-41f5-bc56-007496c6a2de\") "
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022432    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol1" (OuterVolumeSpecName: "uservol1") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "uservol1". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022667    9388 reconciler.go:319] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-sys\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022705    9388 reconciler.go:319] "Volume detached for volume \"etcred\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-etcred\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022740    9388 reconciler.go:319] "Volume detached for volume \"hugepages\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-hugepages\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022772    9388 reconciler.go:319] "Volume detached for volume \"uservol0\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol0\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.022803    9388 reconciler.go:319] "Volume detached for volume \"uservol1\" (UniqueName: \"kubernetes.io/host-path/8326b4c3-6944-41f5-bc56-007496c6a2de-uservol1\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.026535    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8326b4c3-6944-41f5-bc56-007496c6a2de-shm" (OuterVolumeSpecName: "shm") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "shm". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.026690    9388 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m" (OuterVolumeSpecName: "kube-api-access-ttn8m") pod "8326b4c3-6944-41f5-bc56-007496c6a2de" (UID: "8326b4c3-6944-41f5-bc56-007496c6a2de"). InnerVolumeSpecName "kube-api-access-ttn8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.123516    9388 reconciler.go:319] "Volume detached for volume \"kube-api-access-ttn8m\" (UniqueName: \"kubernetes.io/projected/8326b4c3-6944-41f5-bc56-007496c6a2de-kube-api-access-ttn8m\") on node \"dell-cn-07\" DevicePath \"\""
Sep 20 12:42:59 dell-cn-07 kubelet[9388]: I0920 12:42:59.123585    9388 reconciler.go:319] "Volume detached for volume \"shm\" (UniqueName: \"kubernetes.io/empty-dir/8326b4c3-6944-41f5-bc56-007496c6a2de-shm\") on node \"dell-cn-07\" DevicePath \"\""
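For anyone digging through a journal like this: the offending object names can be grepped straight out of the kubelet log. A minimal sketch, run against one of the sample lines above (on a live node you would pipe journalctl -u kubelet into the same grep):

```shell
# Sample kubelet log line, copied from the journal excerpt above
line='Sep 20 12:42:58 dell-cn-07 kubelet[9388]: E0920 12:42:58.518669    9388 projected.go:199] Error preparing data for projected volume kube-api-access-ttn8m for pod default/rrd-inventory-dell-cn-07--1-mn5td: object "default"/"kube-root-ca.crt" not registered'

# Extract the namespace/object pair the kubelet says is "not registered"
echo "$line" | grep -o 'object "[^"]*"/"[^"]*" not registered'
# → object "default"/"kube-root-ca.crt" not registered
```

Each distinct pair this prints is a ConfigMap the kubelet's informer cache could not find when building the projected kube-api-access volume.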

Has anyone experienced such an issue?

Could you share the full kubectl describe and kubectl get pod PODNAMEHERE -oyaml output for one of the flannel pods you’re having issues with?

$ kubectl get pod rrd-inventory-dell-cn-06--1-kdxr7 -oyaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-09-21T05:15:52Z"
  generateName: rrd-inventory-dell-cn-06--1-
  labels:
    controller-uid: 1794692d-bef2-4a3f-b437-b13a80488876
    job-name: rrd-inventory-dell-cn-06
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:controller-uid: {}
          f:job-name: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"1794692d-bef2-4a3f-b437-b13a80488876"}: {}
      f:spec:
        f:affinity:
          .: {}
          f:nodeAffinity:
            .: {}
            f:requiredDuringSchedulingIgnoredDuringExecution: {}
        f:containers:
          k:{"name":"rrd-inventory-dell-cn-06"}:
            .: {}
            f:command: {}
            f:env:
              .: {}
              k:{"name":"RED_ETCD"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"RED_ETCD_PARTITION"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"RED_ETCD_USER"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"RRD_ETCD_PARTITION"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:hugepages-1Gi: {}
                f:memory: {}
              f:requests:
                .: {}
                f:hugepages-1Gi: {}
                f:memory: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
              f:privileged: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/dev/hugepages"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/dev/shm"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/dev/sys"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/etc/red"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/home/pasokan"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/red"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:hostIPC: {}
        f:hostNetwork: {}
        f:hostPID: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"etcred"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
          k:{"name":"hugepages"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
          k:{"name":"shm"}:
            .: {}
            f:emptyDir:
              .: {}
              f:medium: {}
            f:name: {}
          k:{"name":"sys"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
          k:{"name":"uservol0"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
          k:{"name":"uservol1"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-09-21T05:15:52Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.25.50.177"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    subresource: status
    time: "2021-09-21T05:15:54Z"
  name: rrd-inventory-dell-cn-06--1-kdxr7
  namespace: default
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: rrd-inventory-dell-cn-06
    uid: 1794692d-bef2-4a3f-b437-b13a80488876
  resourceVersion: "18051613"
  uid: 65844f35-f2ed-4e09-8658-13b658cc1e5e
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - dell-cn-06
  containers:
  - command:
    - /red/pkgbuild/red_inst/bin/red-discover
    env:
    - name: RED_ETCD
      value: 10.25.50.13:22379,10.25.50.14:32379,10.25.50.12:2379
    - name: RED_ETCD_USER
      value: etcd_admin_001
    - name: RED_ETCD_PARTITION
      value: __QArrd
    - name: RRD_ETCD_PARTITION
      value: __QArrd
    image: red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv:latest
    imagePullPolicy: Always
    name: rrd-inventory-dell-cn-06
    resources:
      limits:
        hugepages-1Gi: 4Gi
        memory: 4Gi
      requests:
        hugepages-1Gi: 4Gi
        memory: 4Gi
    securityContext:
      capabilities:
        add:
        - IPC_LOCK
        - SYS_PTRACE
        - SYS_NICE
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/pasokan
      name: uservol0
    - mountPath: /red
      name: uservol1
    - mountPath: /dev/hugepages
      name: hugepages
    - mountPath: /dev/shm
      name: shm
    - mountPath: /etc/red
      name: etcred
    - mountPath: /dev/sys
      name: sys
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-pxlvw
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostIPC: true
  hostNetwork: true
  hostPID: true
  nodeName: dell-cn-06
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /home/pasokan
      type: Directory
    name: uservol0
  - hostPath:
      path: /home/pasokan/red
      type: Directory
    name: uservol1
  - hostPath:
      path: /dev/hugepages
      type: Directory
    name: hugepages
  - emptyDir:
      medium: Memory
    name: shm
  - hostPath:
      path: /etc/red
      type: Directory
    name: etcred
  - hostPath:
      path: /sys
      type: Directory
    name: sys
  - name: kube-api-access-pxlvw
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-09-21T05:15:53Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-09-21T05:15:53Z"
    message: 'containers with unready status: [rrd-inventory-dell-cn-06]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-09-21T05:15:53Z"
    message: 'containers with unready status: [rrd-inventory-dell-cn-06]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-09-21T05:15:53Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://2089a4e22d04f41beb2e14c5ee263d3eee7b4a622e7b9b37ea40550549d1faec
    image: red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv:latest
    imageID: docker-pullable://red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv@sha256:777895ad7755465683ba2d20a076b09810053e013d9b2729b2177399686bae6c
    lastState: {}
    name: rrd-inventory-dell-cn-06
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: docker://2089a4e22d04f41beb2e14c5ee263d3eee7b4a622e7b9b37ea40550549d1faec
        exitCode: 132
        finishedAt: "2021-09-21T05:15:53Z"
        reason: Error
        startedAt: "2021-09-21T05:15:53Z"
  hostIP: 10.25.50.177
  phase: Failed
  podIP: 10.25.50.177
  podIPs:
  - ip: 10.25.50.177
  qosClass: Burstable
  startTime: "2021-09-21T05:15:53Z"
$ kdpo rrd-inventory-dell-cn-06--1-kdxr7   (kdpo is an alias for kubectl describe pod)
Name:         rrd-inventory-dell-cn-06--1-kdxr7
Namespace:    default
Priority:     0
Node:         dell-cn-06/10.25.50.177
Start Time:   Mon, 20 Sep 2021 23:15:53 -0600
Labels:       controller-uid=1794692d-bef2-4a3f-b437-b13a80488876
              job-name=rrd-inventory-dell-cn-06
Annotations:  <none>
Status:       Failed
IP:           10.25.50.177
IPs:
  IP:           10.25.50.177
Controlled By:  Job/rrd-inventory-dell-cn-06
Containers:
  rrd-inventory-dell-cn-06:
    Container ID:  docker://2089a4e22d04f41beb2e14c5ee263d3eee7b4a622e7b9b37ea40550549d1faec
    Image:         red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv:latest
    Image ID:      docker-pullable://red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv@sha256:777895ad7755465683ba2d20a076b09810053e013d9b2729b2177399686bae6c
    Port:          <none>
    Host Port:     <none>
    Command:
      /red/pkgbuild/red_inst/bin/red-discover
    State:          Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Mon, 20 Sep 2021 23:15:53 -0600
      Finished:     Mon, 20 Sep 2021 23:15:53 -0600
    Ready:          False
    Restart Count:  0
    Limits:
      hugepages-1Gi:  4Gi
      memory:         4Gi
    Requests:
      hugepages-1Gi:  4Gi
      memory:         4Gi
    Environment:
      RED_ETCD:            10.25.50.13:22379,10.25.50.14:32379,10.25.50.12:2379
      RED_ETCD_USER:       etcd_admin_001
      RED_ETCD_PARTITION:  __QArrd
      RRD_ETCD_PARTITION:  __QArrd
    Mounts:
      /dev/hugepages from hugepages (rw)
      /dev/shm from shm (rw)
      /dev/sys from sys (rw)
      /etc/red from etcred (rw)
      /home/pasokan from uservol0 (rw)
      /red from uservol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxlvw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  uservol0:
    Type:          HostPath (bare host directory volume)
    Path:          /home/pasokan
    HostPathType:  Directory
  uservol1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/pasokan/red
    HostPathType:  Directory
  hugepages:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/hugepages
    HostPathType:  Directory
  shm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  etcred:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/red
    HostPathType:  Directory
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  Directory
  kube-api-access-pxlvw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                From                 Message
  ----     ------       ----               ----                 -------
  Normal   Scheduled    <unknown>                               Successfully assigned default/rrd-inventory-dell-cn-06--1-kdxr7 to dell-cn-06
  Normal   Pulling      24m                kubelet, dell-cn-06  Pulling image "red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv:latest"
  Normal   Pulled       24m                kubelet, dell-cn-06  Successfully pulled image "red-images.red.datadirectnet.com:5000/pasokan/red/dbgenv:latest" in 35.678159ms
  Normal   Created      24m                kubelet, dell-cn-06  Created container rrd-inventory-dell-cn-06
  Normal   Started      24m                kubelet, dell-cn-06  Started container rrd-inventory-dell-cn-06
  Warning  FailedMount  24m (x3 over 24m)  kubelet, dell-cn-06  MountVolume.SetUp failed for volume "kube-api-access-pxlvw" : object "default"/"kube-root-ca.crt" not registered

@protosam Surprisingly, the pod runs on certain nodes but not on others! Any thoughts?

Not a k8s issue after all; the pods were getting terminated because of the application. Sorry for the trouble.

It’s a weird problem to have. The kubelet is failing to mount a projected volume that should just work. Seems like a bug.

If this happens again, take note of which nodes it happens on and gather kubelet logs from those nodes. Also note the Kubernetes version of the affected node as well as of the control-plane nodes.

Also confirm that the output of kubectl get cm kube-root-ca.crt -oyaml contains the ca.crt key.

If everything looks good, I would consider filing a bug report.
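To make the "note the versions" step concrete, here is a hedged sketch. The awk part works the same way on real kubectl get nodes output; the two sample node lines below are hypothetical stand-ins (on a live cluster you would run: kubectl get nodes | awk 'NR>1 {print $1, $5}'):

```shell
# Hypothetical sample of `kubectl get nodes` output; on a real cluster the
# VERSION column (field 5) is the kubelet version reported by each node.
nodes='dell-cn-06   Ready    <none>   41d   v1.22.0
dell-cn-07   Ready    <none>   41d   v1.22.0'

# Print one "node kubelet-version" pair per line
echo "$nodes" | awk '{print $1, $5}'
# → dell-cn-06 v1.22.0
#   dell-cn-07 v1.22.0
```

Comparing these pairs against the control-plane version (kubectl version) is usually enough to spot a version-skew suspect when filing a report.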

Hi there, I’m seeing this issue too (and it only appears on 1.22+). Kubelet logs below:

Nov 24 15:25:05 mycluster-worker-1.  hyperkube[1780]: E1124 15:25:05.290747    1780 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt podName:2c8c72a8-0570-49e2-9894-4a944c0ef304 nodeName:}" failed. No retries permitted until 2021-11-24 15:25:05.790729968 +0000 UTC m=+465324.567909553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tncnt" (UniqueName: "kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt") pod "mypod--1-c2ws5" (UID: "2c8c72a8-0570-49e2-9894-4a944c0ef304") : [object "mynamespace"/"kube-root-ca.crt" not registered, object "mynamespace"/"openshift-service-ca.crt" not registered]
Nov 24 15:25:05 mycluster-worker-1.  hyperkube[1780]: E1124 15:25:05.796505    1780 projected.go:293] Couldn't get configMap mynamespace/kube-root-ca.crt: object "mynamespace"/"kube-root-ca.crt" not registered
Nov 24 15:25:05 mycluster-worker-1.  hyperkube[1780]: E1124 15:25:05.796545    1780 projected.go:293] Couldn't get configMap mynamespace/openshift-service-ca.crt: object "mynamespace"/"openshift-service-ca.crt" not registered
Nov 24 15:25:05 mycluster-worker-1.  hyperkube[1780]: E1124 15:25:05.796556    1780 projected.go:199] Error preparing data for projected volume kube-api-access-tncnt for pod mynamespace/mypod--1-c2ws5: [object "mynamespace"/"kube-root-ca.crt" not registered, object "mynamespace"/"openshift-service-ca.crt" not registered]
Nov 24 15:25:05 mycluster-worker-1.  hyperkube[1780]: E1124 15:25:05.796625    1780 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt podName:2c8c72a8-0570-49e2-9894-4a944c0ef304 nodeName:}" failed. No retries permitted until 2021-11-24 15:25:06.796605614 +0000 UTC m=+465325.573785214 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tncnt" (UniqueName: "kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt") pod "mypod--1-c2ws5" (UID: "2c8c72a8-0570-49e2-9894-4a944c0ef304") : [object "mynamespace"/"kube-root-ca.crt" not registered, object "mynamespace"/"openshift-service-ca.crt" not registered]
Nov 24 15:25:06 mycluster-worker-1.  hyperkube[1780]: I1124 15:25:06.230294    1780 kubelet.go:2114] "SyncLoop (PLEG): event for pod" pod="mynamespace/mypod--1-c2ws5" event=&{ID:2c8c72a8-0570-49e2-9894-4a944c0ef304 Type:ContainerDied Data:7cd43129ee7bb28954d13f357d9440dc5646220fd5de7ade9eaa825b4e913fde}
Nov 24 15:25:06 mycluster-worker-1.  hyperkube[1780]: I1124 15:25:06.230335    1780 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7cd43129ee7bb28954d13f357d9440dc5646220fd5de7ade9eaa825b4e913fde"
Nov 24 15:25:06 mycluster-worker-1.  hyperkube[1780]: I1124 15:25:06.301584    1780 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tncnt\" (UniqueName: \"kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt\") pod \"2c8c72a8-0570-49e2-9894-4a944c0ef304\" (UID: \"2c8c72a8-0570-49e2-9894-4a944c0ef304\") "
Nov 24 15:25:06 mycluster-worker-1.  hyperkube[1780]: I1124 15:25:06.312344    1780 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt" (OuterVolumeSpecName: "kube-api-access-tncnt") pod "2c8c72a8-0570-49e2-9894-4a944c0ef304" (UID: "2c8c72a8-0570-49e2-9894-4a944c0ef304"). InnerVolumeSpecName "kube-api-access-tncnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 15:25:06 mycluster-worker-1.  hyperkube[1780]: I1124 15:25:06.402876    1780 reconciler.go:319] "Volume detached for volume \"kube-api-access-tncnt\" (UniqueName: \"kubernetes.io/projected/2c8c72a8-0570-49e2-9894-4a944c0ef304-kube-api-access-tncnt\") on node \"mycluster-worker-1. \" DevicePath \"\""

Can you describe in a little more detail what happened and how you fixed it?

I have the same error, and I don’t know where this volume suddenly comes from. It was not there before.


Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[config storage kube-api-access-7q5rs]: timed out waiting for the condition
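That event message packs two different lists into one line, which is easy to misread. A small sed sketch to pull them apart, using the message above verbatim:

```shell
# Event message copied from the pod events above
msg='Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[config storage kube-api-access-7q5rs]: timed out waiting for the condition'

# Volumes that were attached but never mounted into the pod
echo "$msg" | sed -n 's/.*unmounted volumes=\[\([^]]*\)\].*/unmounted: \1/p'
# Volumes that never attached at all
echo "$msg" | sed -n 's/.*unattached volumes=\[\([^]]*\)\].*/unattached: \1/p'
# → unmounted: storage
#   unattached: config storage kube-api-access-7q5rs
```

Here the interesting entry is storage (likely a PVC or CSI volume, not the kube-api-access projected volume), so this case may be a different root cause than the "not registered" errors earlier in the thread.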