While the pods are crash looping, kubectl intermittently fails with:

The connection to the server 192.168.0.127:6443 was refused - did you specify the right host or port?

Cluster information:

Kubernetes version: v1.28.2
Cloud being used: bare-metal
Installation method: yum on RHEL
Host OS: RHEL 9.2
Kernel version: 5.14.0-284.30.1.el9_2.x86_64
CNI and version: Weave 0.3.0
CRI and version: containerd 1.6.24

[kubeadmin@linux2 ~]$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.127:6443
CoreDNS is running at https://192.168.0.127:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Hello - I have a simple two-node setup on two RHEL 9.2 servers. I have installed and reinstalled using the Red Hat and Kubernetes repositories, and regardless of which CNI I install and how many times I redo the deployment, the pods enter a crash loop, starting with kube-proxy and then the scheduler, until everything fails. It is a very basic install, so it is not clear to me why this is happening. The only potential issue I found was that I cannot ping 10.96.0.1 or 10.96.0.10, so I am unsure what is going on with the API server or the setup. Any insight is appreciated.
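One thing I have since learned about that ping test: Service ClusterIPs such as 10.96.0.1 and 10.96.0.10 are virtual addresses implemented by kube-proxy (iptables/IPVS DNAT), so as far as I understand they normally do not answer ICMP even on a healthy cluster. These are the checks I can run instead and post output from (standard apiserver health endpoints and kubectl flags, nothing custom):

curl -k https://192.168.0.127:6443/livez?verbose
curl -k https://10.96.0.1:443/healthz
kubectl logs -n kube-system -l k8s-app=kube-proxy --previous
journalctl -u kubelet --since "1 hour ago" | grep -iE 'error|fail' | tail -n 50

Let me know which of these would help and I will attach the output.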

[kubeadmin@linux2 manifests]$ kubectl get pods -n kube-system

NAME                             READY   STATUS             RESTARTS          AGE
coredns-5dd5756b68-c59kf         1/1     Running            302 (6m35s ago)   28h
coredns-5dd5756b68-sgvmx         1/1     Running            299 (3m18s ago)   28h
etcd-linux2                      1/1     Running            388 (4m21s ago)   27h
kube-apiserver-linux2            1/1     Running            39 (3m59s ago)    41m
kube-controller-manager-linux2   1/1     Running            40 (4m52s ago)    40m
kube-proxy-j9j9n                 0/1     CrashLoopBackOff   286 (6m12s ago)   28h
kube-proxy-qkm7b                 1/1     Running            370 (4m54s ago)   28h
kube-scheduler-linux2            1/1     Running            38 (6m4s ago)     38m
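Since kube-proxy is the first pod to go into CrashLoopBackOff, that is presumably the place to start. Commands I can run to capture the failed container's logs (pod name taken from the listing above; it changes on each reinstall):

kubectl logs -n kube-system kube-proxy-j9j9n --previous
kubectl describe pod -n kube-system kube-proxy-j9j9n

I will attach that output as well if useful.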

[kubeadmin@linux2 ~]$ kubectl get all -n kube-system -o yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2023-09-28T13:05:29Z"
    generateName: coredns-5dd5756b68-
    labels:
      k8s-app: kube-dns
      pod-template-hash: 5dd5756b68
    name: coredns-5dd5756b68-4h58w
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: coredns-5dd5756b68
      uid: 38bd5fb5-bfdc-48b4-a06f-7bf28ae75e5f
    resourceVersion: "10314"
    uid: 13edb34e-48de-458d-afc9-16270af41ecf
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values:
                - kube-dns
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - -conf
      - /etc/coredns/Corefile
      image: registry.k8s.io/coredns/coredns:v1.10.1
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 5
        httpGet:
          path: /health
          port: 8080
          scheme: HTTP
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      name: coredns
      ports:
      - containerPort: 53
        name: dns
        protocol: UDP
      - containerPort: 53
        name: dns-tcp
        protocol: TCP
      - containerPort: 9153
        name: metrics
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /ready
          port: 8181
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        limits:
          memory: 170Mi
        requests:
          cpu: 100m
          memory: 70Mi
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          add:
          - NET_BIND_SERVICE
          drop:
          - all
        readOnlyRootFilesystem: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/coredns
        name: config-volume
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-d7qjd
        readOnly: true
    dnsPolicy: Default
    enableServiceLinks: true
    nodeName: linux2
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: coredns
    serviceAccountName: coredns
    terminationGracePeriodSeconds: 30
    tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - configMap:
        defaultMode: 420
        items:
        - key: Corefile
          path: Corefile
        name: coredns
      name: config-volume
    - name: kube-api-access-d7qjd
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:27:19Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:13:57Z"
      message: 'containers with unready status: [coredns]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:13:57Z"
      message: 'containers with unready status: [coredns]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:27:19Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://a0ae0504091d0cbb9c9f45b1f912beae1ef68d3bbc3df7e7904ed460b2a411fa
      image: registry.k8s.io/coredns/coredns:v1.10.1
      imageID: registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
      lastState:
        terminated:
          containerID: containerd://a0ae0504091d0cbb9c9f45b1f912beae1ef68d3bbc3df7e7904ed460b2a411fa
          exitCode: 0
          finishedAt: "2023-09-28T17:25:53Z"
          reason: Completed
          startedAt: "2023-09-28T17:25:43Z"
      name: coredns
      ready: false
      restartCount: 41
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=coredns pod=coredns-5dd5756b68-4h58w_kube-system(13edb34e-48de-458d-afc9-16270af41ecf)
          reason: CrashLoopBackOff
    hostIP: 192.168.0.127
    phase: Running
    podIP: 10.32.0.5
    podIPs:
    - ip: 10.32.0.5
    qosClass: Burstable
    startTime: "2023-09-28T13:27:19Z"
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2023-09-28T13:05:29Z"
    generateName: coredns-5dd5756b68-
    labels:
      k8s-app: kube-dns
      pod-template-hash: 5dd5756b68
    name: coredns-5dd5756b68-vznsz
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: coredns-5dd5756b68
      uid: 38bd5fb5-bfdc-48b4-a06f-7bf28ae75e5f
    resourceVersion: "10302"
    uid: 339c438f-5f28-401b-9614-9b1a8b09a164
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values:
                - kube-dns
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - args:
      - -conf
      - /etc/coredns/Corefile
      image: registry.k8s.io/coredns/coredns:v1.10.1
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 5
        httpGet:
          path: /health
          port: 8080
          scheme: HTTP
        initialDelaySeconds: 60
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      name: coredns
      ports:
      - containerPort: 53
        name: dns
        protocol: UDP
      - containerPort: 53
        name: dns-tcp
        protocol: TCP
      - containerPort: 9153
        name: metrics
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /ready
          port: 8181
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        limits:
          memory: 170Mi
        requests:
          cpu: 100m
          memory: 70Mi
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          add:
          - NET_BIND_SERVICE
          drop:
          - all
        readOnlyRootFilesystem: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/coredns
        name: config-volume
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-7fsht
        readOnly: true
    dnsPolicy: Default
    enableServiceLinks: true
    nodeName: linux2
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: coredns
    serviceAccountName: coredns
    terminationGracePeriodSeconds: 30
    tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - configMap:
        defaultMode: 420
        items:
        - key: Corefile
          path: Corefile
        name: coredns
      name: config-volume
    - name: kube-api-access-7fsht
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:27:19Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:31Z"
      message: 'containers with unready status: [coredns]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:31Z"
      message: 'containers with unready status: [coredns]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:27:19Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://ad2b8604b8a224bbcd7f845c26d8b08b148032727432d9c875f2a117f60945b2
      image: registry.k8s.io/coredns/coredns:v1.10.1
      imageID: registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
      lastState:
        terminated:
          containerID: containerd://ad2b8604b8a224bbcd7f845c26d8b08b148032727432d9c875f2a117f60945b2
          exitCode: 0
          finishedAt: "2023-09-28T17:26:30Z"
          reason: Completed
          startedAt: "2023-09-28T17:25:35Z"
      name: coredns
      ready: false
      restartCount: 34
      started: false
      state:
        waiting:
          message: services have not yet been read at least once, cannot construct
            envvars
          reason: CreateContainerConfigError
    hostIP: 192.168.0.127
    phase: Running
    podIP: 10.32.0.4
    podIPs:
    - ip: 10.32.0.4
    qosClass: Burstable
    startTime: "2023-09-28T13:27:19Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dnsutils","namespace":"kube-system"},"spec":{"containers":[{"command":["sleep","infinity"],"image":"registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3","imagePullPolicy":"IfNotPresent","name":"dnsutils"}],"restartPolicy":"Always"}}
    creationTimestamp: "2023-09-28T14:47:40Z"
    name: dnsutils
    namespace: kube-system
    resourceVersion: "10316"
    uid: 3be5b14e-6d16-47c4-ab72-5423514e7344
  spec:
    containers:
    - command:
      - sleep
      - infinity
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      imagePullPolicy: IfNotPresent
      name: dnsutils
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-2677n
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: kube-api-access-2677n
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:12:53Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:54Z"
      message: 'containers with unready status: [dnsutils]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:54Z"
      message: 'containers with unready status: [dnsutils]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:12:53Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://2b1c660f5bac83ef98baebc75e8244bfc473988484c5761fb1d72e5e52f28f21
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      imageID: registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:8b03e4185ecd305bc9b410faac15d486a3b1ef1946196d429245cdd3c7b152eb
      lastState:
        terminated:
          containerID: containerd://2b1c660f5bac83ef98baebc75e8244bfc473988484c5761fb1d72e5e52f28f21
          exitCode: 137
          finishedAt: "2023-09-28T17:26:54Z"
          reason: Error
          startedAt: "2023-09-28T17:25:41Z"
      name: dnsutils
      ready: false
      restartCount: 1
      started: false
      state:
        waiting:
          message: back-off 2m40s restarting failed container=dnsutils pod=dnsutils_kube-system(3be5b14e-6d16-47c4-ab72-5423514e7344)
          reason: CrashLoopBackOff
    hostIP: 192.168.0.127
    phase: Running
    podIP: 10.32.0.3
    podIPs:
    - ip: 10.32.0.3
    qosClass: BestEffort
    startTime: "2023-09-28T17:12:53Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/config.hash: 625fb7ca3d86ceac221099b5fcfe362e
      kubernetes.io/config.mirror: 625fb7ca3d86ceac221099b5fcfe362e
      kubernetes.io/config.seen: "2023-09-28T11:00:31.251471437-04:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2023-09-28T15:00:42Z"
    name: dnsutils-linux2
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: linux2
      uid: 8853bae4-e74d-4df4-a99d-4152cdca2d05
    resourceVersion: "10310"
    uid: 9ce56301-58db-4e21-a379-df632d9c5fb8
  spec:
    containers:
    - command:
      - sleep
      - infinity
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      imagePullPolicy: IfNotPresent
      name: dnsutils
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:28:53Z"
      message: 'containers with unready status: [dnsutils]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:28:53Z"
      message: 'containers with unready status: [dnsutils]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://fac99ef685ffebe8e4bf5e6199e12015151439cd763d6214b0d9e76f43c311bd
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      imageID: registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:8b03e4185ecd305bc9b410faac15d486a3b1ef1946196d429245cdd3c7b152eb
      lastState: {}
      name: dnsutils
      ready: false
      restartCount: 14
      started: false
      state:
        terminated:
          containerID: containerd://fac99ef685ffebe8e4bf5e6199e12015151439cd763d6214b0d9e76f43c311bd
          exitCode: 137
          finishedAt: "2023-09-28T17:28:52Z"
          reason: Error
          startedAt: "2023-09-28T17:27:11Z"
    hostIP: 192.168.0.127
    phase: Running
    podIP: 10.32.0.2
    podIPs:
    - ip: 10.32.0.2
    qosClass: BestEffort
    startTime: "2023-09-28T17:25:51Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.0.127:2379
      kubernetes.io/config.hash: 048d73bfa2e588fd9c90e112a955048d
      kubernetes.io/config.mirror: 048d73bfa2e588fd9c90e112a955048d
      kubernetes.io/config.seen: "2023-09-28T09:11:32.799879329-04:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2023-09-28T13:11:36Z"
    labels:
      component: etcd
      tier: control-plane
    name: etcd-linux2
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: linux2
      uid: 8853bae4-e74d-4df4-a99d-4152cdca2d05
    resourceVersion: "10036"
    uid: 9beb94f3-90b0-47a8-ab4f-a578b50fee74
  spec:
    containers:
    - command:
      - etcd
      - --advertise-client-urls=https://192.168.0.127:2379
      - --cert-file=/etc/kubernetes/pki/etcd/server.crt
      - --client-cert-auth=true
      - --data-dir=/var/lib/etcd
      - --experimental-initial-corrupt-check=true
      - --experimental-watch-progress-notify-interval=5s
      - --initial-advertise-peer-urls=https://192.168.0.127:2380
      - --initial-cluster=linux2=https://192.168.0.127:2380
      - --key-file=/etc/kubernetes/pki/etcd/server.key
      - --listen-client-urls=https://127.0.0.1:2379,https://192.168.0.127:2379
      - --listen-metrics-urls=http://127.0.0.1:2381
      - --listen-peer-urls=https://192.168.0.127:2380
      - --name=linux2
      - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      - --peer-client-cert-auth=true
      - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      - --snapshot-count=10000
      - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      image: registry.k8s.io/etcd:3.5.9-0
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 127.0.0.1
          path: /health?exclude=NOSPACE&serializable=true
          port: 2381
          scheme: HTTP
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      name: etcd
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
      startupProbe:
        failureThreshold: 24
        httpGet:
          host: 127.0.0.1
          path: /health?serializable=false
          port: 2381
          scheme: HTTP
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/lib/etcd
        name: etcd-data
      - mountPath: /etc/kubernetes/pki/etcd
        name: etcd-certs
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seccompProfile:
        type: RuntimeDefault
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
    volumes:
    - hostPath:
        path: /etc/kubernetes/pki/etcd
        type: DirectoryOrCreate
      name: etcd-certs
    - hostPath:
        path: /var/lib/etcd
        type: DirectoryOrCreate
      name: etcd-data
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:04Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:04Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://54c5e84dc8b33c38359e43c97ada9402ccb1f4bd89f33688c4f427e60e784160
      image: registry.k8s.io/etcd:3.5.9-0
      imageID: registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
      lastState:
        terminated:
          containerID: containerd://2c4f89932d0a3a717dc8bf699abace672a9b31240db5c47966a6832159d9e97b
          exitCode: 0
          finishedAt: "2023-09-28T17:25:52Z"
          reason: Completed
          startedAt: "2023-09-28T17:24:42Z"
      name: etcd
      ready: true
      restartCount: 204
      started: true
      state:
        running:
          startedAt: "2023-09-28T17:25:52Z"
    hostIP: 192.168.0.127
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: Burstable
    startTime: "2023-09-28T17:25:51Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.0.127:6443
      kubernetes.io/config.hash: a77ed96c23ec724ba767597fa483ba34
      kubernetes.io/config.mirror: a77ed96c23ec724ba767597fa483ba34
      kubernetes.io/config.seen: "2023-09-28T09:04:41.076894173-04:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2023-09-28T13:05:13Z"
    labels:
      component: kube-apiserver
      tier: control-plane
    name: kube-apiserver-linux2
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: linux2
      uid: 8853bae4-e74d-4df4-a99d-4152cdca2d05
    resourceVersion: "10321"
    uid: 78c459c6-fab3-4f39-bc79-cd9357f2ec77
  spec:
    containers:
    - command:
      - kube-apiserver
      - --advertise-address=192.168.0.127
      - --allow-privileged=true
      - --authorization-mode=Node,RBAC
      - --client-ca-file=/etc/kubernetes/pki/ca.crt
      - --enable-admission-plugins=NodeRestriction
      - --enable-bootstrap-token-auth=true
      - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      - --etcd-servers=https://127.0.0.1:2379
      - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      - --requestheader-allowed-names=front-proxy-client
      - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-username-headers=X-Remote-User
      - --secure-port=6443
      - --service-account-issuer=https://kubernetes.default.svc.cluster.local
      - --service-account-key-file=/etc/kubernetes/pki/sa.pub
      - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      - --service-cluster-ip-range=10.96.0.0/12
      - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
      image: registry.k8s.io/kube-apiserver:v1.28.2
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 192.168.0.127
          path: /livez
          port: 6443
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      name: kube-apiserver
      readinessProbe:
        failureThreshold: 3
        httpGet:
          host: 192.168.0.127
          path: /readyz
          port: 6443
          scheme: HTTPS
        periodSeconds: 1
        successThreshold: 1
        timeoutSeconds: 15
      resources:
        requests:
          cpu: 250m
      startupProbe:
        failureThreshold: 24
        httpGet:
          host: 192.168.0.127
          path: /livez
          port: 6443
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/ssl/certs
        name: ca-certs
        readOnly: true
      - mountPath: /etc/pki
        name: etc-pki
        readOnly: true
      - mountPath: /etc/kubernetes/pki
        name: k8s-certs
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seccompProfile:
        type: RuntimeDefault
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
    volumes:
    - hostPath:
        path: /etc/ssl/certs
        type: DirectoryOrCreate
      name: ca-certs
    - hostPath:
        path: /etc/pki
        type: DirectoryOrCreate
      name: etc-pki
    - hostPath:
        path: /etc/kubernetes/pki
        type: DirectoryOrCreate
      name: k8s-certs
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:28:16Z"
      message: 'containers with unready status: [kube-apiserver]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:28:16Z"
      message: 'containers with unready status: [kube-apiserver]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://52cfbac58baa53ad9f2b9adaf72286a28b69ccf9972f2faa92ce1fbaccee6e1e
      image: registry.k8s.io/kube-apiserver:v1.28.2
      imageID: registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
      lastState:
        terminated:
          containerID: containerd://3504f51dc03c8c7723567e0dfcccf27ba7bb4d3f9255503b4d524504925a9dfb
          exitCode: 137
          finishedAt: "2023-09-28T17:28:44Z"
          reason: Error
          startedAt: "2023-09-28T17:28:02Z"
      name: kube-apiserver
      ready: false
      restartCount: 316
      started: false
      state:
        running:
          startedAt: "2023-09-28T17:29:34Z"
    hostIP: 192.168.0.127
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: Burstable
    startTime: "2023-09-28T17:25:51Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/config.hash: c5fcfba55d727d51717e039befe2caf4
      kubernetes.io/config.mirror: c5fcfba55d727d51717e039befe2caf4
      kubernetes.io/config.seen: "2023-09-28T09:11:32.799887076-04:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2023-09-28T13:11:36Z"
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager-linux2
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: linux2
      uid: 8853bae4-e74d-4df4-a99d-4152cdca2d05
    resourceVersion: "10319"
    uid: 4b850c48-87c1-495c-8ca6-31c4535f2d65
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --allocate-node-cidrs=true
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=127.0.0.1
      - --client-ca-file=/etc/kubernetes/pki/ca.crt
      - --cluster-cidr=10.244.0.0/16
      - --cluster-name=kubernetes
      - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      - --controllers=*,bootstrapsigner,tokencleaner
      - --kubeconfig=/etc/kubernetes/controller-manager.conf
      - --leader-elect=true
      - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      - --root-ca-file=/etc/kubernetes/pki/ca.crt
      - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      - --service-cluster-ip-range=10.96.0.0/12
      - --use-service-account-credentials=true
      image: registry.k8s.io/kube-controller-manager:v1.28.2
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10257
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      name: kube-controller-manager
      resources:
        requests:
          cpu: 200m
      startupProbe:
        failureThreshold: 24
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10257
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/ssl/certs
        name: ca-certs
        readOnly: true
      - mountPath: /etc/pki
        name: etc-pki
        readOnly: true
      - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
        name: flexvolume-dir
      - mountPath: /etc/kubernetes/pki
        name: k8s-certs
        readOnly: true
      - mountPath: /etc/kubernetes/controller-manager.conf
        name: kubeconfig
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seccompProfile:
        type: RuntimeDefault
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
    volumes:
    - hostPath:
        path: /etc/ssl/certs
        type: DirectoryOrCreate
      name: ca-certs
    - hostPath:
        path: /etc/pki
        type: DirectoryOrCreate
      name: etc-pki
    - hostPath:
        path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
        type: DirectoryOrCreate
      name: flexvolume-dir
    - hostPath:
        path: /etc/kubernetes/pki
        type: DirectoryOrCreate
      name: k8s-certs
    - hostPath:
        path: /etc/kubernetes/controller-manager.conf
        type: FileOrCreate
      name: kubeconfig
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:29:35Z"
      message: 'containers with unready status: [kube-controller-manager]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:29:35Z"
      message: 'containers with unready status: [kube-controller-manager]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://4046a542e78b5dd336586fe32276419eacad703128cc588323246e9742a3d7c8
      image: registry.k8s.io/kube-controller-manager:v1.28.2
      imageID: registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
      lastState:
        terminated:
          containerID: containerd://4046a542e78b5dd336586fe32276419eacad703128cc588323246e9742a3d7c8
          exitCode: 2
          finishedAt: "2023-09-28T17:29:34Z"
          reason: Error
          startedAt: "2023-09-28T17:29:19Z"
      name: kube-controller-manager
      ready: false
      restartCount: 147
      started: false
      state:
        waiting:
          message: back-off 40s restarting failed container=kube-controller-manager
            pod=kube-controller-manager-linux2_kube-system(c5fcfba55d727d51717e039befe2caf4)
          reason: CrashLoopBackOff
    hostIP: 192.168.0.127
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: Burstable
    startTime: "2023-09-28T17:25:51Z"
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2023-09-28T13:05:29Z"
    generateName: kube-proxy-
    labels:
      controller-revision-hash: 5cbdb8dcbd
      k8s-app: kube-proxy
      pod-template-generation: "1"
    name: kube-proxy-p7qc2
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: kube-proxy
      uid: ba1c5bb0-d38c-49d3-98bb-87cfad82ea96
    resourceVersion: "10408"
    uid: 9cb83842-8961-427a-8f0b-80b9dca5f501
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - linux2
    containers:
    - command:
      - /usr/local/bin/kube-proxy
      - --config=/var/lib/kube-proxy/config.conf
      - --hostname-override=$(NODE_NAME)
      env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      image: registry.k8s.io/kube-proxy:v1.28.2
      imagePullPolicy: IfNotPresent
      name: kube-proxy
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/lib/kube-proxy
        name: kube-proxy
      - mountPath: /run/xtables.lock
        name: xtables-lock
      - mountPath: /lib/modules
        name: lib-modules
        readOnly: true
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-5dt5w
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: linux2
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: kube-proxy
    serviceAccountName: kube-proxy
    terminationGracePeriodSeconds: 30
    tolerations:
    - operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/pid-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/unschedulable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/network-unavailable
      operator: Exists
    volumes:
    - configMap:
        defaultMode: 420
        name: kube-proxy
      name: kube-proxy
    - hostPath:
        path: /run/xtables.lock
        type: FileOrCreate
      name: xtables-lock
    - hostPath:
        path: /lib/modules
        type: ""
      name: lib-modules
    - name: kube-api-access-5dt5w
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:05:29Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:25Z"
      message: 'containers with unready status: [kube-proxy]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:25Z"
      message: 'containers with unready status: [kube-proxy]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:05:29Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://41a6c5e5d5cc58e01761420de3d9ab5294a614eb4aac3b66520bd2aca274f4d9
      image: registry.k8s.io/kube-proxy:v1.28.2
      imageID: registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
      lastState:
        terminated:
          containerID: containerd://41a6c5e5d5cc58e01761420de3d9ab5294a614eb4aac3b66520bd2aca274f4d9
          exitCode: 2
          finishedAt: "2023-09-28T17:26:24Z"
          reason: Error
          startedAt: "2023-09-28T17:25:42Z"
      name: kube-proxy
      ready: false
      restartCount: 90
      started: false
      state:
        waiting:
          message: back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-p7qc2_kube-system(9cb83842-8961-427a-8f0b-80b9dca5f501)
          reason: CrashLoopBackOff
    hostIP: 192.168.0.127
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: BestEffort
    startTime: "2023-09-28T13:05:29Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      kubernetes.io/config.hash: a2fe6507952c440d9f78631b15f1732d
      kubernetes.io/config.mirror: a2fe6507952c440d9f78631b15f1732d
      kubernetes.io/config.seen: "2023-09-28T09:04:35.903648910-04:00"
      kubernetes.io/config.source: file
    creationTimestamp: "2023-09-28T13:04:39Z"
    labels:
      component: kube-scheduler
      tier: control-plane
    name: kube-scheduler-linux2
    namespace: kube-system
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: linux2
      uid: 8853bae4-e74d-4df4-a99d-4152cdca2d05
    resourceVersion: "10317"
    uid: 030d387b-f8c3-42ca-b683-4b8194d61b49
  spec:
    containers:
    - command:
      - kube-scheduler
      - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
      - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
      - --bind-address=127.0.0.1
      - --kubeconfig=/etc/kubernetes/scheduler.conf
      - --leader-elect=true
      image: registry.k8s.io/kube-scheduler:v1.28.2
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10259
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      name: kube-scheduler
      resources:
        requests:
          cpu: 100m
      startupProbe:
        failureThreshold: 24
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10259
          scheme: HTTPS
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 15
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /etc/kubernetes/scheduler.conf
        name: kubeconfig
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    hostNetwork: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seccompProfile:
        type: RuntimeDefault
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      operator: Exists
    volumes:
    - hostPath:
        path: /etc/kubernetes/scheduler.conf
        type: FileOrCreate
      name: kubeconfig
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:29:14Z"
      message: 'containers with unready status: [kube-scheduler]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:29:14Z"
      message: 'containers with unready status: [kube-scheduler]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:25:51Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://441a237cd5138712b9062248f87c8a6fb5cf047377aa3d2b7a2ee555d765d2d4
      image: registry.k8s.io/kube-scheduler:v1.28.2
      imageID: registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
      lastState:
        terminated:
          containerID: containerd://14b59a4d913bc481b1eb7f7eeabfba0400f8420fdd28d41ebf8020b864522fba
          exitCode: 0
          finishedAt: "2023-09-28T17:29:13Z"
          reason: Completed
          startedAt: "2023-09-28T17:27:43Z"
      name: kube-scheduler
      ready: false
      restartCount: 215
      started: false
      state:
        running:
          startedAt: "2023-09-28T17:29:34Z"
    hostIP: 192.168.0.127
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: Burstable
    startTime: "2023-09-28T17:25:51Z"
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2023-09-28T13:15:36Z"
    generateName: weave-net-
    labels:
      controller-revision-hash: 5dbff4c97
      name: weave-net
      pod-template-generation: "1"
    name: weave-net-zs5kj
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: DaemonSet
      name: weave-net
      uid: d76aaa7b-f002-448d-8b0f-564c717d5801
    resourceVersion: "10046"
    uid: baa00e24-32dd-4a03-b4eb-4a46fa7670da
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchFields:
            - key: metadata.name
              operator: In
              values:
              - linux2
    containers:
    - command:
      - /home/weave/launch.sh
      env:
      - name: INIT_CONTAINER
        value: "true"
      - name: HOSTNAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      image: weaveworks/weave-kube:latest
      imagePullPolicy: Always
      name: weave
      readinessProbe:
        failureThreshold: 3
        httpGet:
          host: 127.0.0.1
          path: /status
          port: 6784
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        requests:
          cpu: 50m
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /weavedb
        name: weavedb
      - mountPath: /host/var/lib/dbus
        name: dbus
        readOnly: true
      - mountPath: /host/etc/machine-id
        name: cni-machine-id
        readOnly: true
      - mountPath: /run/xtables.lock
        name: xtables-lock
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-79gnl
        readOnly: true
    - env:
      - name: HOSTNAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      image: weaveworks/weave-npc:latest
      imagePullPolicy: Always
      name: weave-npc
      resources:
        requests:
          cpu: 50m
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /run/xtables.lock
        name: xtables-lock
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-79gnl
        readOnly: true
    dnsPolicy: ClusterFirstWithHostNet
    enableServiceLinks: true
    hostNetwork: true
    initContainers:
    - command:
      - /home/weave/init.sh
      image: weaveworks/weave-kube:latest
      imagePullPolicy: Always
      name: weave-init
      resources: {}
      securityContext:
        privileged: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /host/opt
        name: cni-bin
      - mountPath: /host/home
        name: cni-bin2
      - mountPath: /host/etc
        name: cni-conf
      - mountPath: /lib/modules
        name: lib-modules
      - mountPath: /run/xtables.lock
        name: xtables-lock
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-79gnl
        readOnly: true
    nodeName: linux2
    preemptionPolicy: PreemptLowerPriority
    priority: 2000001000
    priorityClassName: system-node-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seLinuxOptions: {}
    serviceAccount: weave-net
    serviceAccountName: weave-net
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoSchedule
      operator: Exists
    - effect: NoExecute
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/pid-pressure
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/unschedulable
      operator: Exists
    - effect: NoSchedule
      key: node.kubernetes.io/network-unavailable
      operator: Exists
    volumes:
    - hostPath:
        path: /var/lib/weave
        type: ""
      name: weavedb
    - hostPath:
        path: /opt
        type: ""
      name: cni-bin
    - hostPath:
        path: /home
        type: ""
      name: cni-bin2
    - hostPath:
        path: /etc
        type: ""
      name: cni-conf
    - hostPath:
        path: /etc/machine-id
        type: ""
      name: cni-machine-id
    - hostPath:
        path: /var/lib/dbus
        type: ""
      name: dbus
    - hostPath:
        path: /lib/modules
        type: ""
      name: lib-modules
    - hostPath:
        path: /run/xtables.lock
        type: FileOrCreate
      name: xtables-lock
    - name: kube-api-access-79gnl
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:26:32Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:24Z"
      message: 'containers with unready status: [weave weave-npc]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T17:26:24Z"
      message: 'containers with unready status: [weave weave-npc]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2023-09-28T13:15:36Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: containerd://f36725eec6a2ef0868a0ab69ef27cc6641f361a59aafc2fef1e6504392b8eb5c
      image: docker.io/weaveworks/weave-kube:latest
      imageID: docker.io/weaveworks/weave-kube@sha256:35827a9c549c095f0e9d1cf8b35d8f27ae2c76e31bc6f7f3c0bc95911d5accea
      lastState: {}
      name: weave
      ready: false
      restartCount: 56
      started: false
      state:
        terminated:
          containerID: containerd://f36725eec6a2ef0868a0ab69ef27cc6641f361a59aafc2fef1e6504392b8eb5c
          exitCode: 137
          finishedAt: "2023-09-28T17:26:55Z"
          reason: Error
          startedAt: "2023-09-28T17:25:28Z"
    - containerID: containerd://ddd1db61a9c46e9ae205df16fdf714d5baaddd4932a81a8878c3248473940efc
      image: docker.io/weaveworks/weave-npc:latest
      imageID: docker.io/weaveworks/weave-npc@sha256:062832fd25b5e9e16650e618f26bba1409a7b3bf2c3903e1b369d788abc63aef
      lastState:
        terminated:
          containerID: containerd://ddd1db61a9c46e9ae205df16fdf714d5baaddd4932a81a8878c3248473940efc
          exitCode: 1
          finishedAt: "2023-09-28T17:25:29Z"
          reason: Error
          startedAt: "2023-09-28T17:25:28Z"
      name: weave-npc
      ready: false
      restartCount: 52
      started: false
      state:
        waiting:
          message: services have not yet been read at least once, cannot construct
            envvars
          reason: CreateContainerConfigError
    hostIP: 192.168.0.127
    initContainerStatuses:
    - image: weaveworks/weave-kube:latest
      imageID: ""
      lastState:
        terminated:
          containerID: containerd://a07d3b5e8a867190a0fdf8df2c80f68332ba8f6af179fbcd0f903a1334c3fa0b
          exitCode: 0
          finishedAt: "2023-09-28T17:15:49Z"
          reason: Completed
          startedAt: "2023-09-28T17:15:48Z"
      name: weave-init
      ready: false
      restartCount: 1
      started: false
      state:
        waiting:
          message: services have not yet been read at least once, cannot construct
            envvars
          reason: CreateContainerConfigError
    phase: Running
    podIP: 192.168.0.127
    podIPs:
    - ip: 192.168.0.127
    qosClass: Burstable
    startTime: "2023-09-28T13:15:36Z"
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/port: "9153"
      prometheus.io/scrape: "true"
    creationTimestamp: "2023-09-28T13:04:41Z"
    labels:
      k8s-app: kube-dns
      kubernetes.io/cluster-service: "true"
      kubernetes.io/name: CoreDNS
    name: kube-dns
    namespace: kube-system
    resourceVersion: "221"
    uid: 23db1aef-4844-4d38-958b-73f6f862491b
  spec:
    clusterIP: 10.96.0.10
    clusterIPs:
    - 10.96.0.10
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: metrics
      port: 9153
      protocol: TCP
      targetPort: 9153
    selector:
      k8s-app: kube-dns
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    annotations:
      deprecated.daemonset.template.generation: "1"
    creationTimestamp: "2023-09-28T13:04:41Z"
    generation: 1
    labels:
      k8s-app: kube-proxy
    name: kube-proxy
    namespace: kube-system
    resourceVersion: "8195"
    uid: ba1c5bb0-d38c-49d3-98bb-87cfad82ea96
  spec:
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kube-proxy
    template:
      metadata:
        creationTimestamp: null
        labels:
          k8s-app: kube-proxy
      spec:
        containers:
        - command:
          - /usr/local/bin/kube-proxy
          - --config=/var/lib/kube-proxy/config.conf
          - --hostname-override=$(NODE_NAME)
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: registry.k8s.io/kube-proxy:v1.28.2
          imagePullPolicy: IfNotPresent
          name: kube-proxy
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/lib/kube-proxy
            name: kube-proxy
          - mountPath: /run/xtables.lock
            name: xtables-lock
          - mountPath: /lib/modules
            name: lib-modules
            readOnly: true
        dnsPolicy: ClusterFirst
        hostNetwork: true
        nodeSelector:
          kubernetes.io/os: linux
        priorityClassName: system-node-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: kube-proxy
        serviceAccountName: kube-proxy
        terminationGracePeriodSeconds: 30
        tolerations:
        - operator: Exists
        volumes:
        - configMap:
            defaultMode: 420
            name: kube-proxy
          name: kube-proxy
        - hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
          name: xtables-lock
        - hostPath:
            path: /lib/modules
            type: ""
          name: lib-modules
    updateStrategy:
      rollingUpdate:
        maxSurge: 0
        maxUnavailable: 1
      type: RollingUpdate
  status:
    currentNumberScheduled: 1
    desiredNumberScheduled: 1
    numberMisscheduled: 0
    numberReady: 0
    numberUnavailable: 1
    observedGeneration: 1
    updatedNumberScheduled: 1
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    annotations:
      deprecated.daemonset.template.generation: "1"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"name":"weave-net"},"name":"weave-net","namespace":"kube-system"},"spec":{"minReadySeconds":5,"selector":{"matchLabels":{"name":"weave-net"}},"template":{"metadata":{"labels":{"name":"weave-net"}},"spec":{"containers":[{"command":["/home/weave/launch.sh"],"env":[{"name":"INIT_CONTAINER","value":"true"},{"name":"HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}}],"image":"weaveworks/weave-kube:latest","imagePullPolicy":"Always","name":"weave","readinessProbe":{"httpGet":{"host":"127.0.0.1","path":"/status","port":6784}},"resources":{"requests":{"cpu":"50m"}},"securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/weavedb","name":"weavedb"},{"mountPath":"/host/var/lib/dbus","name":"dbus","readOnly":true},{"mountPath":"/host/etc/machine-id","name":"cni-machine-id","readOnly":true},{"mountPath":"/run/xtables.lock","name":"xtables-lock","readOnly":false}]},{"env":[{"name":"HOSTNAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}}],"image":"weaveworks/weave-npc:latest","imagePullPolicy":"Always","name":"weave-npc","resources":{"requests":{"cpu":"50m"}},"securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/run/xtables.lock","name":"xtables-lock","readOnly":false}]}],"dnsPolicy":"ClusterFirstWithHostNet","hostNetwork":true,"hostPID":false,"initContainers":[{"command":["/home/weave/init.sh"],"env":null,"image":"weaveworks/weave-kube:latest","imagePullPolicy":"Always","name":"weave-init","securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/host/opt","name":"cni-bin"},{"mountPath":"/host/home","name":"cni-bin2"},{"mountPath":"/host/etc","name":"cni-conf"},{"mountPath":"/lib/modules","name":"lib-modules"},{"mountPath":"/run/xtables.lock","name":"xtables-lock","readOnly":false}]}],"priorityClassName":"system-node-critical","restartPolicy":"Always","securityContext":{"seLinuxOptions":{}},"serviceAccountName":"weave-net","tolerations":[{"effect":"NoSchedule","operator":"Exists"},{"effect":"NoExecute","operator":"Exists"}],"volumes":[{"hostPath":{"path":"/var/lib/weave"},"name":"weavedb"},{"hostPath":{"path":"/opt"},"name":"cni-bin"},{"hostPath":{"path":"/home"},"name":"cni-bin2"},{"hostPath":{"path":"/etc"},"name":"cni-conf"},{"hostPath":{"path":"/etc/machine-id"},"name":"cni-machine-id"},{"hostPath":{"path":"/var/lib/dbus"},"name":"dbus"},{"hostPath":{"path":"/lib/modules"},"name":"lib-modules"},{"hostPath":{"path":"/run/xtables.lock","type":"FileOrCreate"},"name":"xtables-lock"}]}},"updateStrategy":{"type":"RollingUpdate"}}}
    creationTimestamp: "2023-09-28T13:10:50Z"
    generation: 1
    labels:
      name: weave-net
    name: weave-net
    namespace: kube-system
    resourceVersion: "8378"
    uid: d76aaa7b-f002-448d-8b0f-564c717d5801
  spec:
    minReadySeconds: 5
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        name: weave-net
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: weave-net
      spec:
        containers:
        - command:
          - /home/weave/launch.sh
          env:
          - name: INIT_CONTAINER
            value: "true"
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: weaveworks/weave-kube:latest
          imagePullPolicy: Always
          name: weave
          readinessProbe:
            failureThreshold: 3
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 6784
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 50m
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /weavedb
            name: weavedb
          - mountPath: /host/var/lib/dbus
            name: dbus
            readOnly: true
          - mountPath: /host/etc/machine-id
            name: cni-machine-id
            readOnly: true
          - mountPath: /run/xtables.lock
            name: xtables-lock
        - env:
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          image: weaveworks/weave-npc:latest
          imagePullPolicy: Always
          name: weave-npc
          resources:
            requests:
              cpu: 50m
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /run/xtables.lock
            name: xtables-lock
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        initContainers:
        - command:
          - /home/weave/init.sh
          image: weaveworks/weave-kube:latest
          imagePullPolicy: Always
          name: weave-init
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /host/opt
            name: cni-bin
          - mountPath: /host/home
            name: cni-bin2
          - mountPath: /host/etc
            name: cni-conf
          - mountPath: /lib/modules
            name: lib-modules
          - mountPath: /run/xtables.lock
            name: xtables-lock
        priorityClassName: system-node-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          seLinuxOptions: {}
        serviceAccount: weave-net
        serviceAccountName: weave-net
        terminationGracePeriodSeconds: 30
        tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
        volumes:
        - hostPath:
            path: /var/lib/weave
            type: ""
          name: weavedb
        - hostPath:
            path: /opt
            type: ""
          name: cni-bin
        - hostPath:
            path: /home
            type: ""
          name: cni-bin2
        - hostPath:
            path: /etc
            type: ""
          name: cni-conf
        - hostPath:
            path: /etc/machine-id
            type: ""
          name: cni-machine-id
        - hostPath:
            path: /var/lib/dbus
            type: ""
          name: dbus
        - hostPath:
            path: /lib/modules
            type: ""
          name: lib-modules
        - hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
          name: xtables-lock
    updateStrategy:
      rollingUpdate:
        maxSurge: 0
        maxUnavailable: 1
      type: RollingUpdate
  status:
    currentNumberScheduled: 1
    desiredNumberScheduled: 1
    numberMisscheduled: 0
    numberReady: 0
    numberUnavailable: 1
    observedGeneration: 1
    updatedNumberScheduled: 1
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2023-09-28T13:04:41Z"
    generation: 1
    labels:
      k8s-app: kube-dns
    name: coredns
    namespace: kube-system
    resourceVersion: "8572"
    uid: b6f55086-13cf-4abb-aefe-3736940f9d1d
  spec:
    progressDeadlineSeconds: 600
    replicas: 2
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        k8s-app: kube-dns
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          k8s-app: kube-dns
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                  - key: k8s-app
                    operator: In
                    values:
                    - kube-dns
                topologyKey: kubernetes.io/hostname
              weight: 100
        containers:
        - args:
          - -conf
          - /etc/coredns/Corefile
          image: registry.k8s.io/coredns/coredns:v1.10.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: coredns
          ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
          - containerPort: 9153
            name: metrics
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
              - NET_BIND_SERVICE
              drop:
              - all
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /etc/coredns
            name: config-volume
            readOnly: true
        dnsPolicy: Default
        nodeSelector:
          kubernetes.io/os: linux
        priorityClassName: system-cluster-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: coredns
        serviceAccountName: coredns
        terminationGracePeriodSeconds: 30
        tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
        volumes:
        - configMap:
            defaultMode: 420
            items:
            - key: Corefile
              path: Corefile
            name: coredns
          name: config-volume
  status:
    availableReplicas: 2
    conditions:
    - lastTransitionTime: "2023-09-28T13:27:20Z"
      lastUpdateTime: "2023-09-28T13:27:20Z"
      message: ReplicaSet "coredns-5dd5756b68" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    - lastTransitionTime: "2023-09-28T16:19:12Z"
      lastUpdateTime: "2023-09-28T16:19:12Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    observedGeneration: 1
    readyReplicas: 2
    replicas: 2
    updatedReplicas: 2
- apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    annotations:
      deployment.kubernetes.io/desired-replicas: "2"
      deployment.kubernetes.io/max-replicas: "3"
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2023-09-28T13:05:28Z"
    generation: 1
    labels:
      k8s-app: kube-dns
      pod-template-hash: 5dd5756b68
    name: coredns-5dd5756b68
    namespace: kube-system
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: coredns
      uid: b6f55086-13cf-4abb-aefe-3736940f9d1d
    resourceVersion: "8780"
    uid: 38bd5fb5-bfdc-48b4-a06f-7bf28ae75e5f
  spec:
    replicas: 2
    selector:
      matchLabels:
        k8s-app: kube-dns
        pod-template-hash: 5dd5756b68
    template:
      metadata:
        creationTimestamp: null
        labels:
          k8s-app: kube-dns
          pod-template-hash: 5dd5756b68
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                  - key: k8s-app
                    operator: In
                    values:
                    - kube-dns
                topologyKey: kubernetes.io/hostname
              weight: 100
        containers:
        - args:
          - -conf
          - /etc/coredns/Corefile
          image: registry.k8s.io/coredns/coredns:v1.10.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: coredns
          ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
          - containerPort: 9153
            name: metrics
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
              - NET_BIND_SERVICE
              drop:
              - all
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /etc/coredns
            name: config-volume
            readOnly: true
        dnsPolicy: Default
        nodeSelector:
          kubernetes.io/os: linux
        priorityClassName: system-cluster-critical
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: coredns
        serviceAccountName: coredns
        terminationGracePeriodSeconds: 30
        tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
        volumes:
        - configMap:
            defaultMode: 420
            items:
            - key: Corefile
              path: Corefile
            name: coredns
          name: config-volume
  status:
    fullyLabeledReplicas: 2
    observedGeneration: 1
    replicas: 2
kind: List
metadata:
  resourceVersion: ""
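
For anyone reading this far: the telltale line in the dump above is the repeating CreateContainerConfigError with "services have not yet been read at least once, cannot construct envvars". As far as I can tell, that is a kubelet message meaning it has never completed a single successful list of Services from the API server (which was itself flapping), so the kubelet/runtime layer is the suspect rather than the CNI manifest. A few checks that help narrow this down (a minimal sketch, assuming containerd as the CRI, as in this setup):

journalctl -u kubelet -f              # watch the kubelet for cgroup/CRI errors
journalctl -u containerd -f           # watch the runtime side at the same time
crictl info | grep -i SystemdCgroup   # see which cgroup driver runc is actually using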

I was finally able to solve it. The problem was the formatting of /etc/containerd/config.toml, which is now corrected:

version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
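
If you would rather not hand-edit the file, a cleaner route (a minimal sketch, assuming a stock containerd 1.6 package where only SystemdCgroup needs flipping) is to regenerate the default config, toggle the one setting, and restart the runtime and kubelet:

containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd kubelet

SystemdCgroup = true is the key line: RHEL 9 runs systemd as the cgroup v2 manager, and kubeadm has defaulted the kubelet to the systemd cgroup driver since v1.22, so leaving containerd's runc on the cgroupfs driver means two cgroup managers fight over the same pods and they crash-loop exactly as shown above. After the restart, kubectl get pods -n kube-system -w should show the restart counts stop climbing.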
