API server is showing "Liveness probe failed: HTTP probe failed with statuscode: 500" when describing the pod

Cluster information:

Kubernetes version: 1.26.9
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu
CNI and version: flannel - 0.3.1
CRI and version: containerd - 1.6.21

Hi Team,
I’ve just upgraded my k8s cluster from v1.25.4 to v1.26.9 using the commands mentioned here.

Everything (applications, pods, etc.) is working fine, except for the master node components. Their pods are running, but they keep showing:
“Startup probe failed with status code 403”
“Readiness probe failed with status code 500”
“Liveness probe failed with status code 500”

etcd is also not returning any member list.
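
For reference, this is roughly how I have been trying to query the member list from the static etcd pod (the node name is just a placeholder for one of my control plane nodes, and the certificate paths assume kubeadm defaults):

# Assumes a kubeadm-style static etcd pod named etcd-<node-name> with the default cert locations
kubectl -n kube-system exec etcd-<node-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list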

When I checked the apiserver health endpoint directly, it returns everything as ok:

curl -k https://localhost:6443/readyz?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]etcd-readiness ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]shutdown ok
readyz check passed
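
Since readyz on localhost looks fine, my next step is to hit the exact endpoint the kubelet probes. As far as I understand, the kubeadm-generated static pod manifest points the probes at the advertise address and /livez rather than localhost, so I am checking that too; <advertise-address> below is a placeholder for whatever host: value the manifest shows:

# Show the probe host/path/port defined in the static pod manifest (kubeadm default path assumed)
grep -A7 -E 'startupProbe|livenessProbe|readinessProbe' /etc/kubernetes/manifests/kube-apiserver.yaml

# Repeat the check against that host; with ?verbose a 500 response lists which individual check failed
curl -k "https://<advertise-address>:6443/livez?verbose"
curl -k "https://<advertise-address>:6443/readyz?verbose"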

Could someone please help me fix this issue?
Please let me know if any additional information is required from my end.

Best,
Ankit Kaushik

Same problem here; the control plane pods keep restarting.

  • k8s v1.27.6
  • calico v3.26.3
  • containerd v1.7.7
root@master2:~# kubectl events -n kube-system
LAST SEEN                  TYPE      REASON           OBJECT                                               MESSAGE
60m                        Normal    LeaderElection   Lease/kube-controller-manager                        master3.k8sCluster.com_8e9ed7b0-3695-4e33-ae88-7de812976aad became leader
60m                        Normal    LeaderElection   Lease/kube-scheduler                                 master3.k8sCluster.com_06f2ad9d-47e0-45b4-9a90-b6810f761b64 became leader
60m (x66 over 2d1h)        Warning   Unhealthy        Pod/kube-scheduler-master1.k8scluster.com            Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
60m (x55 over 2d2h)        Normal    Started          Pod/kube-scheduler-master1.k8scluster.com            Started container kube-scheduler
60m (x55 over 2d2h)        Normal    Created          Pod/kube-scheduler-master1.k8scluster.com            Created container kube-scheduler
60m (x55 over 2d2h)        Normal    Pulled           Pod/kube-scheduler-master1.k8scluster.com            Container image "registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.6" already present on machine
38m (x110 over 2d16h)      Warning   Unhealthy        Pod/kube-controller-manager-master3.k8scluster.com   Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
38m (x74 over 2d16h)       Warning   Unhealthy        Pod/kube-scheduler-master3.k8scluster.com            Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
38m                        Normal    LeaderElection   Lease/kube-controller-manager                        master2.k8sCluster.com_0004d72a-599f-47f9-af27-fc9448b076fb became leader
38m (x63 over 2d16h)       Normal    Created          Pod/kube-scheduler-master3.k8scluster.com            Created container kube-scheduler
38m (x66 over 2d16h)       Normal    Pulled           Pod/kube-controller-manager-master3.k8scluster.com   Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.6" already present on machine
38m (x66 over 2d16h)       Normal    Created          Pod/kube-controller-manager-master3.k8scluster.com   Created container kube-controller-manager
38m (x66 over 2d16h)       Normal    Started          Pod/kube-controller-manager-master3.k8scluster.com   Started container kube-controller-manager
38m (x63 over 2d16h)       Normal    Started          Pod/kube-scheduler-master3.k8scluster.com            Started container kube-scheduler
38m (x63 over 2d16h)       Normal    Pulled           Pod/kube-scheduler-master3.k8scluster.com            Container image "registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.6" already present on machine
38m                        Normal    LeaderElection   Lease/kube-scheduler                                 master2.k8sCluster.com_92ebc9b2-f32b-4d93-9fb6-8fb59af74de8 became leader
16m (x450 over 2d17h)      Warning   Unhealthy        Pod/kube-apiserver-master2.k8scluster.com            Liveness probe failed: HTTP probe failed with statuscode: 500
16m (x350 over 2d2h)       Warning   Unhealthy        Pod/kube-apiserver-master1.k8scluster.com            Liveness probe failed: HTTP probe failed with statuscode: 500
16m (x439 over 2d17h)      Warning   Unhealthy        Pod/kube-apiserver-master3.k8scluster.com            Liveness probe failed: HTTP probe failed with statuscode: 500
16m (x1356 over 2d17h)     Warning   Unhealthy        Pod/kube-apiserver-master2.k8scluster.com            Readiness probe failed: HTTP probe failed with statuscode: 500
16m (x72 over 2d15h)       Warning   Unhealthy        Pod/kube-scheduler-master2.k8scluster.com            Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
16m (x69 over 2d16h)       Normal    Started          Pod/kube-controller-manager-master2.k8scluster.com   Started container kube-controller-manager
16m (x63 over 2d16h)       Normal    Created          Pod/kube-scheduler-master2.k8scluster.com            Created container kube-scheduler
16m (x63 over 2d16h)       Normal    Started          Pod/kube-scheduler-master2.k8scluster.com            Started container kube-scheduler
16m (x63 over 2d16h)       Normal    Pulled           Pod/kube-scheduler-master2.k8scluster.com            Container image "registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.6" already present on machine
16m (x122 over 2d16h)      Warning   Unhealthy        Pod/kube-controller-manager-master2.k8scluster.com   Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
16m (x69 over 2d16h)       Normal    Created          Pod/kube-controller-manager-master2.k8scluster.com   Created container kube-controller-manager
16m (x69 over 2d16h)       Normal    Pulled           Pod/kube-controller-manager-master2.k8scluster.com   Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.6" already present on machine
16m                        Normal    LeaderElection   Lease/kube-scheduler                                 master3.k8sCluster.com_a53f4dc6-7a72-48f8-a813-b42101bfc6f3 became leader
16m                        Normal    LeaderElection   Lease/kube-controller-manager                        master1.k8sCluster.com_03fc3b0f-c001-4525-b255-1ddf69ea60fb became leader
9m19s (x1059 over 2d2h)    Warning   Unhealthy        Pod/kube-apiserver-master1.k8scluster.com            Readiness probe failed: HTTP probe failed with statuscode: 500
9m19s (x1397 over 2d17h)   Warning   Unhealthy        Pod/kube-apiserver-master3.k8scluster.com            Readiness probe failed: HTTP probe failed with statuscode: 500
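
In case it helps narrow things down, these are roughly the checks I am running on the affected control plane nodes (kubeadm defaults assumed; the pod name comes from the events above, adjust to your setup):

# Are the components actually listening on the ports the probes hit? (ss is from iproute2)
ss -ltnp | grep -E '10257|10259|6443'

# Hit the same health endpoints the kubelet probes
curl -k https://127.0.0.1:10257/healthz        # kube-controller-manager
curl -k https://127.0.0.1:10259/healthz        # kube-scheduler
curl -k "https://127.0.0.1:6443/livez?verbose" # kube-apiserver; verbose shows which check returns 500

# Logs from the previous (restarted) container instance, plus container state on the node
kubectl -n kube-system logs kube-apiserver-master2.k8scluster.com --previous | tail -n 50
crictl ps -a | grep -E 'kube-apiserver|kube-scheduler|kube-controller-manager'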