Issue: Certificates Renewed, But Kubernetes Pods Not Recreated

What I’m Facing:
I recently renewed all Kubernetes certificates across my cluster using kubeadm, and now pods are not being recreated, even though their controllers (like StatefulSets or custom resources) still exist. Critical workloads like the Elastic Operator and Kibana are not coming up. It seems like the API server or kubelet is not re-triggering pod scheduling after cert renewal.

Cluster Setup:

| Component | Info |
| --- | --- |
| Kubernetes | v1.19 (kubeadm setup) |
| Nodes | 3 masters, 3 etcd, 2 HAProxy, 2 workers |
| Cloud | Bare metal / on-prem |
| OS | Linux |
| CNI plugin | Calico |
| Container runtime | containerd |

What I Did:

Certificates were expired, so I renewed them on all master nodes:

```
sudo kubeadm certs renew all
```

Restarted kubelet on all nodes:

```
sudo systemctl restart kubelet
```

Verified etcd health; all endpoints report healthy:

```
etcdctl endpoint health
```

Deleted and reapplied the Kibana CR (es-kib) using kubectl apply -f kibana.yml. The status shows green and the association is Established, but no Kibana pods are created.

The Elastic Operator StatefulSet is present, but no pods are created even after reapplying eck-operator.yaml.
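
One step I'm unsure about: from what I've read, the control-plane components run as static pods and only load their certificates at startup, so renewing certs alone may not be enough. This is a sketch of the extra restart sequence I've seen suggested and am planning to run on each master (paths assume the default kubeadm layout):

```
# Restart the static control-plane pods (kube-apiserver, controller-manager,
# scheduler) by moving their manifests out of the kubelet's watch directory
# and back again after kubelet has torn the pods down.
sudo mv /etc/kubernetes/manifests /etc/kubernetes/manifests.off
sleep 30   # give kubelet time to stop the static pods
sudo mv /etc/kubernetes/manifests.off /etc/kubernetes/manifests

# kubeadm also rewrites the client certs embedded in admin.conf, so refresh
# the copy that kubectl actually uses.
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
```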
Observations & Troubleshooting:

• kubectl get pods in all relevant namespaces (default, elastic-system, etc.) shows no pods related to Kibana or the Elastic Operator.
• kubectl get statefulsets -n elastic-system shows:

```
NAME               READY   AGE
elastic-operator   1/1     421d
```

but kubectl get pods -n elastic-system returns:

```
No resources found in elastic-system namespace.
```

• The ownerReference on the elastic-operator-0 pod was missing before it disappeared.
• Kibana CR status (but no actual pod is running for it):

```
status:
  associationStatus: Established
  health: green
  count: 1
  availableNodes: 1
```

• kubectl get crds | grep elastic confirms all required CRDs are present.
• kubectl get events -n elastic-system doesn't show any clear pod-scheduling issues.
• Resource usage on the nodes is normal; there are no taints or node pressure.
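
Since StatefulSet pods are created by the controller inside kube-controller-manager, I also want to verify the control plane really picked up the renewed certs. A sketch of the checks I'm running (the node name is a placeholder):

```
# Confirm every certificate now shows a fresh expiry date
sudo kubeadm certs check-expiration

# Confirm the control-plane pods are up and were recently restarted
kubectl -n kube-system get pods -o wide

# Look for TLS/x509 errors in the controller-manager, which owns
# StatefulSet pod creation (replace <master-node> with a real node name)
kubectl -n kube-system logs kube-controller-manager-<master-node> --tail=50
```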

What I Need Help With:

1. Why aren't the pods being recreated after the certificate renewal and kubelet restarts?
2. Is there an additional step required after cert renewal before the controllers (kubelet / controller-manager / operator) will recognize and recreate pods?
3. How can I force StatefulSets or custom resources to reconcile and recreate pods without wiping cluster state? (See the sketch further down, after the additional info.)
4. Could this be a webhook-validation issue, or the Elastic Operator itself failing because of stale TLS certs?
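
On question 4: in case the ECK validating webhook is serving with a stale cert or CA bundle, I'm inspecting its configuration like this (the webhook name is the default my ECK version installs; yours may differ):

```
# List admission webhooks and inspect the one ECK installs
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration elastic-webhook.k8s.elastic.co -o yaml

# Decode the CA bundle and check its validity window against the renewed certs
kubectl get validatingwebhookconfiguration elastic-webhook.k8s.elastic.co \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d \
  | openssl x509 -noout -dates
```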
Additional Info:
• Reapplying the eck-operator.yaml file updates the StatefulSet, but still no pod is created.
• The secret elastic-webhook-server-cert still exists and seems valid.
• Restarting kubelet didn’t help.
• The operator is managed only by a StatefulSet, not by a Deployment.
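
On question 3, these are the reconciliation nudges I know of and can still try (a sketch using my operator's StatefulSet; neither should touch persisted cluster state):

```
# Force the statefulset controller to recreate the operator pod by scaling
# it to zero and back up
kubectl -n elastic-system scale statefulset elastic-operator --replicas=0
kubectl -n elastic-system scale statefulset elastic-operator --replicas=1

# Alternatively, trigger a rolling restart (kubectl >= 1.15)
kubectl -n elastic-system rollout restart statefulset elastic-operator
```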

Would Appreciate Help:
I’ve hit a dead end and would appreciate any guidance on:
• Proper sequence after renewing Kubernetes certs
• Ensuring CRDs/controllers (like ECK) pick up the changes
• Forcing pod recreation cleanly
Let me know if any logs or further output would help!

Hi, did you find a solution? I'm in the same boat: although I renew the certificates, the cluster still uses the expired ones.

Check your kubeconfig file. kubeadm certs renew all also regenerates the client certificates embedded in /etc/kubernetes/admin.conf, but a copy sitting in ~/.kube/config keeps the old, expired certificate until you replace it.
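
For example, you can decode the client certificate embedded in the kubeconfig and check its validity window (a sketch; assumes the cert is embedded as base64 rather than referenced by file path):

```
# Print the expiry of the client cert inside ~/.kube/config
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -dates

# If it is expired, copy the regenerated admin kubeconfig over it
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
```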