Kubeadm showing errors about reading configuration from the cluster and missing etcd certificates

I have 3 master-worker nodes in the cluster. On one of them, kubeadm shows errors when checking for expired certificates:

# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                         EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                          Jul 31, 2025 11:44 UTC   343d            ca                      no      
apiserver                           Jul 31, 2025 11:44 UTC   343d            ca                      no      
!MISSING! apiserver-etcd-client                                                                      
apiserver-kubelet-client            Jul 31, 2025 11:44 UTC   343d            ca                      no      
controller-manager.conf             Jul 31, 2025 11:44 UTC   343d            ca                      no      
!MISSING! etcd-healthcheck-client                                                                    
!MISSING! etcd-peer                                                                                  
!MISSING! etcd-server                                                                                
front-proxy-client                  Jul 31, 2025 11:44 UTC   343d            front-proxy-ca          no      
scheduler.conf                      Jul 31, 2025 11:44 UTC   343d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 27, 2032 16:30 UTC   7y              no      
!MISSING! etcd-ca                                                
front-proxy-ca          May 27, 2032 16:30 UTC   7y              no   
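
The FYI line points at the kubeadm-config ConfigMap, and since kubectl works on this node, it can be dumped with the same admin credentials kubeadm uses. As far as I understand, kubeadm 1.22 also reads the kubelet-config ConfigMap and the node's cri-socket annotation when loading the configuration, so those may be worth checking too (the versioned ConfigMap name is my assumption):

# kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get cm kubeadm-config kubelet-config-1.22 -o yaml
# kubectl --kubeconfig /etc/kubernetes/admin.conf get node $(hostname) -o jsonpath='{.metadata.annotations}'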

On the other nodes the output is different:

# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0822 13:04:00.381912 2829476 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [A.B.C.D]; the provided value is: [E.F.G.H]

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 31, 2025 11:43 UTC   343d            ca                      no      
apiserver                  Jul 31, 2025 11:43 UTC   343d            ca                      no      
apiserver-kubelet-client   Jul 31, 2025 11:43 UTC   343d            ca                      no      
controller-manager.conf    Jul 31, 2025 11:43 UTC   343d            ca                      no      
front-proxy-client         Jul 31, 2025 11:43 UTC   343d            front-proxy-ca          no      
scheduler.conf             Jul 31, 2025 11:43 UTC   343d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 27, 2032 16:30 UTC   7y              no      
front-proxy-ca          May 27, 2032 16:30 UTC   7y              no      
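
If I read this right, kubespray runs etcd as an external service with its certs under /etc/ssl/etcd/ssl/, so on the nodes where the ClusterConfiguration is read successfully, kubeadm knows etcd is externally managed and simply doesn't list its certificates. On the sus node, the fallback to the default configuration presumably assumes stacked etcd with certs under /etc/kubernetes/pki/etcd/, which would explain the !MISSING! lines. A quick way to check that theory (the kubeadm-config.yaml path is what kubespray uses by default, as far as I know; yours may differ):

# ls -l /etc/kubernetes/pki/etcd/    # expected to be absent on every node if etcd is external
# grep -A5 'etcd:' /etc/kubernetes/kubeadm-config.yaml
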
  • I’ve checked /etc/etcd.env; all the certs it references are in place, and the contents of /etc/ssl/etcd/ssl/ are identical to those on the other nodes (see the openssl check after this list)
  • kubectl works normally on the sus node
  • no problems with pod creation/deletion or other activities on that node (I even managed to renew the cluster certificates), but I’m still afraid it’s gonna bite me in the REDACTED if I, say, decide to upgrade the cluster
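
For the cert comparison mentioned above, comparing expiry dates and fingerprints is less error-prone than eyeballing file contents (the file layout below is kubespray's default and may differ):

# for f in /etc/ssl/etcd/ssl/*.pem; do echo "== $f"; openssl x509 -in "$f" -noout -enddate -fingerprint 2>/dev/null; done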

What can I do to debug this issue?
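
Would bumping kubeadm's log verbosity expose the actual error behind the fallback, e.g.:

# kubeadm certs check-expiration --v=5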

Cluster information:

Kubernetes version: 1.22
Cloud being used: bare-metal
Installation method: kubespray
Host OS: Debian 11
CNI and version: cilium 0.3.1
CRI and version: containerd 1.5.8