Unable to initialize kubeadm HA cluster

Cluster information:

Kubernetes version: 1.28
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 22.04 LTS
CNI and version: flannel v0.24.4
CRI and version: containerd v1.6.28

Issue

Hello, I am failing to initialize a bare-metal kubeadm deployment on Proxmox VE. I have an LXC container running Nginx Proxy Manager as a front end for my kubeadm-created API servers/control planes. During initialization I get the following error:
Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

However, running sudo journalctl -xeu kubelet shows the following error: "https://argocd-kubeadm.kmartinez.net:443/api/v1/namespaces/default/events": tls: failed to verify certificate: x509: certificate signed by unknown authority (may retry after sleeping)

I have spent hours configuring and verifying that systemd is being used as the cgroup driver, and everything looks good, but I can't pinpoint where this x509 error is coming from.
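For reference, one way to see which certificate the control-plane endpoint is actually presenting (using my endpoint name, and assuming the CA is at the default kubeadm path on the node) is roughly:

# What certificate does the control-plane endpoint actually serve?
openssl s_client -connect argocd-kubeadm.kmartinez.net:443 -servername argocd-kubeadm.kmartinez.net </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

# Compare with the cluster CA that kubeadm generates on the node
openssl x509 -noout -issuer -subject -in /etc/kubernetes/pki/ca.crt

If the issuer from the first command does not match the cluster CA, the proxy is serving its own certificate rather than passing the API server's through.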

Any help would be greatly appreciated.

Init config:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.70.133
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: kubeadm-argocd-controller-1
  taints: null
certificateKey: /etc/kubernetes/pki/apiserver.key
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubeadm-homelab-cluster
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
controlPlaneEndpoint: argocd-kubeadm.kmartinez.net:443
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.argocd.kmartinez.net
  podSubnet: 10.244.0.0/16
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
#cgroupDriver: cgroupfs
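For completeness, this is roughly how I run kubeadm with the config above (kubeadm-config.yaml is just the name I saved the file under; the extra flags are optional):

# Initialize the first control plane from the config above
# (--upload-certs shares the certificates so additional control-plane
#  nodes can join; --v=5 prints a fuller trace on failure)
sudo kubeadm init --config kubeadm-config.yaml --upload-certs --v=5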

This has been solved. My issue was the load balancer (Nginx Proxy Manager), which as far as I can tell was terminating TLS with its own certificate rather than passing connections straight through to the API servers, hence the x509 "unknown authority" error. I switched to HAProxy in TCP mode with the following config and now it works.

frontend kubernetes-frontend
  bind *:6443
  mode tcp
  option tcplog
  timeout client 10s
  default_backend kubernetes-backend

backend kubernetes-backend
  timeout connect 10s
  timeout server 10s
  mode tcp
  option tcp-check
  balance roundrobin

  server argocd-kubeadm-controller-1 192.168.70.133:6443 check
  server argocd-kubeadm-controller-2 192.168.70.52:6443 check


frontend nodeport-frontend
  bind *:30000-35000
  mode tcp
  option tcplog
  timeout client 10s
  default_backend nodeport-backend

backend nodeport-backend
  mode tcp
  timeout connect 10s
  timeout server 10s
  balance roundrobin

  server nodeport-0 192.168.70.133
  server nodeport-1 192.168.70.52
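
With HAProxy doing plain TCP passthrough, clients now see the kube-apiserver's own certificate (signed by the cluster CA). A quick sanity check against the frontend port from the config above, assuming the default kubeadm anonymous access to /version and the standard admin.conf location:

# Hit the API server through the load balancer (frontend port from the config above)
curl -k https://argocd-kubeadm.kmartinez.net:6443/version

# Or, from a control plane node, confirm the nodes registered
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes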