Cert renewal process failing

My cluster (details below) has 3 masters and 5 workers, all running 1.15.3. My certs recently expired, so I ran `kubeadm alpha certs check-expiration` to confirm, then renewed them with `kubeadm alpha certs renew all`. I ran the check again and everything looked fine, with expirations pushed out to next year. But today when I came back to the cluster I could not run `kubectl get nodes`, etc. I followed what I thought was a simple process, the manual section of https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/. I even found https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired/56334732#56334732 and thought it would help, but it didn't. I'm lost here and my cluster is offline.
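For reference, here is roughly the sequence I ran on the control-plane nodes (the exact error from `kubectl` varies, so I'm not pasting one here):

```shell
# On each control-plane node, kubeadm 1.15.x:
sudo kubeadm alpha certs check-expiration   # confirm which certificates have expired
sudo kubeadm alpha certs renew all          # renew all kubeadm-managed certificates
sudo kubeadm alpha certs check-expiration   # expirations now show ~1 year out

# The next day, client commands stopped working:
kubectl get nodes                           # fails against the cluster
```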

Cluster information:

Kubernetes version: 1.15.3
Cloud being used: AWS
Installation method: kubeadm with a custom config.yaml
Host OS: Ubuntu 16.04
CNI and version: quay.io/coreos/flannel:v0.10.0-amd64
CRI and version: docker 18.09.7 build 2d0083d

config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "internal-cluster3.us-east-1.elb.amazonaws.com:6443"
imageRepository: artifactory.mycompany.com/docker/k8s/k8s.gcr.io
feature-gates: "CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"CSINodeInfo": true,
"CSIDriverRegistry": true,
"CSIBlockVolume": true
```

I found this and am wondering: instead of running `renew all`, do I need to renew them one by one?
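To make the question concrete, this is what one-by-one renewal would look like, as far as I can tell from `kubeadm alpha certs renew --help` on 1.15. The subcommand list below is my reading of the docs, not something I have verified on this cluster:

```shell
# Hypothetical per-certificate renewal on a 1.15 control-plane node;
# subcommand names taken from the kubeadm 1.15 docs, unverified here.
sudo kubeadm alpha certs renew apiserver
sudo kubeadm alpha certs renew apiserver-kubelet-client
sudo kubeadm alpha certs renew front-proxy-client
sudo kubeadm alpha certs renew etcd-server
sudo kubeadm alpha certs renew etcd-peer
sudo kubeadm alpha certs renew etcd-healthcheck-client
sudo kubeadm alpha certs renew apiserver-etcd-client

# Certificates embedded in the kubeconfig files:
sudo kubeadm alpha certs renew admin.conf
sudo kubeadm alpha certs renew controller-manager.conf
sudo kubeadm alpha certs renew scheduler.conf
```

Is there any practical difference between this and `renew all`, or does `renew all` cover exactly the same set?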