A single NotReady master breaks all traffic in the cluster

Howdy! Just wanting to get some advice.
We are currently running a 52-node cluster with 3 masters, all on AWS EC2 instances built and managed with Kops, and we have recently run into an issue we can't fully explain. Over the past few days, one of our master nodes has been going NotReady because kube-apiserver was under high load, which in turn made the kubelet unresponsive. We are rectifying that by scaling the instance vertically.

The more troubling issue is that whenever this one master goes NotReady, routing on the other masters and worker nodes starts failing too. All of our services begin returning 503s, and after investigation it turns out every load balancer attached to the cluster reports all instances as unhealthy, causing downtime for every app in the cluster. Because of this one node, all routing in the cluster appears to be broken. We also noticed that applications inside the cluster can't reach external services such as Redis.

What confuses us is this: if one kube-apiserver becomes unreachable (the one on the NotReady master), why does that cause routing issues for every other node, service, and ingress in the cluster? We expected the cluster to tolerate a single dead master and auto-recover without such a large outage.

Cluster information:

Kubernetes version: 1.26
Cloud being used: AWS (EC2)
Installation method: Kops
Host OS: Linux
CNI and version: Calico 3.23.5