[Kubernetes Troubleshooting] pods running in k8s cluster lost connectivity to services outside of the cluster

We have a small Kubernetes cluster (6 agents) that was built with aks-engine in Azure on our Ubuntu 18.04 VMs.
At some point during the week, kubelet suddenly crashed on 2 of the nodes, which made those nodes NotReady. The problem we faced was that the whole Kubernetes networking layer became unavailable: our applications running on the other, healthy nodes could no longer connect out of the cluster to the database service.
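For anyone hitting something similar, the node state is visible with standard kubectl commands (the node name below is illustrative, following the aks-engine naming pattern):

```
# List nodes with their Ready/NotReady status, IPs, and versions
kubectl get nodes -o wide

# Inspect the status conditions and recent events on an affected node
# (replace k8s-agentpool1-12345678-1 with the actual node name)
kubectl describe node k8s-agentpool1-12345678-1
```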
I checked the kubelet and etcd logs, but I couldn't find any reason why nodes other than the two that crashed were affected by them being down. Once those nodes were recovered, connectivity came back and everything started working again.
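For reference, this is roughly how the logs can be gathered; I'm assuming kubelet runs as a systemd unit (the aks-engine default), and the pod names in kube-system vary per cluster, so adjust accordingly:

```
# kubelet logs from the node itself (systemd-managed on aks-engine)
journalctl -u kubelet --since "1 hour ago" --no-pager

# Find the control-plane and networking pods; kube-proxy programs the
# iptables rules that pods rely on to reach services, so its logs are
# worth checking alongside etcd's
kubectl get pods -n kube-system
kubectl logs -n kube-system <etcd-or-kube-proxy-pod-name>
```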

Any suggestions on what to look at are welcome :slight_smile:

Cluster information:

Kubernetes version: Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:18:00Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Cloud being used: Azure
Installation method: aks-engine used to build the cluster
Host OS: Ubuntu 18.04.4 LTS
CNI and version: kubenet
CRI and version: Docker version 3.0.13+azure, build dd360c7c0de8d9132a3965db6a59d3ae74f43ba7