VMs were recreated, which impacted all pods running on them

We have a project on Google Kubernetes Engine (GKE) with an NGINX ingress controller routing traffic to our application containers. We received an application-down notification by email, and when we checked the Cloud Console we saw that all of our pods had been redeployed. When we checked VM uptime, all of the VMs had been recreated, which caused the outage.
We checked the cluster logs and the pod logs but didn't find any useful details, and when we asked Google support they simply said that we don't have a support plan, so they can't help.
Now I want to find out why this happened and how we can investigate issues like this.


Cluster information:

Kubernetes version: 1.17.17-gke.2800
Cloud being used: Google Cloud (GKE)
Installation method: GKE (managed cluster)
Host OS: default OS
CNI and version:
CRI and version:


Is there anything in the Kubernetes Event logs?
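For example, assuming you still have kubectl access to the cluster, a quick sketch for pulling recent events (note that events are only retained for a short window, one hour by default, so they may already have aged out):

```sh
# List recent events across all namespaces, oldest first
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

# Inspect a specific node's conditions and events (placeholder name)
kubectl describe node <node-name>
```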

VMs can be recreated for various reasons. If all of your VMs were recreated at around the same time, it sounds like a cluster upgrade (assuming node auto-upgrade is enabled): a node pool upgrade drains and recreates every node in the pool.
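One way to confirm: GKE records upgrades as cluster operations. A sketch with placeholder cluster and zone names:

```sh
# Upgrades show up as UPGRADE_MASTER / UPGRADE_NODES operations
gcloud container operations list --zone us-central1-a

# Check whether auto-upgrade is enabled on the node pool
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format="value(management.autoUpgrade)"
```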

Make sure your apps have a PodDisruptionBudget, which helps GKE pace the upgrade so some replicas stay available while nodes drain. Also look into configuring a maintenance window, which tells GKE to avoid your most critical hours.
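A minimal PDB sketch, assuming your pods carry an `app: my-app` label (adjust the selector and the replica math to your Deployment):

```yaml
apiVersion: policy/v1beta1   # use policy/v1 on Kubernetes 1.21+
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1            # keep at least one replica up during node drains
  selector:
    matchLabels:
      app: my-app            # placeholder; must match your pod labels
```

And a recurring maintenance window (placeholder cluster, zone, and times):

```sh
gcloud container clusters update my-cluster --zone us-central1-a \
  --maintenance-window-start "2021-01-01T03:00:00Z" \
  --maintenance-window-end "2021-01-01T07:00:00Z" \
  --maintenance-window-recurrence "FREQ=WEEKLY;BYDAY=SA,SU"
```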