Cluster information:
Kubernetes version: v1.27.7
Cloud being used: Azure
Installation method: Helm
Host OS: Windows Server 2019
I have a Kubernetes deployment where my application container runs on a Windows node. There are multiple pods on the node, but no resource requests/limits are in place. kubectl top shows the node consistently at around 30-35% memory usage, and I am unsure why the pods are being restarted periodically. There aren't any clues in the event history, and describing the pod shows:
State:          Running
  Started:      Thu, 18 Jul 2024 10:22:45 -0400
Last State:     Terminated
  Reason:       Error
  Exit Code:    -1073741571
  Started:      Thu, 18 Jul 2024 10:19:41 -0400
  Finished:     Thu, 18 Jul 2024 10:22:18 -0400
Ready:          True
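The only thing I have worked out about that exit code so far is converting it from signed decimal to hex, on the assumption that it is the signed 32-bit representation of a Windows status code (quick Python sketch):

```python
# Exit code reported by "kubectl describe pod" (signed 32-bit form)
exit_code = -1073741571

# Reinterpret the signed value as an unsigned 32-bit integer,
# the form Windows NTSTATUS codes are normally written in.
unsigned = exit_code & 0xFFFFFFFF

print(hex(unsigned))  # 0xc00000fd
```

That gives 0xC00000FD, which looks like an NTSTATUS value, but I haven't been able to confirm what it actually means for this container.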
Output of kubectl top nodes, for reference:

NAME                                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
aks-systemnode-21772420-vmss000000   325m         8%     8082Mi          64%
aksscale000000                       32m          0%     3150Mi          32%
I am unsure what to make of the exit code and would appreciate any pointers on how to find the definitive reason for these pod restarts.
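In case it's relevant, this is roughly how I've been polling the last terminated state between restarts rather than re-running kubectl describe each time (a sketch using the official kubernetes Python client; the namespace and pod names are placeholders):

```python
from kubernetes import client, config

# Placeholder names; substitute the real namespace/pod.
NAMESPACE = "default"
POD_NAME = "my-app-pod"

config.load_kube_config()  # uses the same kubeconfig as kubectl
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name=POD_NAME, namespace=NAMESPACE)
for cs in pod.status.container_statuses:
    terminated = cs.last_state.terminated
    if terminated is not None:
        print(cs.name, cs.restart_count, terminated.exit_code,
              terminated.reason, terminated.started_at, terminated.finished_at)
```

This just reads the same status object that kubectl describe prints, so it only confirms the exit code rather than explaining it.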