Remaining nodes in 3 node cluster becoming "NotReady" when I power off 1 node

I'm quite new to this, so any help is appreciated.
I have several 3-node MicroK8s clusters, on VMs on a single host, with MetalLB, Envoy northbound and NGINX southbound.
I am on MicroK8s 1.29.4.

I’m running ~80 pods across the cluster.

When I power off one node, sometimes one or both of the remaining nodes will report NotReady. This doesn't happen every time, and it always recovers in about 40-50 minutes.
Nodes are stable in normal operation and only do this when a single node is powered off.
I see this behaviour on all my clusters.

Example: I have host1, host2 and host3 running. I power off host1, and host3 also reports NotReady.
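For reference, this is roughly how I watch it happen (just a sketch; `node_snapshot` is my own helper name, and I run it in a loop while powering off host1):

```shell
#!/usr/bin/env bash
# Sketch: take a timestamped snapshot of node readiness so the NotReady
# transition can be lined up with the moment the VM is powered off.
node_snapshot() {
  echo "--- $(date -u '+%Y-%m-%d %H:%M:%S') UTC ---"
  kubectl get nodes --no-headers
}

# Usage while reproducing: while true; do node_snapshot; sleep 10; done
```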

labuser@hlxhost2:~$ kubectl get nodes
NAME       STATUS     ROLES    AGE    VERSION
hlxhost1   NotReady   <none>   6d8h   v1.29.4
hlxhost2   Ready      <none>   6d8h   v1.29.4
hlxhost3   NotReady   <none>   6d8h   v1.29.4
labuser@hlxhost2:~$ kubectl describe node hlxhost3
Name:               hlxhost3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hlxhost3
                    kubernetes.io/os=linux
                    microk8s.io/cluster=true
                    node.kubernetes.io/microk8s-controlplane=microk8s-controlplane
Annotations:        csi.volume.kubernetes.io/nodeid: {"rook-ceph.cephfs.csi.ceph.com":"hlxhost3","rook-ceph.rbd.csi.ceph.com":"hlxhost3"}
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.173.128.166/22
                    projectcalico.org/IPv4VXLANTunnelAddr: 10.1.241.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 15 Aug 2024 11:32:49 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hlxhost3
  AcquireTime:     <unset>
  RenewTime:       Wed, 21 Aug 2024 19:39:28 +0000
Conditions:
Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
----                 ------    -----------------                 ------------------                ------              -------
NetworkUnavailable False Wed, 21 Aug 2024 19:01:47 +0000 Wed, 21 Aug 2024 19:01:47 +0000 CalicoIsUp Calico is running on this node
MemoryPressure Unknown Wed, 21 Aug 2024 19:38:11 +0000 Wed, 21 Aug 2024 19:37:17 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Wed, 21 Aug 2024 19:38:11 +0000 Wed, 21 Aug 2024 19:37:17 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Wed, 21 Aug 2024 19:38:11 +0000 Wed, 21 Aug 2024 19:37:17 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Wed, 21 Aug 2024 19:38:11 +0000 Wed, 21 Aug 2024 19:37:17 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 10.173.128.166
Hostname: hlxhost3
Capacity:
cpu: 16
ephemeral-storage: 512868736Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 49326588Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 511820160Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 49224188Ki
pods: 110
System Info:
Machine ID: 5abdea857798464ebc7fa5511b836c8e
System UUID: 91051c42-c0fc-d95e-3629-64d194737e4e
Boot ID: da98730d-6e38-4cb6-bbba-227a59459bd6
Kernel Version: 5.15.0-107-generic
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.28
Kubelet Version: v1.29.4
Kube-Proxy Version: v1.29.4
Non-terminated Pods: (39 in total)
Namespace     Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------     ----                                              ------------  ----------  ---------------  -------------  ---
csdm arango-arango-op-operator-7dd5648ff9-xz2sd 250m (1%) 1 (6%) 256Mi (0%) 256Mi (0%) 37m
csdm csdm-arango-agnt-o9qzob23-77dd28 100m (0%) 100m (0%) 10Mi (0%) 50Mi (0%) 36m
csdm csdm-arango-crdn-wt0klwf8-77dd28 100m (0%) 100m (0%) 10Mi (0%) 50Mi (0%) 33m
csdm csdm-arango-prmr-pvj3e3by-77dd28 100m (0%) 100m (0%) 10Mi (0%) 50Mi (0%) 36m
csdm csdm-device-cache-67859cc749-gq7w5 0 (0%) 0 (0%) 272Mi (0%) 2304Mi (4%) 37m
csdm csdm-elasticsearch-master-2 1 (6%) 0 (0%) 4Gi (8%) 0 (0%) 52m
csdm csdm-envoy-northbound-96f456569-h8sks 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
csdm csdm-fluentbit-hpj29 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d7h
csdm csdm-house-keeper-ff57f94dd-pjbpm 0 (0%) 0 (0%) 512Mi (1%) 2560Mi (5%) 37m
csdm csdm-kafka-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm csdm-keycloak-2 10m (0%) 20m (0%) 32Mi (0%) 64Mi (0%) 52m
csdm csdm-licensing-55fbfd779c-2rhqp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
csdm csdm-logstash-0 1 (6%) 0 (0%) 2Gi (4%) 3Gi (6%) 52m
csdm csdm-prometheus-alertmanager-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm csdm-prometheus-prometheus-node-exporter-v9xmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d7h
csdm csdm-prometheus-server-2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm csdm-provm-7f9c4ff996-d6wc5 0 (0%) 0 (0%) 512Mi (1%) 2560Mi (5%) 37m
csdm csdm-rabbitmq-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm csdm-redis-master-node-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm csdm-snmp-collector-5b54f476c7-9rwrh 0 (0%) 0 (0%) 64Mi (0%) 1Gi (2%) 37m
csdm csdm-snmpget-c-0 3 (18%) 0 (0%) 1280Mi (2%) 0 (0%) 31m
csdm csdm-ssd-5c955c5877-62mvt 0 (0%) 0 (0%) 272Mi (0%) 1280Mi (2%) 37m
csdm csdm-ssh-9f9bc6d7-ld7m8 0 (0%) 0 (0%) 144Mi (0%) 1280Mi (2%) 37m
csdm csdm-syslogng-c-v1-7mgfw 1200m (7%) 1200m (7%) 1Gi (2%) 1Gi (2%) 6d7h
csdm csdm-tcs-8f5877b88-g2rx9 200m (1%) 200m (1%) 256Mi (0%) 256Mi (0%) 158m
csdm csdm-templates-6b6fcb56bd-rcb6k 0 (0%) 0 (0%) 272Mi (0%) 2304Mi (4%) 37m
csdm csdm-toolbox-74cc4cb4f9-zj2b8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
csdm csdm-zookeeper-0 250m (1%) 0 (0%) 256Mi (0%) 0 (0%) 52m
csdm druid-csdm-druid-historicals-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
csdm postgres-postgresql-ha-postgresql-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m
kube-system calico-node-8ntxx 250m (1%) 0 (0%) 0 (0%) 0 (0%) 6d8h
metallb-system speaker-t5nzk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d8h
rook-ceph csi-cephfsplugin-k8597 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d8h
rook-ceph csi-rbdplugin-ktjt9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d8h
rook-ceph rook-ceph-mds-csdm-rwx-fs-b-74ffd6f978-797nj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
rook-ceph rook-ceph-mgr-a-76cd586875-2wnbc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
rook-ceph rook-ceph-mon-h-d4b58b68-4dqg8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62m
rook-ceph rook-ceph-osd-2-556d94d6df-5jvtk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 158m
rook-ceph rook-ceph-tools-6465749568-6hpkl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests      Limits
--------           --------      ------
cpu 7460m (46%) 2720m (17%)
memory 11326Mi (23%) 18134Mi (37%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type    Reason          Age     From             Message
----    ------          ---     ----             -------
Normal RegisteredNode 2m53s node-controller Node hlxhost3 event: Registered Node hlxhost3 in Controller
Normal NodeNotReady 2m13s node-controller Node hlxhost3 status is now: NodeNotReady

In fact, if I install vanilla MicroK8s out of the box, without any of my stuff added or changed, and power down a node (i.e. a non-graceful shutdown), I see the same behaviour: the other nodes can become NotReady for around 21 minutes.
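Next time it happens I plan to grab logs from a surviving node while it is NotReady. A sketch of what I'll collect (the service name assumes a recent MicroK8s snap where the control plane runs under the kubelite daemon, and `collect_notready_evidence` is just my own helper name):

```shell
#!/usr/bin/env bash
# Sketch: gather evidence from a surviving node while a peer is NotReady.
# Assumes a recent MicroK8s snap where the control plane runs as the
# snap.microk8s.daemon-kubelite service.
collect_notready_evidence() {
  kubectl get nodes -o wide
  kubectl get events --field-selector involvedObject.kind=Node --sort-by=.lastTimestamp
  sudo journalctl -u snap.microk8s.daemon-kubelite --since "30 min ago" --no-pager
  microk8s inspect   # bundles service logs into a tarball for sharing
}
```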