Deployment issue with no apparent cause

Cluster information:

Kubernetes version: 1.34.2
Cloud being used: bare-metal
Installation method: Talos
Host OS: Talos
CNI and version: Cilium 1.18.0

Hello,

I am running into a problem I cannot explain.

I am trying to deploy a new service on my Kubernetes cluster, but the pod stays Pending with this scheduler message:
0/9 nodes are available: 3 node(s) had untolerated taint(s), 6 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/9 nodes are available: 3 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
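In case it is useful, here is how the taints and the anti-affinity rules can be inspected (this is a generic sketch; `<my-service>` is a placeholder for the actual Deployment name):

```shell
# List the taints on every node
# (the 3 untolerated nodes should be the NoSchedule control planes)
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Dump the pod anti-affinity spec of the Deployment being scheduled
# (<my-service> is a placeholder)
kubectl get deployment <my-service> -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'

# Recent FailedScheduling events, newest last
kubectl get events --field-selector reason=FailedScheduling --sort-by=.lastTimestamp
```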

I checked the node resources but I don't see anything that would explain it.
Here is some information that may be useful:

kubectl top nodes
NAME                             CPU(cores)   CPU(%)   MEMORY(bytes)   MEMORY(%)
hetzner-fsn-pve1-k8s-cp-01       370m         18%      3804Mi          52%
hetzner-fsn-pve1-k8s-worker-01   657m         11%      7020Mi          45%
hetzner-fsn-pve1-k8s-worker-02   586m         9%       8722Mi          56%
hetzner-fsn-pve2-k8s-cp-01       281m         14%      2868Mi          87%
hetzner-fsn-pve2-k8s-worker-01   729m         14%      7386Mi          47%
hetzner-fsn-pve2-k8s-worker-02   767m         15%      8895Mi          57%
ovh-gra-pve1-k8s-cp-01           222m         7%       2004Mi          61%
ovh-gra-pve1-k8s-worker-01       473m         9%       5167Mi          33%
ovh-gra-pve1-k8s-worker-02       609m         12%      7429Mi          47%
kubectl describe nodes
Name:               hetzner-fsn-pve1-k8s-cp-01
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve1-k8s-cp-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.kubernetes.io/zone=hetzner-fsn-pve1
Annotations:        extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels:
                      ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools","node-role.kubernetes....
                    talos.dev/owned-taints: ["node-role.kubernetes.io/control-plane"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 11:13:58 +0100
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve1-k8s-cp-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:37 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 11:15:33 +0100   Wed, 25 Feb 2026 11:15:33 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:46:06 +0100   Wed, 25 Feb 2026 11:13:58 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:46:06 +0100   Wed, 25 Feb 2026 11:13:58 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:46:06 +0100   Wed, 25 Feb 2026 11:13:58 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:46:06 +0100   Wed, 25 Feb 2026 11:15:40 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.18
  Hostname:    hetzner-fsn-pve1-k8s-cp-01
Capacity:
  cpu:                2
  ephemeral-storage:  31500Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8112588Ki
  pods:               110
Allocatable:
  cpu:                1950m
  ephemeral-storage:  29458694095
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7485900Ki
  pods:               110
System Info:
  Machine ID:                 4d4027cb62f7d2851b54a783cfd703e8
  System UUID:                18270efc-ab4a-4efc-9a61-086fb0180038
  Boot ID:                    30b1419b-866c-40f5-833b-c797b9845986
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (9 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                360m (18%)   0 (0%)
  memory             842Mi (11%)  0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)


---

Name:               hetzner-fsn-pve1-k8s-worker-01
Roles:              worker
Labels:             bandwidth=1G
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve1-k8s-worker-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=hetzner-fsn-pve1
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"hetzner-fsn-pve1-k8s-worker-01"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 07:37:41 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve1-k8s-worker-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:42 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 07:38:30 +0100   Wed, 25 Feb 2026 07:38:30 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:43:14 +0100   Wed, 25 Feb 2026 07:37:41 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:43:14 +0100   Wed, 25 Feb 2026 07:37:41 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:43:14 +0100   Wed, 25 Feb 2026 07:37:41 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:43:14 +0100   Wed, 25 Feb 2026 07:37:43 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.20
  Hostname:    hetzner-fsn-pve1-k8s-worker-01
Capacity:
  cpu:                6
  ephemeral-storage:  236300Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356408Ki
  pods:               110
Allocatable:
  cpu:                5950m
  ephemeral-storage:  222732222095
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15860792Ki
  pods:               110
System Info:
  Machine ID:                 01a345ebd451832f1c0239045faedcc5
  System UUID:                086f66b6-0d1b-4f2e-b5c3-8ea781801363
  Boot ID:                    4470134d-0e2f-4002-991e-f0dfed1478b7
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.8.0/24
PodCIDRs:                     10.244.8.0/24
Non-terminated Pods:          (40 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                2424m (40%)       2990m (50%)
  memory             7850722176 (48%)  10694577664 (65%)
  ephemeral-storage  50Mi (0%)         2Gi (0%)
  hugepages-1Gi      0 (0%)            0 (0%)
  hugepages-2Mi      0 (0%)            0 (0%)

---

Name:               hetzner-fsn-pve1-k8s-worker-02
Roles:              worker
Labels:             bandwidth=1G
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve1-k8s-worker-02
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=hetzner-fsn-pve1
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"hetzner-fsn-pve1-k8s-worker-02"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 07:37:36 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve1-k8s-worker-02
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:38 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 07:38:26 +0100   Wed, 25 Feb 2026 07:38:26 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:43:51 +0100   Wed, 25 Feb 2026 07:37:36 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:43:51 +0100   Wed, 25 Feb 2026 07:37:36 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:43:51 +0100   Wed, 25 Feb 2026 07:37:36 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:43:51 +0100   Wed, 25 Feb 2026 07:37:37 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.21
  Hostname:    hetzner-fsn-pve1-k8s-worker-02
Capacity:
  cpu:                6
  ephemeral-storage:  236300Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356408Ki
  pods:               110
Allocatable:
  cpu:                5950m
  ephemeral-storage:  222732222095
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15860792Ki
  pods:               110
System Info:
  Machine ID:                 282efd5efb25736fea975e68a0de89b1
  System UUID:                8eec08a9-0819-4450-a2c8-cca8e93fee10
  Boot ID:                    580dacca-56d2-4f40-b92c-79f2f239c04a
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (41 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                3624m (60%)   4825m (81%)
  memory             9848Mi (63%)  13014Mi (84%)
  ephemeral-storage  100Mi (0%)    4Gi (1%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)

---


Name:               hetzner-fsn-pve2-k8s-cp-01
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve2-k8s-cp-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.kubernetes.io/zone=hetzner-fsn-pve2
Annotations:        extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels:
                      ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools","node-role.kubernetes....
                    talos.dev/owned-taints: ["node-role.kubernetes.io/control-plane"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 07:51:56 +0100
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve2-k8s-cp-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:39 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 07:52:32 +0100   Wed, 25 Feb 2026 07:52:32 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:45:35 +0100   Thu, 26 Feb 2026 07:20:48 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:45:35 +0100   Thu, 26 Feb 2026 07:20:48 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:45:35 +0100   Thu, 26 Feb 2026 07:20:48 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:45:35 +0100   Thu, 26 Feb 2026 07:20:48 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.34
  Hostname:    hetzner-fsn-pve2-k8s-cp-01
Capacity:
  cpu:                2
  ephemeral-storage:  19212Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3989960Ki
  pods:               110
Allocatable:
  cpu:                1950m
  ephemeral-storage:  17862282415
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3363272Ki
  pods:               110
System Info:
  Machine ID:                 d4d28d9656eeaa751fadbdc6301d3ad0
  System UUID:                2af531db-d12c-451c-b0b8-68c85765c152
  Boot ID:                    24cf52ce-e941-4682-a4c9-9d0259a56b02
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.14.0/24
PodCIDRs:                     10.244.14.0/24
Non-terminated Pods:          (8 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                360m (18%)   0 (0%)
  memory             842Mi (25%)  0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)

---

Name:               hetzner-fsn-pve2-k8s-worker-01
Roles:              worker
Labels:             bandwidth=1G
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve2-k8s-worker-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=hetzner-fsn-pve2
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"hetzner-fsn-pve2-k8s-worker-01"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 07:46:27 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve2-k8s-worker-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:39 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 07:47:52 +0100   Wed, 25 Feb 2026 07:47:52 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:43:07 +0100   Wed, 25 Feb 2026 07:46:27 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:43:07 +0100   Wed, 25 Feb 2026 07:46:27 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:43:07 +0100   Wed, 25 Feb 2026 07:46:27 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:43:07 +0100   Wed, 25 Feb 2026 07:46:29 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.35
  Hostname:    hetzner-fsn-pve2-k8s-worker-01
Capacity:
  cpu:                5
  ephemeral-storage:  399204956Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356772Ki
  pods:               110
Allocatable:
  cpu:                4950m
  ephemeral-storage:  367638851385
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15861156Ki
  pods:               110
System Info:
  Machine ID:                 49cfd62dcbcbdf5679c3682d1253edf3
  System UUID:                91ea61b8-8bc4-4e7e-b784-03cd86c327a6
  Boot ID:                    3d88c6e3-d1da-4dc1-a736-c3c857ac8679
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.11.0/24
PodCIDRs:                     10.244.11.0/24
Non-terminated Pods:          (36 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                2469m (49%)       3550m (71%)
  memory             5922186496 (36%)  8278166016 (50%)
  ephemeral-storage  100Mi (0%)        4Gi (1%)
  hugepages-1Gi      0 (0%)            0 (0%)
  hugepages-2Mi      0 (0%)            0 (0%)

---

Name:               hetzner-fsn-pve2-k8s-worker-02
Roles:              worker
Labels:             bandwidth=1G
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hetzner-fsn-pve2-k8s-worker-02
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=hetzner-fsn-pve2
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"hetzner-fsn-pve2-k8s-worker-02"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Feb 2026 07:46:28 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hetzner-fsn-pve2-k8s-worker-02
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:37 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 25 Feb 2026 07:47:23 +0100   Wed, 25 Feb 2026 07:47:23 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:44:20 +0100   Wed, 25 Feb 2026 07:46:28 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:44:20 +0100   Wed, 25 Feb 2026 07:46:28 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:44:20 +0100   Wed, 25 Feb 2026 07:46:28 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:44:20 +0100   Wed, 25 Feb 2026 07:46:28 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.36
  Hostname:    hetzner-fsn-pve2-k8s-worker-02
Capacity:
  cpu:                5
  ephemeral-storage:  389900Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356772Ki
  pods:               110
Allocatable:
  cpu:                4950m
  ephemeral-storage:  367687368095
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15861156Ki
  pods:               110
System Info:
  Machine ID:                 82916e8725923e8b1c43e28ed76ab7a7
  System UUID:                3f89d8ed-538b-4e70-b8c2-af8f47cf5eb8
  Boot ID:                    40800264-7f8d-4e06-bf3f-dd823ee07f45
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.13.0/24
PodCIDRs:                     10.244.13.0/24
Non-terminated Pods:          (45 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                2394m (48%)       2932m (59%)
  memory             6911471040 (42%)  11178833856 (68%)
  ephemeral-storage  0 (0%)            0 (0%)
  hugepages-1Gi      0 (0%)            0 (0%)
  hugepages-2Mi      0 (0%)            0 (0%)

---

Name:               ovh-gra-pve1-k8s-cp-01
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ovh-gra-pve1-k8s-cp-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"66:3a:38:a8:45:57"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.150.2
                    freelens.app/resource-version: v1
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels:
                      ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools","node-role.kubernetes....
                    talos.dev/owned-taints: ["node-role.kubernetes.io/control-plane"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 19 Jan 2026 10:06:47 +0100
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ovh-gra-pve1-k8s-cp-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:43 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 16 Feb 2026 08:31:04 +0100   Mon, 16 Feb 2026 08:31:04 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:44:45 +0100   Thu, 26 Feb 2026 07:05:22 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:44:45 +0100   Thu, 26 Feb 2026 07:05:22 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:44:45 +0100   Thu, 26 Feb 2026 07:05:22 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:44:45 +0100   Thu, 26 Feb 2026 07:11:48 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.2
  Hostname:    ovh-gra-pve1-k8s-cp-01
Capacity:
  cpu:                3
  ephemeral-storage:  31500Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3989612Ki
  pods:               110
Allocatable:
  cpu:                2950m
  ephemeral-storage:  29458694095
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3362924Ki
  pods:               110
System Info:
  Machine ID:                 849eaa8d9c8cf841bd67bfb265cc5595
  System UUID:                bf7af26c-b7d4-4fb9-a3af-f9ba7dfda582
  Boot ID:                    efc3a684-76cc-4119-b37b-c7b2b477d4e4
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                460m (15%)   200m (6%)
  memory             912Mi (27%)  170Mi (5%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>

---

Name:               ovh-gra-pve1-k8s-worker-01
Roles:              worker
Labels:             bandwidth=250M
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    dedicated=postgresql
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ovh-gra-pve1-k8s-worker-01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=ovh-gra-1
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"ovh-gra-pve1-k8s-worker-01"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"22:e3:18:55:20:48"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.150.4
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 01 Feb 2025 12:59:07 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ovh-gra-pve1-k8s-worker-01
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:44 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 16 Feb 2026 08:31:05 +0100   Mon, 16 Feb 2026 08:31:05 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:45:49 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:45:49 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:45:49 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:45:49 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.4
  Hostname:    ovh-gra-pve1-k8s-worker-01
Capacity:
  cpu:                5
  ephemeral-storage:  103180Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356772Ki
  pods:               110
Allocatable:
  cpu:                4950m
  ephemeral-storage:  97104428895
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15861156Ki
  pods:               110
System Info:
  Machine ID:                 aad2b34a1b515b466dea377944bacee7
  System UUID:                4e88ccd7-a5ea-4710-9b8e-ff7d4ab3270d
  Boot ID:                    c97a9350-f9f3-4819-a6ad-6198a295dfcb
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.18.0/24
PodCIDRs:                     10.244.18.0/24
Non-terminated Pods:          (30 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1194m (24%)   1200m (24%)
  memory             2669Mi (17%)  2883Mi (18%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)

---

Name:               ovh-gra-pve1-k8s-worker-02
Roles:              worker
Labels:             app=garage
                    bandwidth=250M
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    dedicated=postgresql
                    extensions.talos.dev/iscsi-tools=v0.2.0
                    extensions.talos.dev/qemu-guest-agent=10.0.2
                    extensions.talos.dev/util-linux-tools=2.41.1
                    k8slens-edit-resource-version=v1
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ovh-gra-pve1-k8s-worker-02
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=
                    topology.kubernetes.io/zone=ovh-gra-1
Annotations:        csi.volume.kubernetes.io/nodeid: {"driver.longhorn.io":"ovh-gra-pve1-k8s-worker-02"}
                    extensions.talos.dev/schematic: 88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b
                    flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"b6:97:48:9c:fb:ed"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.150.5
                    node.alpha.kubernetes.io/ttl: 0
                    talos.dev/owned-annotations: ["extensions.talos.dev/schematic"]
                    talos.dev/owned-labels: ["extensions.talos.dev/iscsi-tools","extensions.talos.dev/qemu-guest-agent","extensions.talos.dev/util-linux-tools"]
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 01 Feb 2025 12:59:01 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ovh-gra-pve1-k8s-worker-02
  AcquireTime:     <unset>
  RenewTime:       Mon, 09 Mar 2026 15:47:46 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 16 Feb 2026 08:31:06 +0100   Mon, 16 Feb 2026 08:31:06 +0100   CiliumIsUp                   Cilium is running on this node
  MemoryPressure       False   Mon, 09 Mar 2026 15:44:00 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 09 Mar 2026 15:44:00 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 09 Mar 2026 15:44:00 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 09 Mar 2026 15:44:00 +0100   Thu, 26 Feb 2026 07:11:59 +0100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.150.5
  Hostname:    ovh-gra-pve1-k8s-worker-02
Capacity:
  cpu:                5
  ephemeral-storage:  103180Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16356768Ki
  pods:               110
Allocatable:
  cpu:                4950m
  ephemeral-storage:  97104428895
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15861152Ki
  pods:               110
System Info:
  Machine ID:                 97d47a06d17fb4d52468cd2a92fa5088
  System UUID:                1b8f53a0-d76d-440c-9d68-14ca389a41a1
  Boot ID:                    23b84fda-f580-4b8f-ba44-548d029f8fd6
  Kernel Version:             6.12.57-talos
  OS Image:                   Talos (v1.11.5)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://2.1.5
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.17.0/24
PodCIDRs:                     10.244.17.0/24
Non-terminated Pods:          (29 in total)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1740m (35%)   2160m (43%)
  memory             4327Mi (27%)  5869Mi (37%)
  ephemeral-storage  50Mi (0%)     2Gi (2%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)

I don’t think I have any memory or CPU issues, and Longhorn is working fine too.
I’m having this problem with deployments with and without resource requests/limits, with and without affinity.
Only my control planes are unschedulable (which is intentional, as I don’t want any workload on them).

Do you have any idea where this might be coming from? It appeared overnight.
The other deployments already in place seem to be working, but I’m afraid that if one of them needs to restart, it won’t be able to do so properly.
If you need any further information, please don’t hesitate to get back to me.

Thank you in advance for your help.

Well, I finally managed to fix my problem; the error was on my end.

In the Moco configuration, I had configured the anti-affinity incorrectly:

  podTemplate:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/instance:
              topologyKey: topology.kubernetes.io/zone

There was nothing in “instance”; I had forgotten to fill in the label…
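For reference, here is what the corrected block looks like once the label value is filled in (the instance name `moco-mysql` below is just a placeholder; use whatever your MySQLCluster instance is actually called):

```yaml
podTemplate:
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                # placeholder value: must match the label actually set on the pods
                app.kubernetes.io/instance: moco-mysql
            topologyKey: topology.kubernetes.io/zone
```

With a non-empty value, the selector actually matches the cluster's own pods, so the rule means "no two of these pods in the same zone", which is what was intended.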

That said, I still don’t fully understand why this broke scheduling entirely, but in any case, it’s sorted now.

This broke the kube-scheduler’s filtering phase: it could not find any feasible nodes for the pod, so it never even got to scoring them.

3 node(s) had untolerated taint(s): your pod has no toleration for the taint on those three nodes (the control planes you intentionally keep unschedulable), so this part is expected.
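For completeness, if you ever did want a pod to land on the control planes, you would add a toleration for the standard control-plane taint (a sketch, assuming the default taint key used by kubeadm/Talos):

```yaml
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```

Since you deliberately keep workloads off the control planes, no change is needed here.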

6 node(s) didn't match pod anti-affinity rules: with that anti-affinity configuration, the scheduler had to find a zone containing no pod matching the selector, but the selector was `app.kubernetes.io/instance` with an empty value, a rule none of the six workers could satisfy.

You see “No preemption victims found for incoming pod” because, when no node satisfies all of a pod’s requirements (in your case, due to the anti-affinity issue), preemption logic is triggered for the pending pod: the scheduler looks for a node where evicting one or more lower-priority pods would allow the new, higher-priority pod to be scheduled. Since your anti-affinity rule could not be satisfied by evicting anything, no victims were found.
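One way to avoid this failure mode entirely is to express the spreading as a preference rather than a hard requirement: with `preferredDuringSchedulingIgnoredDuringExecution`, the scheduler spreads pods across zones when it can but still schedules them when the rule cannot be satisfied (a sketch; `moco-mysql` is again a placeholder instance name):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: moco-mysql  # placeholder
          topologyKey: topology.kubernetes.io/zone
```

The trade-off is that a preference does not guarantee zone separation, which may or may not be acceptable for a database cluster.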