0/1 nodes are available: 1 node(s) had untolerated taint

What happened?

Hi, I have a problem: when I launch a pod, its state stays Pending.
I ran "kubectl describe pods apache-7f6fddb7df-q229r" and the output is "Warning FailedScheduling 61s (x2 over 6m23s) default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling…"
The pod has no logs, and the kubelet, docker, and containerd services are all running without errors.
journalctl gives me nothing either.
My OS is Debian 11.
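
For reference, the taint the scheduler is complaining about can be listed directly. A quick check, assuming kubectl is pointed at this cluster (the node name placeholder is hypothetical):

# List the taint keys on every node; on a fresh kubeadm cluster the
# control-plane node carries node-role.kubernetes.io/control-plane:NoSchedule.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
# Or inspect a single node:
kubectl describe node <node-name> | grep -i taints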

What did you expect to happen?

I expected the pod to be in the Running state.
Please help me, and thanks in advance.

How can we reproduce it (as minimally and precisely as possible)?

Install Docker and Kubernetes following the official documentation.
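
For context, the bootstrap those docs describe looks roughly like this; a sketch of the standard kubeadm flow, not the exact commands used here:

# On the control-plane node, after installing a container runtime and kubeadm:
sudo kubeadm init
# Let the regular user talk to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config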

Kubernetes version

Client Version: v1.26.1
Kustomize Version: v4.5.7
Server Version: v1.26.1

OS version

On Linux:

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Linux docker 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux


You missed a step: on a single-node cluster the control-plane node keeps its node-role.kubernetes.io/control-plane taint by default, so the scheduler has nowhere to place regular pods. Removing that taint (or tolerating it) is part of the kubeadm setup docs.
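
A minimal sketch of that step, assuming a single-node cluster where workloads are meant to run on the control-plane node (this is the command from the kubeadm setup docs):

# Remove the control-plane taint from all nodes; the trailing "-" means "remove".
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Alternatively, keep the taint and give the pod a matching toleration in its spec:

tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule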


Thank you!

Hi,

I am getting these events:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  55s (x2 over 82s)  default-scheduler  0/2 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

resource.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-pod
  labels:
    app: resource
spec:
  replicas: 2
  selector:
    matchLabels:
      app: resource
  template:
    metadata:
      labels:
        app: resource
    spec:
      containers:
      - name: resource-demo
        image: polinux/stress
        command: ["stress"]
        args: ["--cpu", "2", "--vm", "1", "--vm-bytes", "1G", "--vm-hang", "1"]
        resources:
          requests:
            memory: "2Gi"
            cpu: "1"
          limits:
            memory: "3Gi"
            cpu: "1"
        livenessProbe:
          exec:
            # exec runs the binary directly, so each argument is its own list item
            command:
            - echo
            - hello
          failureThreshold: 2
          periodSeconds: 2

I understood the issue: I requested 2Gi of memory per pod and asked for 2 replicas (2 × 2Gi = 4Gi), but my worker node only has 3747228Ki (about 3.6Gi) of allocatable memory, so once the first replica claims 2Gi there is not enough left for the second.

kubectl describe node worker-node

Hostname:    k8s-worker
Capacity:
  cpu:                3
  ephemeral-storage:  47109660Ki
  hugepages-2Mi:      0
  memory:             3849628Ki
  pods:               110
Allocatable:
  cpu:                3
  ephemeral-storage:  43416262585
  hugepages-2Mi:      0
  memory:             3747228Ki
  pods:               110
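
One hedged way to make this Deployment schedulable on that node, assuming the stress workload can be shrunk: lower the per-replica request so both replicas fit within the ~3.6Gi allocatable, keeping --vm-bytes below the new limit:

        resources:
          requests:
            memory: "1Gi"     # 2 replicas × 1Gi = 2Gi < 3747228Ki allocatable
            cpu: "500m"
          limits:
            memory: "1536Mi"  # still above the 1G that --vm-bytes allocates
            cpu: "1"

Alternatively, drop replicas to 1, or add memory to the worker node.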