READY 0/1 state

Hi,
I am following a lab on Kubernetes and MongoDB, but all the Pods are stuck in the 0/1 READY state.
What does that mean?
How do I get them to READY 1/1?

[root@master-node ~]# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
mongo-express-78fcf796b8-wzgvx       0/1     Pending   0          3m41s
mongodb-deployment-8f6675bc5-qxj4g   0/1     Pending   0          160m
nginx-deployment-64bd7b69c-wp79g     0/1     Pending   0          4h44m

Check pod events and logs.

Events are available from describe.

kubectl describe pod mongo-express-78fcf796b8-wzgvx

Logs provide the stdout from the containers in the pod. -c container_name is only necessary for pods with more than 1 container.

kubectl logs mongo-express-78fcf796b8-wzgvx -c container_name
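
If you are not sure what the container names in a pod are, something like this should list them (a jsonpath query against the pod spec):

kubectl get pod mongo-express-78fcf796b8-wzgvx -o jsonpath='{.spec.containers[*].name}'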

Thank you but what should I see in the logs?

Error messages that should help you diagnose why the pods are stuck in the Pending state. If the logs don't give you anything at that level, it's time to start debugging at other levels.
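
If you'd rather pull just the scheduling events instead of the full describe output, something along these lines should work (involvedObject.name is a standard event field selector):

kubectl get events --field-selector involvedObject.name=mongo-express-78fcf796b8-wzgvx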

Here is the output, but I don't understand which error I should be looking for. Thanks anyway for all the help.

[root@master-node ~]# kubectl describe pod mongo-express-78fcf796b8-wzgvx
Name:           mongo-express-78fcf796b8-wzgvx
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=mongo-express
                pod-template-hash=78fcf796b8
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mongo-express-78fcf796b8
Containers:
  mongo-express:
    Image:      mongo-express
    Port:       8081/TCP
    Host Port:  0/TCP
    Environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME:  <set to the key 'mongo-root-username' in secret 'mongodb-secret'>  Optional: false
      ME_CONFIG_MONGODB_ADMINPASSWORD:  <set to the key 'mongo-root-password' in secret 'mongodb-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mcpd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-2mcpd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  2m40s (x112 over 113m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
[root@master-node ~]# kubectl logs mongo-express-78fcf796b8-wzgvx -c container_name
error: container container_name is not valid for pod mongo-express-78fcf796b8-wzgvx

Guessing you only have a single master node. Something to know about kubeadm is that it prevents scheduling workloads on the control plane by default. The kubeadm documentation provides a command to untaint all your nodes (see below).
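
For reference, the untaint command from the kubeadm docs looks roughly like this; the --all form clears the taint from every node, and the trailing dash means "remove" (on v1.21 the taint key is node-role.kubernetes.io/master):

kubectl taint nodes --all node-role.kubernetes.io/master-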

If you run kubectl describe node NODE_NAME you will probably see a taint with node-role.kubernetes.io/master (alongside the matching master label) as the indicator preventing scheduling.
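
If you only want the taints rather than the whole describe output, either of these should do it:

kubectl describe node master-node | grep -i taints
kubectl get node master-node -o jsonpath='{.spec.taints}'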

Yes, I am following a lab for a 3-node cluster and I am working on the first node. I understand what you said, so what should I do to get READY 1/1? Thanks in advance.


[root@master-node ~]# kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   26h   v1.21.3
[root@master-node ~]# kubectl describe node master-node
Name:               master-node
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master-node
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"da:25:dc:0c:af:35"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.30.100
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 26 Jul 2021 20:34:36 -0400
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master-node
  AcquireTime:     <unset>
  RenewTime:       Tue, 27 Jul 2021 22:43:49 -0400
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 27 Jul 2021 13:59:48 -0400   Tue, 27 Jul 2021 13:59:48 -0400   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 27 Jul 2021 22:41:01 -0400   Mon, 26 Jul 2021 20:34:36 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 27 Jul 2021 22:41:01 -0400   Mon, 26 Jul 2021 20:34:36 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 27 Jul 2021 22:41:01 -0400   Mon, 26 Jul 2021 20:34:36 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 27 Jul 2021 22:41:01 -0400   Mon, 26 Jul 2021 20:44:55 -0400   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.30.100
  Hostname:    master-node
Capacity:
  cpu:                2
  ephemeral-storage:  6334Mi
  hugepages-2Mi:      0
  memory:             3728016Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  5977512336
  hugepages-2Mi:      0
  memory:             3625616Ki
  pods:               110
System Info:
  Machine ID:                 7eb80ce971604dc588326782f26f95f0
  System UUID:                c122c5ae-fb8e-c043-b7c6-8c3eed40c427
  Boot ID:                    2f44c701-4441-4916-83f6-b52ab569f262
  Kernel Version:             4.18.0-315.el8.x86_64
  OS Image:                   CentOS Stream 8
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.7
  Kubelet Version:            v1.21.3
  Kube-Proxy Version:         v1.21.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                   ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-558bd4d5db-c7gb9               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system                 coredns-558bd4d5db-gb9ct               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system                 etcd-master-node                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         26h
  kube-system                 kube-apiserver-master-node             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26h
  kube-system                 kube-controller-manager-master-node    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26h
  kube-system                 kube-flannel-ds-wjgb7                  100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      25h
  kube-system                 kube-proxy-ccl8k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
  kube-system                 kube-scheduler-master-node             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             290Mi (8%)  390Mi (11%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
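
That output confirms it: the Taints line shows node-role.kubernetes.io/master:NoSchedule, which is exactly what the scheduler complained about in the pod's FailedScheduling event. Removing that taint, for example with the command from the kubeadm docs above (kubectl taint nodes --all node-role.kubernetes.io/master-), should let the scheduler place your pods on this node, and they should move from Pending to Running and READY 1/1 once the containers start.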