Kubernetes : 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available

I’m experiencing this issue: a Pod is stuck in Pending because the nodes didn't match the pod anti-affinity rules:

    NAME                                                READY   STATUS    RESTARTS      AGE   IP           NODE                        NOMINATED NODE   READINESS GATES
    demo-dc1-default-sts-0                              0/2     Pending   0             38m   <none>       <none>                      <none>           <none>
    demo-dc1-default-sts-1                              1/2     Running   0             62m   10.244.0.6   k8ssandra-0-control-plane   <none>           <none>
    demo-dc1-default-sts-2                              1/2     Running   0             17h   10.244.1.9   k8ssandra-0-worker          <none>           <none>
    k8ssandra-operator-7998574dd5-567qq                 1/1     Running   1 (72m ago)   18h   10.244.1.7   k8ssandra-0-worker          <none>           <none>
    k8ssandra-operator-cass-operator-7599b94d9d-s7nrr   1/1     Running   2 (72m ago)   18h   10.244.1.2   k8ssandra-0-worker          <none>           <none>

Output of kubectl describe pod:

    root@k8s-eu-1-master:~# kubectl describe pod demo-dc1-default-sts-0 -n k8ssandra-operator
    Name:             demo-dc1-default-sts-0
    Namespace:        k8ssandra-operator
    Priority:         0
    Service Account:  default
    Node:             <none>
    Labels:           app.kubernetes.io/created-by=cass-operator
                      app.kubernetes.io/instance=cassandra-demo
                      app.kubernetes.io/managed-by=cass-operator
                      app.kubernetes.io/name=cassandra
                      app.kubernetes.io/version=4.0.1
                      cassandra.datastax.com/cluster=demo
                      cassandra.datastax.com/datacenter=dc1
                      cassandra.datastax.com/node-state=Ready-to-Start
                      cassandra.datastax.com/rack=default
                      controller-revision-hash=demo-dc1-default-sts-7676c86675
                      statefulset.kubernetes.io/pod-name=demo-dc1-default-sts-0
    Annotations:      k8ssandra.io/inject-secret: [{"name":"demo-superuser","path":"/etc/secrets/demo-superuser","containers":["cassandra"]}]
    Status:           Pending
    IP:               
    IPs:              <none>
    Controlled By:    StatefulSet/demo-dc1-default-sts
    Init Containers:
      server-config-init:
        Image:      datastax/cass-config-builder:1.0-ubi7
        Port:       <none>
        Host Port:  <none>
        Limits:
          cpu:     1
          memory:  384M
        Requests:
          cpu:     1
          memory:  256M
        Environment:
          POD_IP:                      (v1:status.podIP)
          HOST_IP:                     (v1:status.hostIP)
          USE_HOST_IP_FOR_BROADCAST:  false
          RACK_NAME:                  default
          PRODUCT_VERSION:            4.0.1
          PRODUCT_NAME:               cassandra
          CONFIG_FILE_DATA:           {"cassandra-env-sh":{"additional-jvm-opts":["-Dcassandra.allow_alter_rf_during_range_movement=true","-Dcassandra.system_distributed_replication=dc1:3","-Dcassandra.jmx.authorizer=org.apache.cassandra.auth.jmx.AuthorizationProxy","-Djava.security.auth.login.config=$CASSANDRA_HOME/conf/cassandra-jaas.config","-Dcassandra.jmx.remote.login.config=CassandraLogin","-Dcom.sun.management.jmxremote.authenticate=true"]},"cassandra-yaml":{"authenticator":"PasswordAuthenticator","authorizer":"CassandraAuthorizer","num_tokens":16,"role_manager":"CassandraRoleManager"},"cluster-info":{"name":"demo","seeds":"demo-seed-service,demo-dc1-additional-seed-service"},"datacenter-info":{"graph-enabled":0,"name":"dc1","solr-enabled":0,"spark-enabled":0},"jvm-server-options":{"initial_heap_size":512000000,"max_heap_size":512000000},"jvm11-server-options":{"garbage_collector":"G1GC"}}
        Mounts:
          /config from server-config (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfdtc (ro)
    Containers:
      cassandra:
        Image:       k8ssandra/cass-management-api:4.0.1
        Ports:       9042/TCP, 9142/TCP, 7000/TCP, 7001/TCP, 7199/TCP, 8080/TCP, 9103/TCP, 9000/TCP
        Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
        Liveness:    http-get http://:8080/api/v0/probes/liveness delay=15s timeout=10s period=15s #success=1 #failure=3
        Readiness:   http-get http://:8080/api/v0/probes/readiness delay=20s timeout=10s period=10s #success=1 #failure=3
        Environment:
          METRIC_FILTERS:           deny:org.apache.cassandra.metrics.Table deny:org.apache.cassandra.metrics.table allow:org.apache.cassandra.metrics.table.live_ss_table_count allow:org.apache.cassandra.metrics.Table.LiveSSTableCount allow:org.apache.cassandra.metrics.table.live_disk_space_used allow:org.apache.cassandra.metrics.table.LiveDiskSpaceUsed allow:org.apache.cassandra.metrics.Table.Pending allow:org.apache.cassandra.metrics.Table.Memtable allow:org.apache.cassandra.metrics.Table.Compaction allow:org.apache.cassandra.metrics.table.read allow:org.apache.cassandra.metrics.table.write allow:org.apache.cassandra.metrics.table.range allow:org.apache.cassandra.metrics.table.coordinator allow:org.apache.cassandra.metrics.table.dropped_mutations
          POD_NAME:                 demo-dc1-default-sts-0 (v1:metadata.name)
          NODE_NAME:                 (v1:spec.nodeName)
          DS_LICENSE:               accept
          DSE_AUTO_CONF_OFF:        all
          USE_MGMT_API:             true
          MGMT_API_EXPLICIT_START:  true
          DSE_MGMT_EXPLICIT_START:  true
        Mounts:
          /config from server-config (rw)
          /opt/management-api/configs from metrics-agent-config (rw)
          /var/lib/cassandra from server-data (rw)
          /var/log/cassandra from server-logs (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfdtc (ro)
      server-system-logger:
        Image:      k8ssandra/system-logger:v1.18.1
        Port:       <none>
        Host Port:  <none>
        Limits:
          memory:  128M
        Requests:
          cpu:     100m
          memory:  64M
        Environment:
          POD_NAME:         demo-dc1-default-sts-0 (v1:metadata.name)
          NODE_NAME:         (v1:spec.nodeName)
          CLUSTER_NAME:     demo
          DATACENTER_NAME:  dc1
          RACK_NAME:         (v1:metadata.labels['cassandra.datastax.com/rack'])
          NAMESPACE:        k8ssandra-operator (v1:metadata.namespace)
        Mounts:
          /opt/management-api/configs from metrics-agent-config (rw)
          /var/lib/vector from vector-lib (rw)
          /var/log/cassandra from server-logs (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfdtc (ro)
    Conditions:
      Type           Status
      PodScheduled   False 
    Volumes:
      server-data:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  server-data-demo-dc1-default-sts-0
        ReadOnly:   false
      server-config:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      server-logs:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      vector-lib:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      metrics-agent-config:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      demo-dc1-metrics-agent-config
        Optional:  false
      kube-api-access-cfdtc:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  26m                 default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
      Warning  FailedScheduling  106s (x4 over 21m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

These are the two nodes:

    root@k8s-eu-1-master:~# kubectl get nodes -n k8ssandra-operator
    NAME                        STATUS   ROLES           AGE   VERSION
    k8ssandra-0-control-plane   Ready    control-plane   18h   v1.25.3
    k8ssandra-0-worker          Ready    <none>          18h   v1.25.3

node k8ssandra-0-control-plane:

    root@k8s-eu-1-master:~# kubectl describe node k8ssandra-0-control-plane -n k8ssandra-operator
    Name:               k8ssandra-0-control-plane
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8ssandra-0-control-plane
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Fri, 10 Nov 2023 18:44:26 +0100
    Taints:             <none>
    Unschedulable:      false
    Lease:
      HolderIdentity:  k8ssandra-0-control-plane
      AcquireTime:     <unset>
      RenewTime:       Sat, 11 Nov 2023 12:50:19 +0100
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Sat, 11 Nov 2023 12:48:59 +0100   Fri, 10 Nov 2023 18:44:21 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Sat, 11 Nov 2023 12:48:59 +0100   Fri, 10 Nov 2023 18:44:21 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Sat, 11 Nov 2023 12:48:59 +0100   Fri, 10 Nov 2023 18:44:21 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            True    Sat, 11 Nov 2023 12:48:59 +0100   Fri, 10 Nov 2023 18:44:50 +0100   KubeletReady                 kubelet is posting ready status
    Addresses:
      InternalIP:  172.19.0.2
      Hostname:    k8ssandra-0-control-plane
    Capacity:
      cpu:                10
      ephemeral-storage:  2061040144Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             61714452Ki
      pods:               110
    Allocatable:
      cpu:                10
      ephemeral-storage:  2061040144Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             61714452Ki
      pods:               110
    System Info:
      Machine ID:                 7fcb4f58ddde4f989494c54b01582c15
      System UUID:                d2f2c6e4-0f58-4c3c-aaf6-f7e04c1add74
      Boot ID:                    9ec8e4bd-a59c-488b-a90c-82b8b4169a50
      Kernel Version:             5.15.0-88-generic
      OS Image:                   Ubuntu 22.04.1 LTS
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  containerd://1.6.9
      Kubelet Version:            v1.25.3
      Kube-Proxy Version:         v1.25.3
    PodCIDR:                      10.244.0.0/24
    PodCIDRs:                     10.244.0.0/24
    ProviderID:                   kind://docker/k8ssandra-0/k8ssandra-0-control-plane
    Non-terminated Pods:          (10 in total)
      Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
      k8ssandra-operator          demo-dc1-default-sts-1                               1 (10%)       1 (10%)     256M (0%)        384M (0%)      57m
      kube-system                 coredns-565d847f94-4c9wd                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18h
      kube-system                 coredns-565d847f94-7xph8                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18h
      kube-system                 etcd-k8ssandra-0-control-plane                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         67m
      kube-system                 kindnet-crr52                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18h
      kube-system                 kube-apiserver-k8ssandra-0-control-plane             250m (2%)     0 (0%)      0 (0%)           0 (0%)         67m
      kube-system                 kube-controller-manager-k8ssandra-0-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18h
      kube-system                 kube-proxy-5hs7d                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      kube-system                 kube-scheduler-k8ssandra-0-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18h
      local-path-storage          local-path-provisioner-684f458cdd-k8llm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests        Limits
      --------           --------        ------
      cpu                1950m (19%)     1100m (11%)
      memory             560087040 (0%)  792944640 (1%)
      ephemeral-storage  0 (0%)          0 (0%)
      hugepages-1Gi      0 (0%)          0 (0%)
      hugepages-2Mi      0 (0%)          0 (0%)
    Events:              <none>

node k8ssandra-0-worker:

    root@k8s-eu-1-master:~# kubectl describe node k8ssandra-0-worker -n k8ssandra-operator
    Name:               k8ssandra-0-worker
    Roles:              <none>
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8ssandra-0-worker
                        kubernetes.io/os=linux
                        topology.kubernetes.io/zone=region1-zone1
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Fri, 10 Nov 2023 18:44:47 +0100
    Taints:             <none>
    Unschedulable:      false
    Lease:
      HolderIdentity:  k8ssandra-0-worker
      AcquireTime:     <unset>
      RenewTime:       Sat, 11 Nov 2023 12:51:22 +0100
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Sat, 11 Nov 2023 12:49:01 +0100   Fri, 10 Nov 2023 18:44:46 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Sat, 11 Nov 2023 12:49:01 +0100   Fri, 10 Nov 2023 18:44:46 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Sat, 11 Nov 2023 12:49:01 +0100   Fri, 10 Nov 2023 18:44:46 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            True    Sat, 11 Nov 2023 12:49:01 +0100   Fri, 10 Nov 2023 18:44:57 +0100   KubeletReady                 kubelet is posting ready status
    Addresses:
      InternalIP:  172.19.0.3
      Hostname:    k8ssandra-0-worker
    Capacity:
      cpu:                10
      ephemeral-storage:  2061040144Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             61714452Ki
      pods:               110
    Allocatable:
      cpu:                10
      ephemeral-storage:  2061040144Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             61714452Ki
      pods:               110
    System Info:
      Machine ID:                 affd0d73f2d64762a0c5d2bcf9ab9dd8
      System UUID:                aad75b1b-f55c-4e3e-9cda-84c4c5de9caa
      Boot ID:                    9ec8e4bd-a59c-488b-a90c-82b8b4169a50
      Kernel Version:             5.15.0-88-generic
      OS Image:                   Ubuntu 22.04.1 LTS
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  containerd://1.6.9
      Kubelet Version:            v1.25.3
      Kube-Proxy Version:         v1.25.3
    PodCIDR:                      10.244.1.0/24
    PodCIDRs:                     10.244.1.0/24
    ProviderID:                   kind://docker/k8ssandra-0/k8ssandra-0-worker
    Non-terminated Pods:          (8 in total)
      Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
      cert-manager                cert-manager-775b959d64-hz8bv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      cert-manager                cert-manager-cainjector-97795797f-hmz79              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      cert-manager                cert-manager-webhook-979c74b9c-pblnd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      k8ssandra-operator          demo-dc1-default-sts-2                               1 (10%)       1 (10%)     256M (0%)        384M (0%)      17h
      k8ssandra-operator          k8ssandra-operator-7998574dd5-567qq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      k8ssandra-operator          k8ssandra-operator-cass-operator-7599b94d9d-s7nrr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
      kube-system                 kindnet-hr6jp                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18h
      kube-system                 kube-proxy-27g5d                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18h
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests        Limits
      --------           --------        ------
      cpu                1100m (11%)     1100m (11%)
      memory             308428800 (0%)  436428800 (0%)
      ephemeral-storage  0 (0%)          0 (0%)
      hugepages-1Gi      0 (0%)          0 (0%)
      hugepages-2Mi      0 (0%)          0 (0%)
    Events:              <none>

The Pending PersistentVolumeClaim is waiting for the Pending Pod to be scheduled:

    root@k8s-eu-1-master:~# kubectl get pvc -n k8ssandra-operator -o wide
    NAME                                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
    server-data-demo-dc1-default-sts-0   Pending                                                                        standard       67m   Filesystem
    server-data-demo-dc1-default-sts-1   Bound     pvc-3bb18f4b-611c-42f6-bfd3-798cdf8d76af   5Gi        RWO            standard       18h   Filesystem
    server-data-demo-dc1-default-sts-2   Bound     pvc-47e9d42e-2f67-4901-a0a4-e21e8928d66c   5Gi        RWO            standard       18h   Filesystem


    root@k8s-eu-1-master:~# kubectl describe pvc server-data-demo-dc1-default-sts-0 -n k8ssandra-operator
    Name:          server-data-demo-dc1-default-sts-0
    Namespace:     k8ssandra-operator
    StorageClass:  standard
    Status:        Pending
    Volume:        
    Labels:        app.kubernetes.io/created-by=cass-operator
                   app.kubernetes.io/instance=cassandra-demo
                   app.kubernetes.io/managed-by=cass-operator
                   app.kubernetes.io/name=cassandra
                   app.kubernetes.io/version=4.0.1
                   cassandra.datastax.com/cluster=demo
                   cassandra.datastax.com/datacenter=dc1
                   cassandra.datastax.com/rack=default
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       demo-dc1-default-sts-0
    Events:
      Type    Reason               Age                   From                         Message
      ----    ------               ----                  ----                         -------
      Normal  WaitForPodScheduled  4m7s (x262 over 69m)  persistentvolume-controller  waiting for pod demo-dc1-default-sts-0 to be scheduled
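
That WaitForPodScheduled event is expected: the standard StorageClass that kind creates is backed by the local-path provisioner and, as far as I know, uses WaitForFirstConsumer volume binding, so the PVC stays Pending until its pod is scheduled. The binding mode can be confirmed with:

    root@k8s-eu-1-master:~# kubectl get storageclass standard -o jsonpath='{.volumeBindingMode}'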

This is the K8ssandraCluster.yaml:

    # https://docs.k8ssandra.io/install/local/single-cluster-helm/#deploy-the-k8ssandracluster
    # https://docs.k8ssandra.io/tasks/scale/add-nodes/#create-a-cluster
    
    apiVersion: k8ssandra.io/v1alpha1
    kind: K8ssandraCluster
    metadata:
       name: my-k8ssandra
    spec:
       cassandra:
          serverVersion: "4.0.5"
          datacenters:
             - metadata:
                  name: dc1
               size: 6
          softPodAntiAffinity: true
          # Resources must be specified for each Cassandra node when using softPodAntiAffinity
          resources:
             requests:
                cpu: 1
                memory: 2Gi
             limits:
                cpu: 2
                memory: 2Gi
          # It is also recommended to set the JVM heap size
          config:
             jvmOptions:
                heap_initial_size: 1G
                heap_max_size: 1G
          storageConfig:
             cassandraDataVolumeClaimSpec:
                storageClassName: standard
                accessModes:
                   - ReadWriteOnce
                resources:
                   requests:
                      storage: 5Gi

What is the root cause of the problem? And how can I get the pod into the Running state?

Hi, Raphy

The root cause of the problem is that the pod cannot be scheduled without violating the pod anti-affinity rules defined in your Kubernetes cluster. Anti-affinity rules prevent multiple pods with specific labels from being scheduled onto the same node, so a pod stays Pending when every node would violate the rule.
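
For context, cass-operator normally gives every Cassandra pod of a datacenter a hard anti-affinity rule, so no two of them can share a node. A sketch of what such a rule looks like (the exact label set the operator generates may differ):

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                cassandra.datastax.com/cluster: demo
                cassandra.datastax.com/datacenter: dc1
            topologyKey: kubernetes.io/hostname

With a required rule and only two schedulable nodes, at most two such pods can run; your sts-1 and sts-2 already occupy both nodes, so sts-0 has nowhere to go. The soft variant (preferredDuringSchedulingIgnoredDuringExecution) only penalizes co-location instead of forbidding it, which is what softPodAntiAffinity in the K8ssandraCluster spec is meant to enable.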

For the pods to change their state to Running, I think you can work through the following:

Review Anti-Affinity Rules: Review the anti-affinity rules defined for your pod. These rules may be set directly in the pod specification or injected by the operator that manages your deployment.
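
For example, to see exactly which anti-affinity terms the scheduler evaluates for the stuck pod:

    kubectl get pod demo-dc1-default-sts-0 -n k8ssandra-operator -o jsonpath='{.spec.affinity.podAntiAffinity}'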

Node Affinity and Anti-Affinity: Check whether there are specific rules that prevent these pods from being scheduled onto the existing nodes. If necessary, relax the anti-affinity rules so that scheduling on the existing nodes is allowed.

Node Availability: Ensure that the nodes you expect the pods to run on are Ready and schedulable. You can use the kubectl get nodes command to check the status of your nodes.

Pod Priority and Preemption: If you are using pod priority and preemption, make sure the priority levels are set correctly and that the preemption policies allow the necessary evictions.
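
For example (your describe output shows Priority: 0, i.e. no PriorityClass is assigned):

    kubectl get priorityclasses
    kubectl get pod demo-dc1-default-sts-0 -n k8ssandra-operator -o jsonpath='{.spec.priorityClassName}'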

Logs and Events: Check the scheduler, controller-manager, and kubelet logs for any error messages related to the scheduling problem. Also review the events attached to the pods for more detail on why they are not scheduled.
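
For example, to list only the events for the Pending pod, newest last:

    kubectl get events -n k8ssandra-operator --field-selector involvedObject.name=demo-dc1-default-sts-0 --sort-by=.lastTimestamp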

Update the Pod Spec: If you have configured anti-affinity rules, update the spec accordingly and apply the changes using kubectl apply. Note that these pods are owned by a StatefulSet that cass-operator manages, so the change has to go into the K8ssandraCluster resource rather than into the pods themselves; see the sketch below.
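
Using the manifest file name from your question (adjust the path to wherever the file actually lives):

    kubectl apply -n k8ssandra-operator -f K8ssandraCluster.yaml
    kubectl get pods -n k8ssandra-operator -w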

You should be able to get the pods into the Running state by addressing the issues with the anti-affinity rules and ensuring that the nodes meet the pods' scheduling requirements.

Hi Jamall!

Thank you for your suggestions and this useful checklist.

I’ve already addressed all these points, with the help of another kind person.
But we agreed that we don’t understand the root cause of the problem.
Would you be so kind as to have a look at what we have done so far? Slack

Hi, Raphy
I saw the Slack channel.
I suggest you recreate everything from scratch, step by step.
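
If you do rebuild, and assuming you are running on kind (as the ProviderID kind://docker/k8ssandra-0/... in your node output suggests), a hypothetical kind config that gives every Cassandra pod a node of its own would look like this:

    # kind-config.yaml (hypothetical file name)
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: worker
      - role: worker
      - role: worker

Then create the cluster with kind create cluster --config kind-config.yaml before reinstalling the operators.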