How to Schedule a Pod on a Different Node in GKE

I have a cluster running in GKE Standard mode.

Here are the details:

PROD env: GKE Standard cluster with 2 node pools and 9 nodes.

default-pool        1.28.3-gke.1286000    5 (1 - 2 per zone)    e2-standard-2    
xxxxxx-node-pool    1.28.11-gke.1019001   4 (2 per zone)    e2-highcpu-4 

The problem: I run 2 Pods on different nodes. This was working, but now when a developer pushes code, the Pods are not picking up the new changes because the Deployment is still running an old date-based tag; it only deploys when I push with the latest tag. I deploy via CI/CD using GitHub Actions.

How can I achieve automatic deployment via GitHub Actions, while ensuring those 2 Pods deploy only to this node pool, i.e. xxxxxx-node-pool?

Here is my YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: devops-test
 namespace: devops
spec:
 replicas: 2
 selector:
   matchLabels:
     app: devops-test
 template:
   metadata:
     labels:
       app: devops-test
   spec:
     containers:
       - name: devops-test
         image: us-south1-docker.pkg.dev/xxxxxxxxxxxxxxx:2024-07-29-1532
         imagePullPolicy: Always
         ports:
         - containerPort: 8080
          resources:
            requests: # Minimum amount of resources requested
              cpu: 1000m
              memory: 1024Mi
            limits: # Maximum amount of resources allowed
              cpu: 2000m
              memory: 2048Mi
      # nodeSelector:
      #   pod: engine
     affinity:
      nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
         - matchExpressions:
           - key: pod
             operator: In
             values:
             - engine
       preferredDuringSchedulingIgnoredDuringExecution:
       - weight: 1
         preference:
           matchExpressions:
           - key: pod
             operator: In
             values:
             - engine
       # - weight: 50
       #   preference:
       #     matchExpressions:
       #     - key: label-2
       #       operator: In
       #       values:
       #       - key-2              
---
apiVersion: v1
kind: Service
metadata:
  name: devops-test
  namespace: devops
  labels:
    app: devops-test
spec:
  type: ClusterIP
  selector:
    app: devops-test
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP

And these are the 2 nodes on which I want my 2 Pods to run:

gke-devops-prod-stan-devops-node-pool-8836938a-9gpw
gke-devops-prod-stan-devops-node-pool-bc9b2051-5zvg

I don't know much about nodeAffinity or nodeSelector.
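For reference, GKE automatically labels every node with its node pool name under the well-known label `cloud.google.com/gke-nodepool`, so a custom label like `pod: engine` isn't strictly needed to pin Pods to a pool. A simpler alternative is a plain nodeSelector on that built-in label (a sketch, assuming the pool name from the listing above):

```yaml
# Goes in the Deployment's Pod template (spec.template.spec).
# cloud.google.com/gke-nodepool is set by GKE on every node automatically,
# so no manual "kubectl label nodes" step is required.
nodeSelector:
  cloud.google.com/gke-nodepool: xxxxxx-node-pool
```

With this in place, the scheduler only considers the nodes of that pool, and the custom `requiredDuringSchedulingIgnoredDuringExecution` affinity block becomes unnecessary.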

Autoscaling is turned off on both node pools.

I just want the deployment to roll out automatically without any issues.

This is the GitHub Actions deployment into the PROD env:

### Google Kubernetes Engine (GKE) ###
    - name: Deploy to PROD Env (GKE)
      run: |
         gcloud components install gke-gcloud-auth-plugin
         gcloud container clusters get-credentials devpops-prod-standard --region us-central1 --project devops-production
         kubectl set image deployment devops-test devops-test=${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROD_PROJECT }}/${{ env.REPOSITORY }}/${{ env.IMAGE }}:${{ steps.date.outputs.date }} -n devops
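One refinement to this step: `kubectl set image` returns immediately, before the rollout actually succeeds, so the Action shows green even when the new Pod is stuck in Pending. Adding a rollout check makes the pipeline fail fast (a sketch, reusing the same env variables as above):

```yaml
    - name: Deploy to PROD Env (GKE)
      run: |
        gcloud components install gke-gcloud-auth-plugin
        gcloud container clusters get-credentials devpops-prod-standard --region us-central1 --project devops-production
        kubectl set image deployment devops-test devops-test=${{ env.GAR_LOCATION }}-docker.pkg.dev/${{ env.PROD_PROJECT }}/${{ env.REPOSITORY }}/${{ env.IMAGE }}:${{ steps.date.outputs.date }} -n devops
        # Fail the workflow if the rollout does not complete within 5 minutes
        kubectl rollout status deployment/devops-test -n devops --timeout=5m
```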

Error message

The issue is still not solved: when a developer commits new code or changes, the new Pod stays in Pending. When I describe the Pod, it says:

0/9 nodes are available: 2 Insufficient cpu, 2 Insufficient memory, 7 node(s) didn't match Pod's node affinity/selector. preemption: 0/9 nodes are available: 2 No preemption victims found for incoming pod, 7 Preemption is not helpful for scheduling.
Normal  NotTriggerScaleUp  97s  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
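That event explains the failure: only the 2 nodes matching the affinity are candidates, and neither has 1 CPU / 1Gi of free allocatable capacity for the surge Pod that a default rolling update creates (with 2 replicas, the default maxSurge of 25% rounds up to 1 extra Pod). If a brief capacity dip is acceptable, one way out is to update in place instead of surging (a sketch to add under the Deployment's spec):

```yaml
# Replace an old Pod before creating its successor, so the rollout
# never needs spare capacity on the already-full nodes.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0        # do not create an extra Pod during the rollout
    maxUnavailable: 1  # allow one replica to go down while updating
```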

Why do you care about which node a pod runs on? Is there a specific reason?
Ideally you are not "hand-picking" which nodes pods run on.
You can monitor the progress of a rollout like this:

kubectl rollout status deployment/devops-test -n devops

It's also wise to examine the event log to see if there is some reason why a deployment is not completing:

kubectl get events -n devops

When you "set" a new image, a new ReplicaSet is created and the system "rolls" over to it. You can examine each ReplicaSet for a deployment with this:

kubectl get replicaset -n devops