How to stop traffic to a specific node (or pod)?

If my application fails inside a pod, but the pod is still in the Running state (neither the pod itself nor the node is failing), can I stop traffic to that pod and isolate it from my cluster for investigation?
I don't want to restart or delete the pod/node, because I want to keep it running so I can ssh into the pod and find the root cause of the issue. But I also don't want it to keep receiving traffic, because the failure is affecting my service. How can I achieve this?

Thanks in advance!

Change the labels so it no longer matches the service?
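A minimal sketch of what that could look like (the pod name is hypothetical; `--overwrite` is required because the `app` label already exists on the pod):

```shell
# Relabel the pod so the Service's selector (e.g. app: myAppName) no longer
# matches it. The endpoints controller then drops the pod from the Service's
# endpoints and traffic stops, while the pod itself keeps running.
kubectl label pod my-pod-abc123 app=quarantine --overwrite
```

Note that if the pod is managed by a Deployment/ReplicaSet, relabeling it also detaches it from the controller's selector, so a replacement pod will be created.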

@thockin Thank you so much for your timely reply.
However, I tried changing the labels, and things are not working as I expected.

In my deployment, I defined a podAntiAffinity rule as follows (on any given node, pods cannot share the same value for the "releaseTimestamp" label; this is how I limit it to one pod per node):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myAppName
  template:
    metadata:
      labels:
        app: myAppName
        releaseTimestamp: {{ required "The releaseTimestamp parameter must be provided." myReleaseTimestamp }}
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: releaseTimestamp
                operator: In
                values:
                - myReleaseTimestamp 
            topologyKey: "kubernetes.io/hostname"

I originally had three nodes, each running one pod (app: myAppName, releaseTimestamp: myReleaseTimestamp).
Then I changed the labels of Pod1 (on Node 1) to app: test, releaseTimestamp: test.
A new pod was then created, which is expected.
But the new pod is stuck in the Pending state because scheduling failed.
The error message is: 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't satisfy existing pods anti-affinity rules.

I don't understand why the new pod is not scheduled on Node 1: I already changed the value of Pod1's releaseTimestamp label, so there are no longer any pods with the value myReleaseTimestamp on that node. The new pod should be able to be scheduled there, and it does not seem to violate the podAntiAffinity rule.
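The two clauses in that error message correspond to two checks, and I think the second one is the catch: relabeling Pod1 changed its labels but not its pod spec, so Pod1 still carries the anti-affinity term selecting releaseTimestamp in [myReleaseTimestamp], and that term now matches the new pod. A simplified Python sketch of my understanding of the check (the data structures here are my own assumptions, not the real scheduler's API):

```python
# Sketch of the scheduler's required inter-pod anti-affinity check for one node.
# Simplified; only mirrors the logic behind the two clauses in the error message.

def labels_match_term(labels, term):
    """True if `labels` satisfy an anti-affinity term's matchExpressions."""
    for expr in term["matchExpressions"]:
        if expr["operator"] == "In" and labels.get(expr["key"]) not in expr["values"]:
            return False
    return True

def node_allows(new_pod_labels, new_pod_terms, existing_pods):
    # Check 1: the new pod's own anti-affinity must not match any existing pod
    # on the node ("didn't match pod affinity/anti-affinity").
    for pod in existing_pods:
        for term in new_pod_terms:
            if labels_match_term(pod["labels"], term):
                return False, "didn't match pod affinity/anti-affinity"
    # Check 2 (symmetry): no existing pod's anti-affinity may match the new pod
    # ("didn't satisfy existing pods anti-affinity rules").
    for pod in existing_pods:
        for term in pod.get("antiAffinityTerms", []):
            if labels_match_term(new_pod_labels, term):
                return False, "didn't satisfy existing pods anti-affinity rules"
    return True, "schedulable"

term = {"matchExpressions": [
    {"key": "releaseTimestamp", "operator": "In", "values": ["myReleaseTimestamp"]}]}

# Pod1 on Node 1 after relabeling: its labels changed, but its spec
# (including the anti-affinity term) did not.
pod1 = {"labels": {"app": "test", "releaseTimestamp": "test"},
        "antiAffinityTerms": [term]}

new_pod_labels = {"app": "myAppName", "releaseTimestamp": "myReleaseTimestamp"}
ok, reason = node_allows(new_pod_labels, [term], [pod1])
print(ok, reason)  # -> False didn't satisfy existing pods anti-affinity rules
```

If this is right, Node 1 is rejected by check 2 (the relabeled Pod1's own anti-affinity term matches the new pod), while Nodes 2 and 3 are rejected by check 1 (their pods still carry releaseTimestamp: myReleaseTimestamp).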