How can I stop completed Job pods from restarting after scale-down?

I’m running an Indexed Job with completions=30 and parallelism=30. Its pods have a large memory request, so each pod needs its own node. Some pods complete quickly; once they finish, the cluster autoscaler scales the nodes down, and the completed pods are evicted and restarted on another node. How can I stop this behaviour? Setting a PodDisruptionBudget or the annotation "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" did not help (see the sketch below).
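
For reference, this is roughly how I applied the annotation and the PDB. The names, labels, image, and memory value here are illustrative placeholders, not my exact manifests:

```yaml
# Sketch of the Job (illustrative names/values); the safe-to-evict
# annotation is set on the pod template, not on the Job object itself.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-indexed-job            # illustrative name
spec:
  completionMode: Indexed
  completions: 30
  parallelism: 30
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      labels:
        app: my-indexed-job       # illustrative label, used by the PDB selector
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: example.azurecr.io/worker:latest   # illustrative image
          resources:
            requests:
              memory: "100Gi"     # illustrative; large enough to need one node per pod
---
# PodDisruptionBudget I also tried, matching the Job's pods by label
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-indexed-job-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: my-indexed-job
```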

Cluster information:

Kubernetes version: 1.24.3
Cloud being used: Azure Kubernetes Service (AKS)
Installation method: Helm
Host OS: Linux
CNI and version: Sorry, I’m not sure
CRI and version: Sorry, I’m not sure