A node died and was brought back up, but none of the pods rescheduled onto it

Running a 3-node cluster on AWS EKS 1.13.7. One of the backing EC2 instances had some sort of issue, so I stopped it, and the AWS auto-scaling group kicked in and created a new EC2 instance. All good, except now none of my pods are running on the new EC2 instance; they are all running on the two other nodes. How can I have Kubernetes reschedule pods evenly again across all nodes? This seems like something Kubernetes should handle.

In my experience, they will not reschedule on their own. The scheduler only places pods when they are created; it does not move running pods to rebalance the cluster. Here is an article I read that explains scheduling: https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7
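
If you can tolerate a brief disruption, one way to force a rebalance is to cordon the two busy nodes and delete the pods so the scheduler places the replacements on the new node. Below is a minimal sketch using kubectl, assuming your pods are managed by Deployments or ReplicaSets (so deleted pods get recreated); the node names, label, and namespace are placeholders, substitute your own:

```
# Assumes pods are owned by a Deployment/ReplicaSet, so deleted pods are recreated.
# Node names, label, and namespace below are placeholders -- substitute your own.

# Mark the two busy nodes unschedulable so replacement pods can only land on the new node.
kubectl cordon ip-10-0-1-23.ec2.internal
kubectl cordon ip-10-0-2-45.ec2.internal

# Delete the pods you want moved; their controllers recreate them on the new node.
kubectl delete pods -l app=my-app -n my-namespace

# Once the pods have spread out, allow scheduling on the old nodes again.
kubectl uncordon ip-10-0-1-23.ec2.internal
kubectl uncordon ip-10-0-2-45.ec2.internal
```

If you want this to happen automatically, the Kubernetes descheduler project (kubernetes-sigs/descheduler) can evict pods from over-utilized nodes so the scheduler spreads them out again.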