[Help] K8s cluster with --cloud-provider="aws"

Cluster information:

Kubernetes version: v1.15.1
Cloud being used: aws
Installation method: kubeadm
Host OS: Ubuntu 16.04

Hi All,
I'm having problems running kubelet with --cloud-provider=aws. I've done everything required, such as:

  1. passing extraArgs in kubeadm init config
  2. tagging all AWS resources and attaching IAM roles with the required policies to the instances.
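
For reference, the kubeadm init config I'm describing looks roughly like this (a minimal sketch; the apiVersion matches the kubeadm API shipped with v1.15, and the values are just what I'm passing, not a full config):

```yaml
# Sketch of a kubeadm init config wiring cloud-provider=aws into the
# API server, controller manager, and kubelet (kubeadm v1beta2 API).
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
```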

Still kubelet is giving me the following error:
Dec 16 10:47:04 ip-172-31-2-73 kubelet[23467]: E1216 10:47:04.798125 23467 kubelet.go:2248] node "ip-172-31-2-73.us-east-2.compute.internal" not found
Dec 16 10:47:04 ip-172-31-2-73 kubelet[23467]: E1216 10:47:04.832558 23467 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://172.31.2.73:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/ip-172-31-2-73.us-east-2.compute.internal?timeout=10s: dial tcp 172.31.2.73:6443: connect: connection refused
Dec 16 10:47:04 ip-172-31-2-73 kubelet[23467]: E1216 10:47:04.898307 23467 kubelet.go:2248] node "ip-172-31-2-73.us-east-2.compute.internal" not found

Basically, "node not found" is the main error, alongside "connection refused" whenever the kubelet tries to reach the API server on 172.31.2.73:6443. I don't understand why the node can't be found.
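
If it matters: the "connection refused" on :6443 makes me suspect kube-apiserver itself may not be up, and the "node not found" errors are downstream of that. A rough sketch of how I've been checking on the node (kubeadm default paths; adjust as needed):

```shell
# Rough diagnostic sketch (kubeadm default paths; not a definitive check).
# "connection refused" on :6443 usually means kube-apiserver is not running,
# and kubelet node registration fails as a consequence.

# Static-pod manifests kubeadm writes for the control plane:
ls /etc/kubernetes/manifests/ 2>/dev/null || true

# Is anything listening on the API server port?
ss -tlnp 2>/dev/null | grep 6443 || echo "nothing listening on 6443"
```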
Please help.