Multi-node cluster node hostname resolution problems

I set up a multi-node cluster with a private VPC in AWS.

Most things work, except when I need to fetch logs from a pod that is running on another node.

microk8s kubectl's control plane API call will use the node's name (for example ip-10-0-0-206), which fails to resolve because that name isn't set up anywhere.

How can I change kubectl's behaviour to use the node's private IP within the VPC, and what mechanism do microk8s and kubectl use to decide which address to use when accessing a node?
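From a bit of digging (happy to be corrected), it isn't kubectl itself that resolves the node name: for `logs` and `exec` the API server connects out to the kubelet on the target node, and it picks the address according to its `--kubelet-preferred-address-types` flag, which prefers the hostname by default. If that's right, one workaround might be to put `InternalIP` first in that list via the microk8s args file. A sketch of what I mean (check the file's current contents first, the restart step may differ on your setup):

```bash
# Tell the API server to prefer the node's InternalIP over its hostname
# when proxying logs/exec to the kubelet. Sketch only: verify the file
# doesn't already contain this flag before appending.
echo '--kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP' \
  | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver

# Restart microk8s so the API server picks up the new argument
sudo microk8s stop && sudo microk8s start
```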

If you can change your /etc/hosts to include the node IPs, I would go that route.
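Something like this on each node; the hostname is the one from your example, and the IPs/extra node are placeholders you'd replace with your actual nodes:

```bash
# Map the node hostnames to their private VPC IPs so the API server
# can resolve them. Names and IPs below are examples only.
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.206  ip-10-0-0-206
10.0.0.42   ip-10-0-0-42
EOF
```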


If I were to automate the creation of the cluster (via Terraform or otherwise), how do I ensure it works out of the box? I would want it to be configured to work by default, without manually configuring DNS :confused: or is this an AWS setup issue? :o

I use Terraform to bootstrap a cluster. Well, I also use Terraform to add the node names to /etc/hosts. Perhaps not the most elegant approach.
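Roughly, the provisioning step looks like this. It's a sketch of the idea only: Terraform runs something like it on each instance (e.g. via a remote-exec provisioner) and templates in the real private IPs and hostnames; the values below are placeholders.

```bash
#!/usr/bin/env bash
# Sketch: append each cluster node's private IP and hostname to /etc/hosts.
# Terraform would substitute the real values; these are placeholders.
NODES="10.0.0.206=ip-10-0-0-206 10.0.0.42=ip-10-0-0-42"

for entry in $NODES; do
  ip="${entry%%=*}"     # part before '='
  name="${entry##*=}"   # part after '='
  # Only add the line if the hostname isn't already present
  grep -q "$name" /etc/hosts || echo "$ip  $name" | sudo tee -a /etc/hosts
done
```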

Shameless plug.