AWS EKS cluster IP (service) CIDR is overlapping with another VPC CIDR

Cluster information:

Kubernetes version: 1.15
Cloud being used: AWS
Installation method: EKSCTL
Host OS: Amazon Linux
CNI and version: unsure how to check (see the commands sketched below)
CRI and version: unsure how to check (see the commands sketched below)
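
For the CNI and CRI fields above, these are the commands I'm planning to run to fill in the blanks (assuming the standard aws-node daemonset that eksctl installs is present):

```
# Amazon VPC CNI plugin version (from the aws-node daemonset image tag)
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Container runtime (CRI) and version, shown per node in the CONTAINER-RUNTIME column
kubectl get nodes -o wide
```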

I have created an EKS cluster (using eksctl) in the default VPC CIDR (192.168.0.0/16), and I also have a separate VPC on a 10.100.0.0/16 CIDR (for my DBs). For some unknown reason, the Kubernetes cluster IP (service) range is now 10.100.0.0/16 on a production system. How do I change the cluster IP CIDR that Kubernetes is using, as it clashes with the IP addresses in my DB VPC subnets?
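
Here's how I've been checking which range the cluster is actually using (replace <cluster-name> with your own cluster name; the describe-cluster field is an assumption on my part and may not be present on older clusters or CLI versions):

```
# The default 'kubernetes' service is allocated the first IP of the service range
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'   # e.g. 10.100.0.1

# All cluster IPs currently allocated to services
kubectl get svc --all-namespaces

# What EKS itself reports as the cluster's service CIDR
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.kubernetesNetworkConfig.serviceIpv4Cidr' --output text
```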

This is on a system that's going live in a few days, so I need a few options within Kubernetes. It would be useful to know where the internal Kubernetes CIDR range of 10.100.0.0/16 is set and how I can change it. According to this post - https://forums.aws.amazon.com/thread.jspa?messageID=859958 - EKS picks a cluster IP range (either 10.100.0.0/16 or 172.20.0.0/16) depending on the range of the VPC I told it to use, but in my case it picked one that clashes with my DBs.
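
From what I've read, the service CIDR can't be changed on an existing EKS cluster, only set when the cluster is created, so the fallback I'm considering is recreating it with the range pinned explicitly. This is just a sketch based on the eksctl ClusterConfig docs - the cluster name, region, and node group below are placeholders, and I'm assuming my eksctl version supports kubernetesNetworkConfig.serviceIPv4CIDR:

```
# Placeholder names/region below - not my real setup
cat > new-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-new-cluster            # placeholder cluster name
  region: eu-west-1               # placeholder region
vpc:
  cidr: 192.168.0.0/16            # same default VPC CIDR eksctl used before
kubernetesNetworkConfig:
  serviceIPv4CIDR: 172.20.0.0/16  # keep the service range away from the DB VPC (10.100.0.0/16)
nodeGroups:
  - name: ng-1                    # placeholder node group
    instanceType: m5.large
    desiredCapacity: 2
EOF

eksctl create cluster -f new-cluster.yaml
```

If anyone knows a way to do this without rebuilding the cluster, that would obviously be preferable.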

Any help much appreciated.

Jat
