I have 5 GKE Kubernetes clusters, one in each of 5 regions, all within the same VPC. I want to make it possible for a Pod in one cluster/region to make a gRPC request to a Service in another one, without opening that Service to external traffic.
I cannot use Internal Load Balancers, since they are only reachable from within the same region, so I was trying to use a regular LoadBalancer with a `loadBalancerSourceRanges` restriction:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-pod
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/8
```
This does not work, because requests from a pod arrive at the load balancer with the node's external IP as their source address, e.g. 126.96.36.199. Keeping `loadBalancerSourceRanges` up to date with the external IPs of all the nodes across all regions seems challenging and error-prone. I'm also concerned about this traffic leaving the VPC, where it could be snooped on.
Does anyone have any ideas how to implement what I’m describing?
Essentially, I want a central program to manage global assignment of data among my regions, and to do that it needs to inform individual instances that they have a new set of data to manage.
The only other option I can think of within Kubernetes is a NodePort Service, but that requires hard-coding the IPs of each cluster's nodes into the client configuration and making the client deal with nodes that are down, which seems even worse than the solution above. It could be improved by provisioning a GCP Network Load Balancer in each region to distribute traffic across that cluster's nodes, although that is complicated by having to configure HTTP health checks on a different port than the traffic itself, managed separately from the Kubernetes Service. I think this is the current best solution, although it's more management overhead than I'd like.
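For reference, the NodePort variant I'm describing would look roughly like this (the service name, labels, and port numbers are placeholders, and the fixed `nodePort` is what the per-region Network Load Balancer would target):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  type: NodePort
  selector:
    app: my-grpc-app
  ports:
  - name: grpc
    port: 50051        # cluster-internal port
    targetPort: 50051  # container port serving gRPC
    nodePort: 30051    # pinned so the regional load balancer and its
                       # health check can be configured against a known port
```

The separately-managed HTTP health check would then have to probe each node on a different port (e.g. a health endpoint exposed via another NodePort), which is the extra configuration overhead I mentioned.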
Any other ideas?
Thank you very much for your thoughts!
From the GCP internal load balancing documentation:

> "Only client VMs in the region can access the internal TCP/UDP load balancer"
Kubernetes version: v1.12.7-gke.7
Cloud being used: GKE