Connect Kubernetes clusters on GCP and keep different projects separated

I want to set up several GKE clusters like here. So essentially, I would first create a VPC:

gcloud compute networks create ${network_name} --subnet-mode=custom

and then the subnets:

gcloud compute networks subnets create ${subnet_1} \
    --region=${region_1} \
    --network=${network_name} \
    --range=10.0.0.0/16 \
    --secondary-range pods=10.10.0.0/16,services=10.100.0.0/16

gcloud compute networks subnets create ${subnet_2} \
    --region=${region_2} \
    --network=${network_name} \
    --range=10.1.0.0/16 \
    --secondary-range pods=10.11.0.0/16,services=10.101.0.0/16

gcloud compute networks subnets create ${subnet_3} \
    --region=${region_3} \
    --network=${network_name} \
    --range=10.2.0.0/16 \
    --secondary-range pods=10.12.0.0/16,services=10.102.0.0/16

and then three GKE clusters:

gcloud beta container clusters create ${cluster_1} \
    --region ${region_1} --num-nodes 1 \
    --network ${network_name} --subnetwork ${subnet_1} \
    --cluster-dns clouddns --cluster-dns-scope vpc \
    --cluster-dns-domain ${cluster_domain_1} \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods --services-secondary-range-name=services

gcloud beta container clusters create ${cluster_2} \
    --region ${region_2} --num-nodes 1 \
    --network ${network_name} --subnetwork ${subnet_2} \
    --cluster-dns clouddns --cluster-dns-scope vpc \
    --cluster-dns-domain ${cluster_domain_2} \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods --services-secondary-range-name=services

gcloud beta container clusters create ${cluster_3} \
    --region ${region_3} --num-nodes 1 \
    --network ${network_name} --subnetwork ${subnet_3} \
    --cluster-dns clouddns --cluster-dns-scope vpc \
    --cluster-dns-domain ${cluster_domain_3} \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods --services-secondary-range-name=services

Furthermore, we need the node pools (shown here only for cluster no. 1):

gcloud container node-pools create pd --cluster ${cluster_1} --machine-type n1-standard-4 --num-nodes=1 \
    --node-labels=dedicated=pd --node-taints=dedicated=pd:NoSchedule
gcloud container node-pools create tikv --cluster ${cluster_1}  --machine-type n1-highmem-8 --num-nodes=1 \
    --node-labels=dedicated=tikv --node-taints=dedicated=tikv:NoSchedule
gcloud container node-pools create tidb --cluster ${cluster_1}  --machine-type n1-standard-8 --num-nodes=1 \
    --node-labels=dedicated=tidb --node-taints=dedicated=tidb:NoSchedule
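
Since these pools are tainted, only pods that explicitly tolerate the taint (and, ideally, select the matching node label) will be scheduled onto them. In a TiDB deployment the operator normally fills these fields in, but as a hedged, minimal sketch (pod name and image are placeholders), a pod pinned to the `pd` pool would look like:

```yaml
# Hypothetical pod pinned to the "pd" pool; name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pd-example
spec:
  nodeSelector:
    dedicated: pd            # matches the --node-labels above
  tolerations:
  - key: dedicated
    operator: Equal
    value: pd
    effect: NoSchedule       # tolerates the --node-taints above
  containers:
  - name: pd
    image: pingcap/pd
```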

Here begins the interesting part: we list the firewall rules that GKE created for cluster no. 1:

gcloud compute firewall-rules list --filter="name~gke-${cluster_1}-.*-all"

and we allow incoming traffic from the other clusters' pod secondary ranges:

gcloud compute firewall-rules update ${firewall_rule_name} --source-ranges 10.10.0.0/16,10.11.0.0/16,10.12.0.0/16

If we repeat this for all clusters, they are interconnected, i.e., a service in cluster A can be reached from cluster B.

Now, I am facing the following situation. Say we have projects A and B and a single cluster C.

I can use NetworkPolicies to ensure that the resources of the namespaces of project A (A1, A2, A3) can communicate with one another, as can the resources of the namespaces of project B (B1, B2), but there is no communication possible between, say, A1 and B2.
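
As a sketch of the single-cluster case: assuming each namespace carries a label such as `project: a` (this label is my assumption, not something GKE sets), a policy like the following in each project-A namespace admits ingress only from other project-A namespaces:

```yaml
# Sketch, assuming namespaces A1, A2, A3 are labeled "project: a".
# Applied in each project-A namespace; pods there accept ingress
# only from pods in namespaces that carry the same label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-project-a-only
spec:
  podSelector: {}          # all pods in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: a
```

Note this requires a cluster with NetworkPolicy enforcement enabled (e.g. GKE Dataplane V2 or the Calico add-on).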

Now, my question is: how can we make that possible across multiple clusters connected as above? So assume we have clusters C1, C2, C3; for project A we have namespaces A1_C1, A2_C1, A3_C2, A4_C3, A5_C3 (in the respective clusters) and for project B we have namespaces B1_C1, B2_C2, B3_C2, B4_C3.

How can I make it possible that all resources of the namespaces associated with project A can communicate (say, A1_C1 with A3_C2), and likewise for project B, while no communication is possible between projects, say between resources of A1_C1 and B1_C1 or B2_C2?

Is such a thing possible? If so, how?

When you say “project” I assume you don’t mean a GCP project but rather a group of Namespaces?

What you want, I think, is multi-cluster NetworkPolicy, which sadly is still just a proposal in Kubernetes.

You can use things like Istio or ASM (Anthos Service Mesh) to achieve this today, and we are looking at how to enable various forms of access control across clusters as a Kubernetes concept in the not-so-distant future.
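
To give a rough idea of the Istio route: in a multi-cluster mesh with a shared trust domain, workload identity (including the source namespace) travels across clusters, so an `AuthorizationPolicy` can admit callers by namespace regardless of which cluster they run in. A hedged sketch (namespace names are adapted from the question — Kubernetes namespaces cannot contain underscores, so A1_C1 becomes `a1-c1`, etc.):

```yaml
# Sketch, assuming a multi-cluster Istio/ASM mesh with a shared trust
# domain: workloads in namespace a1-c1 accept traffic only from
# project A's namespaces, wherever those namespaces are hosted.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: project-a-only
  namespace: a1-c1
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["a1-c1", "a2-c1", "a3-c2", "a4-c3", "a5-c3"]
```

With `action: ALLOW`, any request not matching a rule is denied, so cross-project traffic (e.g. from `b1-c1`) is rejected without a separate deny policy.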