GKE + Cloud NAT without private cluster

Hi all, I'm trying to use Cloud NAT to get a stable egress IP for outbound connections from my pods.

The cluster is not a private cluster, just a standard cluster on the default VPC network, but from the official docs it looks like it is still possible to NAT the pod traffic:

Regular (non-private) GKE clusters assign each node an external IP address, so such clusters cannot use Cloud NAT to send packets from the node’s primary interface. Pods can still use Cloud NAT if they send packets with source IP addresses set to the pod IP. ( Cloud NAT overview  |  Google Cloud )

So I gave it a try, but the pod egress traffic is not captured by Cloud NAT.
I was thinking that maybe this happens because Kubernetes masquerades the pod IPs to the node IP by default. So my idea is to enable network policy on the cluster and then add a custom masquerade config that disables masquerading for the pod network CIDR:

```yaml
nonMasqueradeCIDRs:
  - 10.44.0.0/14
resyncInterval: 60s
```
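For reference, the ip-masq-agent typically reads that config from a ConfigMap named `ip-masq-agent` in `kube-system`, under the `config` data key. A minimal sketch of what that ConfigMap might look like (the CIDR here is the pod range from the snippet above; adjust it to your cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # The agent expects this exact name/namespace when mounted the standard way.
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.44.0.0/14   # pod CIDR: traffic to/from these IPs keeps the pod source IP
    resyncInterval: 60s
```

This is only a sketch of the documented ConfigMap layout, not something I've verified end-to-end on this cluster version.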

I'm not sure this will work. Any advice, or alternative ideas for getting a stable IP for egress connections?

Thank you

Cluster information:

Kubernetes version: 1.11.8-gke.6
Cloud being used: GKE


Hi there. I'm stuck in the same situation. Could you please share any updates on your efforts to make it work?

Hi, I didn't solve it. We were on a strict schedule, so we changed the behaviour of our app to tolerate multi-IP egress, because Cloud NAT would not capture the traffic and custom solutions didn't give us the stability we needed.