Sharing one of my recent articles, where I explore how traffic flows from the internet to a container via Istio.
Let’s walk through the traffic flow from the internet to your application containers in GKE using Istio, covering how the traffic passes through components such as the NodePort, Istio Gateway, VirtualService, kube-proxy, Kubernetes Service, and the Envoy sidecar proxy before ultimately reaching the application.
Given the configuration provided for the Istio Gateway and VirtualServices, this flow applies to both frontend.mydomain.com and backend.mydomain.com.
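The exact manifests are in the article; as a rough sketch, the Gateway and one of the VirtualServices might look something like this (names, ports, and the TLS credential below are illustrative assumptions, not values from the article):

```yaml
# Illustrative sketch only; the real manifests are in the linked article.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: mydomain-gateway          # assumed name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway         # binds to the default Istio ingress gateway pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: MUTUAL              # mTLS at the gateway, as described in the flow
        credentialName: mydomain-tls   # assumed secret holding cert, key, and CA
      hosts:
        - frontend.mydomain.com
        - backend.mydomain.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend                  # assumed name
spec:
  hosts:
    - frontend.mydomain.com
  gateways:
    - istio-system/mydomain-gateway
  http:
    - route:
        - destination:
            host: frontend.default.svc.cluster.local  # assumed ClusterIP Service
            port:
              number: 80
```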
Traffic Flow
- Client Request: The client sends an HTTPS request to either frontend.mydomain.com or backend.mydomain.com.
- WAF: The request first passes through a Web Application Firewall, which protects against threats such as DDoS, SQL injection, and cross-site scripting.
- Cloud Load Balancer: Routes the traffic to the appropriate GKE node via a NodePort (see the ingress gateway Service sketch after this list).
- Istio Ingress Gateway: Handles mutual TLS (mTLS) authentication and decrypts the traffic.
- VirtualService: Based on the host (frontend.mydomain.com or backend.mydomain.com), the VirtualService routes the traffic to the corresponding Kubernetes service.
- Kube-proxy and Kubernetes Service: kube-proxy forwards the traffic from the ClusterIP Service to the appropriate application pod (see the Service sketch after this list).
- Envoy Sidecar: The Envoy proxy in the pod processes the request and forwards it to the application container.
- Application: The application processes the request and sends a response back, following the same path in reverse.
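To make the NodePort hop concrete: the istio-ingressgateway is typically exposed through a Service like the one below, and its node ports are what the Cloud Load Balancer targets. The port numbers here are illustrative assumptions, not values from the article.

```yaml
# Sketch of how the Istio ingress gateway is commonly exposed in GKE.
# Port numbers are illustrative; GKE assigns the real nodePort automatically.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer            # GKE provisions a Cloud Load Balancer for this Service
  selector:
    istio: ingressgateway
  ports:
    - name: https
      port: 443                 # port exposed on the load balancer
      targetPort: 8443          # Envoy's HTTPS listener in the gateway pod
      nodePort: 30443           # port opened on every GKE node (assumed value)
```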
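On the application side, a plain ClusterIP Service fronts the pods, while a namespace label tells Istio to inject the Envoy sidecar that handles the final hop inside the pod. The names and ports below are assumptions for illustration.

```yaml
# Illustrative sketch: sidecar injection plus the ClusterIP Service for the frontend app.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled    # Istio injects the Envoy sidecar into pods in this namespace
---
apiVersion: v1
kind: Service
metadata:
  name: frontend                # assumed name, matching the VirtualService destination
spec:
  type: ClusterIP               # kube-proxy load-balances this virtual IP across pod IPs
  selector:
    app: frontend
  ports:
    - name: http
      port: 80                  # port the VirtualService routes to
      targetPort: 8080          # assumed container port of the application
```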
The reverse traffic flow and more details are covered in my Medium article.