I have exposed a service as NodePort in Kubernetes and applied session affinity to that service. When a client connects to the service for the first time, session affinity comes into action and the session sticks to one pod, but after timeoutSeconds, when the client connects again, session affinity no longer applies, which leads to incorrect behaviour. I am using minikube to run the cluster on an AWS EC2 instance; the Kubernetes version is Major:"1", Minor:"18".

Concretely, I have exposed the service web-svc on a node port, and the client connects to it through the instance's public IP, e.g. AWS-instance-ip:NodePort. The client first connects on the UDP port, sending a single packet for authorization. Once the client is authorized, the pod that handled the packet, say POD-A, inserts some iptables rules to allow that particular client. Immediately after authorization the client establishes a TCP connection to the service, but the problem is that this request is sent to another pod, say POD-B (I have replicas: 3), and POD-B does not have the iptables rules for that client.
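From the client's side the sequence looks roughly like the following sketch (ncat from the nmap package; "AUTH" is a stand-in for my real authorization payload):

echo "AUTH" | ncat -u -w1 AWS-instance-ip 30375   # UDP authorization packet
ncat AWS-instance-ip 30573                        # TCP connection: should hit the same pod
sleep 300                                         # wait past timeoutSeconds: 240
echo "AUTH" | ncat -u -w1 AWS-instance-ip 30375   # re-authorize
ncat AWS-instance-ip 30573                        # now often lands on a different pod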
When I delete my Deployment and Service altogether with kubectl delete -f my-deployment.yaml and then create them again with kubectl apply -f my-deployment.yaml, I get the expected behaviour: both the UDP and the TCP connection go to the same pod. But after the timeout set in sessionAffinity, when the client connects again, the UDP and TCP connections no longer go to the same pod.
Kube-proxy is running in IPVS mode.
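This can be confirmed, and the affinity entries inspected, on the node itself; a rough sketch, assuming ipvsadm is installed in the minikube VM and kube-proxy's metrics endpoint is at its default 127.0.0.1:10249:

minikube ssh
curl http://localhost:10249/proxyMode   # should print: ipvs
sudo ipvsadm -Ln                        # virtual servers; affinity shows as "persistent 240"
sudo ipvsadm -Lnc                       # connection entries, incl. per-client persistence templates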
Service Snippet:
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web-app
  externalTrafficPolicy: Local
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 240
  ports:
    - port: 30573
      nodePort: 30573
      name: tcp-30573
      protocol: TCP
    - port: 30375
      nodePort: 30375
      name: udp-30375
      protocol: UDP
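To see which replica handles each connection, one way is to tail all pods behind the service selector; kubectl's --prefix flag tags every log line with the pod name, which makes it obvious whether the UDP authorization and the follow-up TCP connection land on the same pod:

kubectl get pods -l app=web-app -o wide   # pod IPs should match the IPVS real-server entries
kubectl logs -l app=web-app --prefix -f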