Kubernetes session affinity after timeout doesn't maintain session for same client

I have exposed a service as NodePort in Kubernetes and applied session affinity to it. When the client connects to the service for the first time, session affinity kicks in and the session is maintained for that client, but after timeoutSeconds, when the client connects again, session affinity no longer applies, which leads to incorrect behaviour. I am using minikube to configure the cluster, running on an AWS EC2 instance; the k8s version is Major: "1", Minor: "18". Concretely, I have exposed the service web-svc on a NodePort, and the client connects to it through the AWS public IP, e.g. AWS-instance-ip:NodePort. The client first connects on the UDP port, sending a single packet for authorization. Once authorized, the pod that handled it, say POD-A, inserts some iptables rules to allow that particular client. Immediately after authorization, the client establishes a TCP connection with the service, but the problem is that this request is sent to another pod, say POD-B (I have replicas=3), and on POD-B the iptables rules for that client are not present.

When I delete my Deployment and Service altogether with kubectl delete -f my-deployment.yaml and then create them again with kubectl apply -f my-deployment.yaml, I get the expected behaviour: both the UDP and the TCP connection go to the same pod. But after the timeout set in sessionAffinityConfig, when the client connects again, the UDP and TCP connections no longer go to the same pod.

Kube-proxy mode is IPVS
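
For completeness, the active proxy mode can be confirmed from the node through kube-proxy's metrics endpoint (assuming the default metrics port 10249):

    # Prints "ipvs" when the IPVS proxier is active
    curl http://localhost:10249/proxyMode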

Service Snippet:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-svc
    spec:
      type: NodePort
      selector:
        app: web-app
      externalTrafficPolicy: Local
      sessionAffinity: "ClientIP"
      sessionAffinityConfig:
        clientIP: 
          timeoutSeconds: 240
      ports:
      - port: 30573
        nodePort: 30573
        name: tcp-30573
        protocol: TCP
      - port: 30375
        nodePort: 30375
        name: udp-30375
        protocol: UDP
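
Since ClientIP affinity under the IPVS proxier is implemented as IPVS persistence, the timeout should be visible in the IPVS table on the node; a quick sanity check, assuming ipvsadm is available there:

    # Virtual services: entries should carry "persistent 240"
    # while ClientIP affinity is in effect.
    sudo ipvsadm -Ln

    # Connection entries: shows which real server (pod IP)
    # a given client IP is currently mapped to.
    sudo ipvsadm -Lnc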

Hello, zuri_nahk
You need to define externalIPs.
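
A minimal sketch of that change, with a placeholder standing in for the EC2 instance's IP:

    spec:
      type: NodePort
      externalIPs:
      - 203.0.113.10   # placeholder: the instance's IP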

@tej-singh-rana But why is this happening, and why can't I do it with service type NodePort? As mentioned here, it should've worked with any service type, and why does it connect successfully the first time?

It seems to be working when I change the IPVS scheduling algorithm from round robin (rr) to source hashing (sh). Is this a feasible solution?

Not working with externalIPs.

Hi,
Can you please let me know how you solved the issue? I'm facing the same issue.

It seems to be working when I change the IPVS scheduling algorithm from round robin to source hashing.

Can you please let me know the procedure for doing the above config changes?

@YatinArora
Integration of IPVS in kube-proxy:

  • Check that the IPVS kernel modules are loaded

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

  • If IPVS is not installed, install the ipvsadm package

     sudo apt-get update
     sudo apt-get install ipvsadm
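
If the lsmod check above comes back empty, the kernel modules can also be loaded manually (a sketch; these are the modules the IPVS proxier typically needs, and on kernels 4.19+ nf_conntrack replaces nf_conntrack_ipv4):

    sudo modprobe ip_vs
    sudo modprobe ip_vs_rr
    sudo modprobe ip_vs_wrr
    sudo modprobe ip_vs_sh
    sudo modprobe nf_conntrack_ipv4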
    
  • Edit the configmap

kubectl edit configmap kube-proxy -n kube-system

  • Change the mode in the configmap from "" (empty) to ipvs

mode: ipvs

  • Change the scheduling algorithm from rr to sh (the scheduler field under the ipvs section; see the snippet below)
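
For reference, the relevant part of the kube-proxy configmap would look roughly like this (a sketch of the KubeProxyConfiguration fields; leaving scheduler empty falls back to rr):

    mode: "ipvs"
    ipvs:
      scheduler: "sh"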

  • Delete the existing kube-proxy pods so they are recreated with the new config

      kubectl get po -n kube-system
      kubectl delete po -n kube-system <pod-name>
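
Or, to remove them all at once (assuming the standard k8s-app=kube-proxy label):

    kubectl delete pods -n kube-system -l k8s-app=kube-proxy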
    
  • Check that the kube-proxy pods have restarted

kubectl get po -n kube-system

  • Verify kube-proxy started with the IPVS proxier

    kubectl logs [kube-proxy pod] -n kube-system | grep "Using ipvs Proxier"
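
The scheduler change can also be read straight from the IPVS table on the node; each virtual service entry lists its scheduler, which should now be sh rather than rr:

    # Each virtual service line shows its scheduler (e.g. "TCP ... sh")
    sudo ipvsadm -Ln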