Pods running on the same worker node cannot communicate with each other via Service

I have a cluster with 2 master nodes and 3 worker nodes. One of the worker nodes (named orange) crashed and rebooted, and after the reboot the pods on that node have networking issues:

  1. Pods from node orange CAN NOT communicate with each other via a Service of type ClusterIP.
  2. Pods from other nodes CAN communicate with pods running on the orange node via Service (type = ClusterIP).
  3. Pods from node orange CAN communicate with other pods also running on the orange node via Pod IP.
  4. Pods from other nodes CAN communicate with pods running on the orange node via Service (type = ClusterIP).

I have followed the instructions from Debug Services, but found nothing unusual: the Service works fine, the endpoints are OK, and the coredns/flannel pods show no special errors in their logs. Does anyone know how to debug this networking issue?
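For reference, these are roughly the checks I ran from that doc (a sketch, not the exact commands; `<debug-pod>` is a placeholder, and ishare-payment is one of my services used as an example):

# Service exists and has a ClusterIP
kubectl get svc ishare-payment -n ishare
# Endpoints are populated with pod IPs
kubectl get endpoints ishare-payment -n ishare
# DNS resolves from inside a pod
kubectl exec -it <debug-pod> -- nslookup ishare-payment.ishare.svc.cluster.local
# CoreDNS logs show no errors
kubectl logs -n kube-system -l k8s-app=kube-dns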

Cluster information:

Kubernetes version: 1.29.2
Installation method: kubeadm
Host OS: Ubuntu 22.04.5
CNI and version: flannel 1.4.0
CRI and version: containerd 1.7.24

Your points #2 and #4 are the same - is that intentional?

It sounds like kube-proxy is not running on the orange node.

Yes, #4 is a duplicate; just ignore it.

I have double-checked that kube-proxy is running on the orange node without errors.

I have tried the following solutions:

  • Restarted the kube-proxy and kube-flannel pods on the orange node (see the sketch below), but no luck.
  • Evicted the orange node from the cluster and added another new node, but the issue also exists on the new node. That’s weird…
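
For reference, the restarts were done roughly like this (a sketch; the label selectors are assumed from the standard kube-proxy and flannel manifests, and the DaemonSets recreate the deleted pods):

# Delete the kube-proxy pod on the orange node; its DaemonSet recreates it
kubectl -n kube-system delete pod -l k8s-app=kube-proxy --field-selector spec.nodeName=hs-orange
# Same for the flannel pod
kubectl -n kube-flannel delete pod -l app=flannel --field-selector spec.nodeName=hs-orange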

You say you followed Debug Services | Kubernetes and “the Service works fine, the endpoints are OK” but also say “Pods from node orange CAN NOT communicate with each other via a Service of type ClusterIP”. These statements can’t both be true.

From the “orange” node (or from orange and another node at the same time), I would run through that doc to find where it breaks down.

I’m assuming you are trying to connect from a pod to the service with myservice.svc.cluster.local.
What does your /etc/resolv.conf look like?
Does it have svc.cluster.local in it?

First of all, thanks for your help.

Here are my investigation steps:

Environment

Cluster nodes

root@hs-red:~# k get nodes -o wide
NAME        STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
hs-blue     Ready    <none>          264d   v1.29.2   172.22.133.249   <none>        Ubuntu 22.04.5 LTS   5.15.0-92-generic    containerd://1.7.24
hs-green    Ready    control-plane   264d   v1.29.2   172.22.133.247   <none>        Ubuntu 22.04.5 LTS   5.15.0-92-generic    containerd://1.7.24
hs-orange   Ready    <none>          11h    v1.29.2   172.22.133.248   <none>        Ubuntu 22.04.5 LTS   5.15.0-92-generic    containerd://1.7.24
hs-purple   Ready    <none>          12h    v1.29.2   172.22.134.9     <none>        Ubuntu 22.04.5 LTS   5.15.0-122-generic   containerd://1.7.24
hs-red      Ready    control-plane   264d   v1.29.2   172.22.133.246   <none>        Ubuntu 22.04.5 LTS   5.15.0-92-generic    containerd://1.7.24
hs-yellow   Ready    <none>          264d   v1.29.2   172.22.133.250   <none>        Ubuntu 22.04.5 LTS   5.15.0-92-generic    containerd://1.7.24

NOTES:

  • hs-orange crashed yesterday; after the networking issue appeared, I evicted it from the cluster and re-joined it (commands sketched below).
  • hs-purple is a new node that joined after hs-orange crashed.
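
The evict / re-join was roughly the following (a sketch, assuming kubeadm defaults; the <…> values are placeholders):

# On a control-plane node: drain and remove the broken node
kubectl drain hs-orange --ignore-daemonsets --delete-emptydir-data
kubectl delete node hs-orange
# Print a fresh join command (on a control-plane node)
kubeadm token create --print-join-command
# On hs-orange: wipe the old state and re-join
kubeadm reset
kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>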

Pods in kube-system ns

root@hs-red:~# k get pod -n kube-system -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
coredns-5b6484477b-nhz2d           1/1     Running   0          24h    10.244.1.58      hs-green    <none>           <none>
coredns-5b6484477b-ptfsm           1/1     Running   0          24h    10.244.3.53      hs-blue     <none>           <none>
csi-plugin-4fw7z                   4/4     Running   0          264d   172.22.133.250   hs-yellow   <none>           <none>
csi-plugin-4n4x8                   4/4     Running   0          264d   172.22.133.249   hs-blue     <none>           <none>
csi-plugin-6qkb7                   4/4     Running   0          11h    172.22.133.248   hs-orange   <none>           <none>
csi-plugin-bxj6c                   4/4     Running   0          12h    172.22.134.9     hs-purple   <none>           <none>
csi-plugin-g8c47                   4/4     Running   0          264d   172.22.133.247   hs-green    <none>           <none>
csi-plugin-mvvn8                   4/4     Running   0          264d   172.22.133.246   hs-red      <none>           <none>
csi-provisioner-5964c597d5-4kfvp   9/9     Running   0          33h    10.244.0.35      hs-red      <none>           <none>
csi-provisioner-5964c597d5-dkczr   9/9     Running   0          264d   10.244.1.32      hs-green    <none>           <none>
etcd-hs-green                      1/1     Running   0          264d   172.22.133.247   hs-green    <none>           <none>
etcd-hs-red                        1/1     Running   4          264d   172.22.133.246   hs-red      <none>           <none>
kube-apiserver-hs-green            1/1     Running   4          264d   172.22.133.247   hs-green    <none>           <none>
kube-apiserver-hs-red              1/1     Running   4          264d   172.22.133.246   hs-red      <none>           <none>
kube-controller-manager-hs-green   1/1     Running   2          264d   172.22.133.247   hs-green    <none>           <none>
kube-controller-manager-hs-red     1/1     Running   4          264d   172.22.133.246   hs-red      <none>           <none>
kube-proxy-8mzzh                   1/1     Running   0          12h    172.22.134.9     hs-purple   <none>           <none>
kube-proxy-9rm6j                   1/1     Running   0          20h    172.22.133.249   hs-blue     <none>           <none>
kube-proxy-bv925                   1/1     Running   0          11h    172.22.133.248   hs-orange   <none>           <none>
kube-proxy-jdvjr                   1/1     Running   0          19h    172.22.133.246   hs-red      <none>           <none>
kube-proxy-qh6lf                   1/1     Running   0          20h    172.22.133.250   hs-yellow   <none>           <none>
kube-proxy-s2nhv                   1/1     Running   0          20h    172.22.133.247   hs-green    <none>           <none>
kube-scheduler-hs-green            1/1     Running   2          264d   172.22.133.247   hs-green    <none>           <none>
kube-scheduler-hs-red              1/1     Running   4          264d   172.22.133.246   hs-red      <none>           <none>
metrics-server-5f5fc55fd-n25qs     1/1     Running   0          264d   10.244.3.13      hs-blue     <none>           <none>

Pods in kube-flannel ns

root@hs-red:~# k get pod -n kube-flannel -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
kube-flannel-ds-4c7pr   1/1     Running   0          264d   172.22.133.249   hs-blue     <none>           <none>
kube-flannel-ds-4x6rg   1/1     Running   0          264d   172.22.133.246   hs-red      <none>           <none>
kube-flannel-ds-5j974   1/1     Running   0          11h    172.22.133.248   hs-orange   <none>           <none>
kube-flannel-ds-84rws   1/1     Running   0          264d   172.22.133.250   hs-yellow   <none>           <none>
kube-flannel-ds-jlnv9   1/1     Running   0          264d   172.22.133.247   hs-green    <none>           <none>
kube-flannel-ds-plsfm   1/1     Running   0          12h    172.22.134.9     hs-purple   <none>           <none>

Application pods

I created 3 network debug pods on different nodes (hs-orange, hs-purple, hs-blue):

apiVersion: v1
kind: Pod
metadata:
  name: nettools-purple
spec:
  nodeName: hs-purple
  containers:
  - name: nettools
    image: enix223/alpine-go:3.19.0
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  name: nettools-orange
spec:
  nodeName: hs-orange
  containers:
  - name: nettools
    image: enix223/alpine-go:3.19.0
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: nettools-blue
spec:
  nodeName: hs-blue
  containers:
  - name: nettools
    image: enix223/alpine-go:3.19.0
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent

ishare@hs-red:/var/app/ishare$ k get po -o wide
NAME                                     READY   STATUS    RESTARTS      AGE    IP            NODE        NOMINATED NODE   READINESS GATES
ishare-authentication-7d4bff5bff-44qkf   1/1     Running   7 (12h ago)   34h    10.244.3.48   hs-blue     <none>           <none>
ishare-authentication-7d4bff5bff-zdc7s   1/1     Running   3 (12h ago)   34h    10.244.1.57   hs-green    <none>           <none>
ishare-device-7d46c4dc69-8mjbj           1/1     Running   0             34h    10.244.0.33   hs-red      <none>           <none>
ishare-device-7d46c4dc69-s5cb2           1/1     Running   0             34h    10.244.4.56   hs-yellow   <none>           <none>
ishare-generic-654f749d8-q8pbm           1/1     Running   0             33h    10.244.3.52   hs-blue     <none>           <none>
ishare-payment-6fcdd78f6c-tf42q          1/1     Running   0             11h    10.244.5.23   hs-purple   <none>           <none>
ishare-report-77db86fc99-scfxh           1/1     Running   0             33h    10.244.4.58   hs-yellow   <none>           <none>
ishare-service-desk-7c7f46d4f6-pw6k9     1/1     Running   0             11h    10.244.5.21   hs-purple   <none>           <none>
mysql-primary-0                          1/1     Running   0             11h    10.244.8.11   hs-orange   <none>           <none>
mysql-secondary-0                        1/1     Running   0             263d   10.244.4.27   hs-yellow   <none>           <none>
nettools-blue                            1/1     Running   0             49s    10.244.3.55   hs-blue     <none>           <none>
nettools-orange                          1/1     Running   0             49s    10.244.8.13   hs-orange   <none>           <none>
nettools-purple                          1/1     Running   0             49s    10.244.5.28   hs-purple   <none>           <none>
rabbitmq-0                               1/1     Running   0             264d   10.244.3.16   hs-blue     <none>           <none>
rabbitmq-1                               1/1     Running   0             263d   10.244.4.30   hs-yellow   <none>           <none>
redis-master-0                           1/1     Running   0             11h    10.244.8.10   hs-orange   <none>           <none>
redis-replicas-0                         1/1     Running   0             11h    10.244.8.9    hs-orange   <none>           <none>
redis-replicas-1                         1/1     Running   3 (11h ago)   12h    10.244.3.54   hs-blue     <none>           <none>
tdengine-0                               1/1     Running   0             264d   10.244.3.19   hs-blue     <none>           <none>
tdengine-1                               1/1     Running   0             11h    10.244.8.8    hs-orange   <none>           <none>
tdengine-2                               1/1     Running   0             263d   10.244.4.26   hs-yellow   <none>           <none>

Service

ishare@hs-red:/var/app/ishare$ k get svc -o wide
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                 AGE    SELECTOR
ishare-authentication      ClusterIP   10.97.110.218    <none>        8001/TCP                                263d   app=ishare-authentication
ishare-device              ClusterIP   10.102.197.192   <none>        8004/TCP                                263d   app=ishare-device
ishare-generic             ClusterIP   10.104.151.34    <none>        8005/TCP                                263d   app=ishare-generic
ishare-payment             ClusterIP   10.101.228.233   <none>        8009/TCP                                33h    app=ishare-payment
ishare-report              ClusterIP   10.98.6.208      <none>        8008/TCP                                263d   app=ishare-report
ishare-service-desk        ClusterIP   10.96.152.98     <none>        8006/TCP                                263d   app=ishare-service-desk
mysql-primary              ClusterIP   10.103.20.68     <none>        3306/TCP                                32h    app.kubernetes.io/component=primary,app.kubernetes.io/instance=mysql,app.kubernetes.io/name=mysql
mysql-primary-headless     ClusterIP   None             <none>        3306/TCP                                264d   app.kubernetes.io/component=primary,app.kubernetes.io/instance=mysql,app.kubernetes.io/name=mysql
mysql-secondary            ClusterIP   10.109.49.125    <none>        3306/TCP                                264d   app.kubernetes.io/component=secondary,app.kubernetes.io/instance=mysql,app.kubernetes.io/name=mysql
mysql-secondary-headless   ClusterIP   None             <none>        3306/TCP                                264d   app.kubernetes.io/component=secondary,app.kubernetes.io/instance=mysql,app.kubernetes.io/name=mysql
rabbitmq                   ClusterIP   10.99.215.167    <none>        5672/TCP,4369/TCP,25672/TCP,15672/TCP   264d   app.kubernetes.io/instance=rabbitmq,app.kubernetes.io/name=rabbitmq
rabbitmq-headless          ClusterIP   None             <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   264d   app.kubernetes.io/instance=rabbitmq,app.kubernetes.io/name=rabbitmq
redis-headless             ClusterIP   None             <none>        6379/TCP                                264d   app.kubernetes.io/instance=redis,app.kubernetes.io/name=redis
redis-master               ClusterIP   10.99.59.22      <none>        6379/TCP                                32h    app.kubernetes.io/component=master,app.kubernetes.io/instance=redis,app.kubernetes.io/name=redis
redis-replicas             ClusterIP   10.97.167.148    <none>        6379/TCP                                264d   app.kubernetes.io/component=replica,app.kubernetes.io/instance=redis,app.kubernetes.io/name=redis
tdengine                   ClusterIP   10.100.144.190   <none>        6030/TCP,6041/TCP                       264d   app=tdengine

Service endpoints

ishare@hs-red:/var/app/ishare$ k get ep -o wide
NAME                       ENDPOINTS                                                         AGE
ishare-authentication      10.244.1.57:8001,10.244.3.48:8001                                 263d
ishare-device              10.244.0.33:8004,10.244.4.56:8004                                 263d
ishare-generic             10.244.3.52:8005                                                  263d
ishare-payment             10.244.5.23:8009                                                  33h
ishare-report              10.244.4.58:8008                                                  263d
ishare-service-desk        10.244.5.21:8006                                                  263d
mysql-primary              10.244.8.11:3306                                                  32h
mysql-primary-headless     10.244.8.11:3306                                                  264d
mysql-secondary            10.244.4.27:3306                                                  264d
mysql-secondary-headless   10.244.4.27:3306                                                  264d
rabbitmq                   10.244.3.16:5672,10.244.4.30:5672,10.244.3.16:15672 + 5 more...   264d
rabbitmq-headless          10.244.3.16:5672,10.244.4.30:5672,10.244.3.16:15672 + 5 more...   264d
redis-headless             10.244.3.54:6379,10.244.8.10:6379,10.244.8.9:6379                 264d
redis-master               10.244.8.10:6379                                                  32h
redis-replicas             10.244.3.54:6379,10.244.8.9:6379                                  264d
tdengine                   10.244.3.19:6041,10.244.4.26:6041,10.244.8.8:6041 + 3 more...     264d

Test pod communication

Test preparation

  • Both the nettools-purple and ishare-payment-6fcdd78f6c-tf42q pods run on the hs-purple node

  • nettools-blue runs on the hs-blue node, and ishare-payment-6fcdd78f6c-tf42q runs on the hs-purple node

  • DNS Setup in debug pods:

    ishare@hs-red:/var/app/debug$ k exec -it nettools-purple -- bash -c 'cat /etc/resolv.conf'
    search ishare.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    ishare@hs-red:/var/app/debug$ k exec -it nettools-orange -- bash -c 'cat /etc/resolv.conf'
    search ishare.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    ishare@hs-red:/var/app/debug$ k exec -it nettools-blue -- bash -c 'cat /etc/resolv.conf'
    search ishare.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    
  • Service setup

    ishare@hs-red:/var/app/debug$ k get svc ishare-payment -o yaml
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"ishare-payment","k8s-app":"ishare","scope":"backend"},"name":"ishare-payment","namespace":"ishare"},"spec":{"ports":[{"name":"http-port","port":8009,"protocol":"TCP"}],"selector":{"app":"ishare-payment"}}}
      creationTimestamp: "2024-12-10T16:58:48Z"
      labels:
        app: ishare-payment
        k8s-app: ishare
        scope: backend
      name: ishare-payment
      namespace: ishare
      resourceVersion: "82655844"
      uid: e599b17e-2aeb-48cc-9e46-5ea469e06b08
    spec:
      clusterIP: 10.101.228.233
      clusterIPs:
      - 10.101.228.233
      internalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: http-port
        port: 8009
        protocol: TCP
        targetPort: 8009
      selector:
        app: ishare-payment
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
    
  • kube-proxy logs from hs-purple (using iptables mode)

    root@hs-red:~# k logs -n kube-system kube-proxy-8mzzh 
    I1211 13:49:53.699115       1 server_others.go:72] "Using iptables proxy"
    I1211 13:49:53.725967       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.22.134.9"]
    I1211 13:49:53.727544       1 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_max" value=131072
    I1211 13:49:53.727578       1 conntrack.go:58] "Setting nf_conntrack_max" nfConntrackMax=131072
    I1211 13:49:53.727645       1 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_established" value=86400
    I1211 13:49:53.727696       1 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600
    I1211 13:49:53.742707       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
    I1211 13:49:53.742736       1 server_others.go:168] "Using iptables Proxier"
    I1211 13:49:53.744615       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
    I1211 13:49:53.744629       1 server_others.go:529] "Defaulting to no-op detect-local"
    I1211 13:49:53.744657       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
    I1211 13:49:53.744876       1 server.go:865] "Version info" version="v1.29.2"
    I1211 13:49:53.744890       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
    I1211 13:49:53.745808       1 config.go:188] "Starting service config controller"
    I1211 13:49:53.746573       1 shared_informer.go:311] Waiting for caches to sync for service config
    I1211 13:49:53.746218       1 config.go:315] "Starting node config controller"
    I1211 13:49:53.746780       1 shared_informer.go:311] Waiting for caches to sync for node config
    I1211 13:49:53.746256       1 config.go:97] "Starting endpoint slice config controller"
    I1211 13:49:53.746797       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
    I1211 13:49:53.847522       1 shared_informer.go:318] Caches are synced for endpoint slice config
    I1211 13:49:53.847529       1 shared_informer.go:318] Caches are synced for service config
    I1211 13:49:53.847538       1 shared_informer.go:318] Caches are synced for node config
    
  • iptables-save output from hs-purple

    root@hs-purple:~# iptables-save 
    # Generated by iptables-save v1.8.7 on Thu Dec 12 10:59:03 2024
    *mangle
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    :KUBE-IPTABLES-HINT - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    COMMIT
    # Completed on Thu Dec 12 10:59:03 2024
    # Generated by iptables-save v1.8.7 on Thu Dec 12 10:59:03 2024
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT ACCEPT [0:0]
    :DOCKER - [0:0]
    :DOCKER-ISOLATION-STAGE-1 - [0:0]
    :DOCKER-ISOLATION-STAGE-2 - [0:0]
    :DOCKER-USER - [0:0]
    :FLANNEL-FWD - [0:0]
    :KUBE-EXTERNAL-SERVICES - [0:0]
    :KUBE-FIREWALL - [0:0]
    :KUBE-FORWARD - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-NODEPORTS - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-PROXY-FIREWALL - [0:0]
    :KUBE-SERVICES - [0:0]
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
    -A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
    -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
    -A INPUT -j KUBE-FIREWALL
    -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
    -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
    -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
    -A FORWARD -j DOCKER-USER
    -A FORWARD -j DOCKER-ISOLATION-STAGE-1
    -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o docker0 -j DOCKER
    -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
    -A FORWARD -i docker0 -o docker0 -j ACCEPT
    -A FORWARD -o br-e2d8afa1ea4c -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -o br-e2d8afa1ea4c -j DOCKER
    -A FORWARD -i br-e2d8afa1ea4c ! -o br-e2d8afa1ea4c -j ACCEPT
    -A FORWARD -i br-e2d8afa1ea4c -o br-e2d8afa1ea4c -j ACCEPT
    -A FORWARD -m comment --comment "flanneld forward" -j FLANNEL-FWD
    -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
    -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT -j KUBE-FIREWALL
    -A DOCKER -d 172.18.0.2/32 ! -i br-e2d8afa1ea4c -o br-e2d8afa1ea4c -p tcp -m tcp --dport 8000 -j ACCEPT
    -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
    -A DOCKER-ISOLATION-STAGE-1 -i br-e2d8afa1ea4c ! -o br-e2d8afa1ea4c -j DOCKER-ISOLATION-STAGE-2
    -A DOCKER-ISOLATION-STAGE-1 -j RETURN
    -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
    -A DOCKER-ISOLATION-STAGE-2 -o br-e2d8afa1ea4c -j DROP
    -A DOCKER-ISOLATION-STAGE-2 -j RETURN
    -A DOCKER-USER -j RETURN
    -A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
    -A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
    -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
    -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
    -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT
    -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    COMMIT
    # Completed on Thu Dec 12 10:59:03 2024
    # Generated by iptables-save v1.8.7 on Thu Dec 12 10:59:03 2024
    *nat
    :PREROUTING ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    :DOCKER - [0:0]
    :FLANNEL-POSTRTG - [0:0]
    :KUBE-EXT-CG5I4G2RS3ZVWGLK - [0:0]
    :KUBE-EXT-EDNDUDH2C75GIR6O - [0:0]
    :KUBE-KUBELET-CANARY - [0:0]
    :KUBE-MARK-MASQ - [0:0]
    :KUBE-NODEPORTS - [0:0]
    :KUBE-POSTROUTING - [0:0]
    :KUBE-PROXY-CANARY - [0:0]
    :KUBE-SEP-23QMUZO34QB543PV - [0:0]
    :KUBE-SEP-2AKMKIN6UXCGPSIT - [0:0]
    :KUBE-SEP-2ETQ6YRBFHOP6RWH - [0:0]
    :KUBE-SEP-3NJX7UWK2XR6D7T2 - [0:0]
    :KUBE-SEP-3SQXK76CZKJH6HMR - [0:0]
    :KUBE-SEP-4GRPFVPOMUQIZBG7 - [0:0]
    :KUBE-SEP-52YKHRQ5LIBEU4SK - [0:0]
    :KUBE-SEP-53IF7WEBFRCU3XBV - [0:0]
    :KUBE-SEP-5M7IZ6GNSLNN4UYI - [0:0]
    :KUBE-SEP-6M2BJLQLMJYJH3IL - [0:0]
    :KUBE-SEP-A3PKFORORSGO3X6P - [0:0]
    :KUBE-SEP-ABJFV2KETYMQW2T3 - [0:0]
    :KUBE-SEP-AF2IEXFIFGILDOMA - [0:0]
    :KUBE-SEP-BYZQZ474W2WKSJWB - [0:0]
    :KUBE-SEP-CKVE2CLCR6PMYQRS - [0:0]
    :KUBE-SEP-DBIHFG6JQWPMNRPV - [0:0]
    :KUBE-SEP-FWVUFG74L33JGL4N - [0:0]
    :KUBE-SEP-GNJ6UONPSHKX42CQ - [0:0]
    :KUBE-SEP-GYP2J5ZBCTPVZLQU - [0:0]
    :KUBE-SEP-GYPSBZU5PWL5Q4SA - [0:0]
    :KUBE-SEP-H6AHBZAN5GUYQQH4 - [0:0]
    :KUBE-SEP-I2CRNUBK3EIYJKBR - [0:0]
    :KUBE-SEP-J4F57GIY2L4IFBBC - [0:0]
    :KUBE-SEP-LJYPPBUDL34Y5M64 - [0:0]
    :KUBE-SEP-M25LFUXNV4BGLBYM - [0:0]
    :KUBE-SEP-MDKSB4XXNKCY6ZDU - [0:0]
    :KUBE-SEP-MXRZBQNCU7KNR4TO - [0:0]
    :KUBE-SEP-NNT5TGOFWGEP6HG2 - [0:0]
    :KUBE-SEP-OQ77EA3T2F2AQDCR - [0:0]
    :KUBE-SEP-OZE5UHYY2DV5MV3S - [0:0]
    :KUBE-SEP-PNFALMF2KZLYSQR2 - [0:0]
    :KUBE-SEP-RIASCGFSVB4YD4L6 - [0:0]
    :KUBE-SEP-RR5JKKVQXOPBH7TR - [0:0]
    :KUBE-SEP-SG3ESIDSSG2KUW7I - [0:0]
    :KUBE-SEP-TUVO3SSFGJGVLTOK - [0:0]
    :KUBE-SEP-UXA2BORWULZ3QPTK - [0:0]
    :KUBE-SEP-V2AMTJ7AIOJFBJWI - [0:0]
    :KUBE-SEP-VXEL2XCQPAU37WDR - [0:0]
    :KUBE-SEP-W5JAMJZJI7HJMYT7 - [0:0]
    :KUBE-SEP-XAT6W57LDBZGXZHC - [0:0]
    :KUBE-SEP-XW5C57KTUJ7ZU2YX - [0:0]
    :KUBE-SEP-Z4YCKRJHHWCLOMZQ - [0:0]
    :KUBE-SERVICES - [0:0]
    :KUBE-SVC-5AIGOQAWHQWJGK3F - [0:0]
    :KUBE-SVC-72ZXZK3WL2NO2DIF - [0:0]
    :KUBE-SVC-7EJ2UJO7JY5Y4RI7 - [0:0]
    :KUBE-SVC-A3U4QUBRTPAXX6KF - [0:0]
    :KUBE-SVC-BNQW7IJZ3AABAZQM - [0:0]
    :KUBE-SVC-CG5I4G2RS3ZVWGLK - [0:0]
    :KUBE-SVC-DLLTBCJO5P4IMZYT - [0:0]
    :KUBE-SVC-E6DCLSPEF4PFLSHX - [0:0]
    :KUBE-SVC-EDNDUDH2C75GIR6O - [0:0]
    :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
    :KUBE-SVC-EZYNCFY2F7N6OQA2 - [0:0]
    :KUBE-SVC-FVKSXUFSB7ZOP2CX - [0:0]
    :KUBE-SVC-HQF4EG425QBR63QW - [0:0]
    :KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
    :KUBE-SVC-KHB3CZCDQCMYK6SK - [0:0]
    :KUBE-SVC-KOTPX4FFBKJ3EAPA - [0:0]
    :KUBE-SVC-NMX4ZDKZALE5KEVE - [0:0]
    :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
    :KUBE-SVC-P6EDUNCH36MVSA7G - [0:0]
    :KUBE-SVC-SWIWJ3IRZBO72NNS - [0:0]
    :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
    :KUBE-SVC-XGTLGVHTHRGIX6Y3 - [0:0]
    :KUBE-SVC-Y2JLVXWGMJNEWTZD - [0:0]
    :KUBE-SVC-Z4ANX4WAEWEBLCTM - [0:0]
    -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
    -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    -A POSTROUTING -s 172.18.0.0/16 ! -o br-e2d8afa1ea4c -j MASQUERADE
    -A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p tcp -m tcp --dport 8000 -j MASQUERADE
    -A POSTROUTING -m comment --comment "flanneld masq" -j FLANNEL-POSTRTG
    -A DOCKER -i docker0 -j RETURN
    -A DOCKER -i br-e2d8afa1ea4c -j RETURN
    -A DOCKER ! -i br-e2d8afa1ea4c -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:8000
    -A FLANNEL-POSTRTG -m comment --comment "flanneld masq" -j RETURN
    -A FLANNEL-POSTRTG -s 10.244.5.0/24 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j RETURN
    -A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.5.0/24 -m comment --comment "flanneld masq" -j RETURN
    -A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.5.0/24 -m comment --comment "flanneld masq" -j RETURN
    -A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
    -A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment "flanneld masq" -j MASQUERADE --random-fully
    -A KUBE-EXT-CG5I4G2RS3ZVWGLK -m comment --comment "masquerade traffic for ingress-nginx/ingress-nginx-controller:http external destinations" -j KUBE-MARK-MASQ
    -A KUBE-EXT-CG5I4G2RS3ZVWGLK -j KUBE-SVC-CG5I4G2RS3ZVWGLK
    -A KUBE-EXT-EDNDUDH2C75GIR6O -m comment --comment "masquerade traffic for ingress-nginx/ingress-nginx-controller:https external destinations" -j KUBE-MARK-MASQ
    -A KUBE-EXT-EDNDUDH2C75GIR6O -j KUBE-SVC-EDNDUDH2C75GIR6O
    -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    -A KUBE-NODEPORTS -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -j KUBE-EXT-CG5I4G2RS3ZVWGLK
    -A KUBE-NODEPORTS -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https" -j KUBE-EXT-EDNDUDH2C75GIR6O
    -A KUBE-POSTROUTING -j RETURN
    -A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
    -A KUBE-SEP-23QMUZO34QB543PV -s 10.244.4.27/32 -m comment --comment "ishare/mysql-secondary:mysql" -j KUBE-MARK-MASQ
    -A KUBE-SEP-23QMUZO34QB543PV -p tcp -m comment --comment "ishare/mysql-secondary:mysql" -m tcp -j DNAT --to-destination 10.244.4.27:3306
    -A KUBE-SEP-2AKMKIN6UXCGPSIT -s 10.244.8.8/32 -m comment --comment "ishare/tdengine:tcp6030" -j KUBE-MARK-MASQ
    -A KUBE-SEP-2AKMKIN6UXCGPSIT -p tcp -m comment --comment "ishare/tdengine:tcp6030" -m tcp -j DNAT --to-destination 10.244.8.8:6030
    -A KUBE-SEP-2ETQ6YRBFHOP6RWH -s 10.244.4.58/32 -m comment --comment "ishare/ishare-report:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-2ETQ6YRBFHOP6RWH -p tcp -m comment --comment "ishare/ishare-report:http-port" -m tcp -j DNAT --to-destination 10.244.4.58:8008
    -A KUBE-SEP-3NJX7UWK2XR6D7T2 -s 10.244.0.12/32 -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook" -j KUBE-MARK-MASQ
    -A KUBE-SEP-3NJX7UWK2XR6D7T2 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook" -m tcp -j DNAT --to-destination 10.244.0.12:8443
    -A KUBE-SEP-3SQXK76CZKJH6HMR -s 10.244.3.50/32 -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook" -j KUBE-MARK-MASQ
    -A KUBE-SEP-3SQXK76CZKJH6HMR -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook" -m tcp -j DNAT --to-destination 10.244.3.50:8443
    -A KUBE-SEP-4GRPFVPOMUQIZBG7 -s 10.244.0.33/32 -m comment --comment "ishare/ishare-device:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-4GRPFVPOMUQIZBG7 -p tcp -m comment --comment "ishare/ishare-device:http-port" -m tcp -j DNAT --to-destination 10.244.0.33:8004
    -A KUBE-SEP-52YKHRQ5LIBEU4SK -s 10.244.4.26/32 -m comment --comment "ishare/tdengine:tcp6030" -j KUBE-MARK-MASQ
    -A KUBE-SEP-52YKHRQ5LIBEU4SK -p tcp -m comment --comment "ishare/tdengine:tcp6030" -m tcp -j DNAT --to-destination 10.244.4.26:6030
    -A KUBE-SEP-53IF7WEBFRCU3XBV -s 10.244.8.9/32 -m comment --comment "ishare/redis-replicas:tcp-redis" -j KUBE-MARK-MASQ
    -A KUBE-SEP-53IF7WEBFRCU3XBV -p tcp -m comment --comment "ishare/redis-replicas:tcp-redis" -m tcp -j DNAT --to-destination 10.244.8.9:6379
    -A KUBE-SEP-5M7IZ6GNSLNN4UYI -s 10.244.5.23/32 -m comment --comment "ishare/ishare-payment:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-5M7IZ6GNSLNN4UYI -p tcp -m comment --comment "ishare/ishare-payment:http-port" -m tcp -j DNAT --to-destination 10.244.5.23:8009
    -A KUBE-SEP-6M2BJLQLMJYJH3IL -s 10.244.4.30/32 -m comment --comment "ishare/rabbitmq:dist" -j KUBE-MARK-MASQ
    -A KUBE-SEP-6M2BJLQLMJYJH3IL -p tcp -m comment --comment "ishare/rabbitmq:dist" -m tcp -j DNAT --to-destination 10.244.4.30:25672
    -A KUBE-SEP-A3PKFORORSGO3X6P -s 10.244.0.12/32 -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -j KUBE-MARK-MASQ
    -A KUBE-SEP-A3PKFORORSGO3X6P -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -m tcp -j DNAT --to-destination 10.244.0.12:80
    -A KUBE-SEP-ABJFV2KETYMQW2T3 -s 10.244.3.13/32 -m comment --comment "kube-system/metrics-server:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-ABJFV2KETYMQW2T3 -p tcp -m comment --comment "kube-system/metrics-server:https" -m tcp -j DNAT --to-destination 10.244.3.13:10250
    -A KUBE-SEP-AF2IEXFIFGILDOMA -s 172.22.133.246/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-AF2IEXFIFGILDOMA -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.22.133.246:6443
    -A KUBE-SEP-BYZQZ474W2WKSJWB -s 10.244.4.30/32 -m comment --comment "ishare/rabbitmq:epmd" -j KUBE-MARK-MASQ
    -A KUBE-SEP-BYZQZ474W2WKSJWB -p tcp -m comment --comment "ishare/rabbitmq:epmd" -m tcp -j DNAT --to-destination 10.244.4.30:4369
    -A KUBE-SEP-CKVE2CLCR6PMYQRS -s 10.244.3.52/32 -m comment --comment "ishare/ishare-generic:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-CKVE2CLCR6PMYQRS -p tcp -m comment --comment "ishare/ishare-generic:http-port" -m tcp -j DNAT --to-destination 10.244.3.52:8005
    -A KUBE-SEP-DBIHFG6JQWPMNRPV -s 10.244.3.50/32 -m comment --comment "ingress-nginx/ingress-nginx-controller:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-DBIHFG6JQWPMNRPV -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https" -m tcp -j DNAT --to-destination 10.244.3.50:443
    -A KUBE-SEP-FWVUFG74L33JGL4N -s 172.22.133.247/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-FWVUFG74L33JGL4N -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.22.133.247:6443
    -A KUBE-SEP-GNJ6UONPSHKX42CQ -s 10.244.5.21/32 -m comment --comment "ishare/ishare-service-desk:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-GNJ6UONPSHKX42CQ -p tcp -m comment --comment "ishare/ishare-service-desk:http-port" -m tcp -j DNAT --to-destination 10.244.5.21:8006
    -A KUBE-SEP-GYP2J5ZBCTPVZLQU -s 10.244.3.16/32 -m comment --comment "ishare/rabbitmq:http-stats" -j KUBE-MARK-MASQ
    -A KUBE-SEP-GYP2J5ZBCTPVZLQU -p tcp -m comment --comment "ishare/rabbitmq:http-stats" -m tcp -j DNAT --to-destination 10.244.3.16:15672
    -A KUBE-SEP-GYPSBZU5PWL5Q4SA -s 10.244.1.58/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
    -A KUBE-SEP-GYPSBZU5PWL5Q4SA -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.1.58:53
    -A KUBE-SEP-H6AHBZAN5GUYQQH4 -s 10.244.8.8/32 -m comment --comment "ishare/tdengine:tcp6041" -j KUBE-MARK-MASQ
    -A KUBE-SEP-H6AHBZAN5GUYQQH4 -p tcp -m comment --comment "ishare/tdengine:tcp6041" -m tcp -j DNAT --to-destination 10.244.8.8:6041
    -A KUBE-SEP-I2CRNUBK3EIYJKBR -s 10.244.3.16/32 -m comment --comment "ishare/rabbitmq:epmd" -j KUBE-MARK-MASQ
    -A KUBE-SEP-I2CRNUBK3EIYJKBR -p tcp -m comment --comment "ishare/rabbitmq:epmd" -m tcp -j DNAT --to-destination 10.244.3.16:4369
    -A KUBE-SEP-J4F57GIY2L4IFBBC -s 10.244.3.48/32 -m comment --comment "ishare/ishare-authentication:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-J4F57GIY2L4IFBBC -p tcp -m comment --comment "ishare/ishare-authentication:http-port" -m tcp -j DNAT --to-destination 10.244.3.48:8001
    -A KUBE-SEP-LJYPPBUDL34Y5M64 -s 10.244.3.54/32 -m comment --comment "ishare/redis-replicas:tcp-redis" -j KUBE-MARK-MASQ
    -A KUBE-SEP-LJYPPBUDL34Y5M64 -p tcp -m comment --comment "ishare/redis-replicas:tcp-redis" -m tcp -j DNAT --to-destination 10.244.3.54:6379
    -A KUBE-SEP-M25LFUXNV4BGLBYM -s 10.244.3.16/32 -m comment --comment "ishare/rabbitmq:amqp" -j KUBE-MARK-MASQ
    -A KUBE-SEP-M25LFUXNV4BGLBYM -p tcp -m comment --comment "ishare/rabbitmq:amqp" -m tcp -j DNAT --to-destination 10.244.3.16:5672
    -A KUBE-SEP-MDKSB4XXNKCY6ZDU -s 10.244.1.58/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
    -A KUBE-SEP-MDKSB4XXNKCY6ZDU -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.1.58:9153
    -A KUBE-SEP-MXRZBQNCU7KNR4TO -s 10.244.3.19/32 -m comment --comment "ishare/tdengine:tcp6041" -j KUBE-MARK-MASQ
    -A KUBE-SEP-MXRZBQNCU7KNR4TO -p tcp -m comment --comment "ishare/tdengine:tcp6041" -m tcp -j DNAT --to-destination 10.244.3.19:6041
    -A KUBE-SEP-NNT5TGOFWGEP6HG2 -s 10.244.4.30/32 -m comment --comment "ishare/rabbitmq:amqp" -j KUBE-MARK-MASQ
    -A KUBE-SEP-NNT5TGOFWGEP6HG2 -p tcp -m comment --comment "ishare/rabbitmq:amqp" -m tcp -j DNAT --to-destination 10.244.4.30:5672
    -A KUBE-SEP-OQ77EA3T2F2AQDCR -s 10.244.3.53/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
    -A KUBE-SEP-OQ77EA3T2F2AQDCR -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.3.53:53
    -A KUBE-SEP-OZE5UHYY2DV5MV3S -s 10.244.8.11/32 -m comment --comment "ishare/mysql-primary:mysql" -j KUBE-MARK-MASQ
    -A KUBE-SEP-OZE5UHYY2DV5MV3S -p tcp -m comment --comment "ishare/mysql-primary:mysql" -m tcp -j DNAT --to-destination 10.244.8.11:3306
    -A KUBE-SEP-PNFALMF2KZLYSQR2 -s 10.244.4.56/32 -m comment --comment "ishare/ishare-device:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-PNFALMF2KZLYSQR2 -p tcp -m comment --comment "ishare/ishare-device:http-port" -m tcp -j DNAT --to-destination 10.244.4.56:8004
    -A KUBE-SEP-RIASCGFSVB4YD4L6 -s 10.244.3.16/32 -m comment --comment "ishare/rabbitmq:dist" -j KUBE-MARK-MASQ
    -A KUBE-SEP-RIASCGFSVB4YD4L6 -p tcp -m comment --comment "ishare/rabbitmq:dist" -m tcp -j DNAT --to-destination 10.244.3.16:25672
    -A KUBE-SEP-RR5JKKVQXOPBH7TR -s 10.244.3.50/32 -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -j KUBE-MARK-MASQ
    -A KUBE-SEP-RR5JKKVQXOPBH7TR -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http" -m tcp -j DNAT --to-destination 10.244.3.50:80
    -A KUBE-SEP-SG3ESIDSSG2KUW7I -s 10.244.1.57/32 -m comment --comment "ishare/ishare-authentication:http-port" -j KUBE-MARK-MASQ
    -A KUBE-SEP-SG3ESIDSSG2KUW7I -p tcp -m comment --comment "ishare/ishare-authentication:http-port" -m tcp -j DNAT --to-destination 10.244.1.57:8001
    -A KUBE-SEP-TUVO3SSFGJGVLTOK -s 10.244.3.53/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
    -A KUBE-SEP-TUVO3SSFGJGVLTOK -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.3.53:9153
    -A KUBE-SEP-UXA2BORWULZ3QPTK -s 10.244.4.30/32 -m comment --comment "ishare/rabbitmq:http-stats" -j KUBE-MARK-MASQ
    -A KUBE-SEP-UXA2BORWULZ3QPTK -p tcp -m comment --comment "ishare/rabbitmq:http-stats" -m tcp -j DNAT --to-destination 10.244.4.30:15672
    -A KUBE-SEP-V2AMTJ7AIOJFBJWI -s 10.244.4.26/32 -m comment --comment "ishare/tdengine:tcp6041" -j KUBE-MARK-MASQ
    -A KUBE-SEP-V2AMTJ7AIOJFBJWI -p tcp -m comment --comment "ishare/tdengine:tcp6041" -m tcp -j DNAT --to-destination 10.244.4.26:6041
    -A KUBE-SEP-VXEL2XCQPAU37WDR -s 10.244.3.19/32 -m comment --comment "ishare/tdengine:tcp6030" -j KUBE-MARK-MASQ
    -A KUBE-SEP-VXEL2XCQPAU37WDR -p tcp -m comment --comment "ishare/tdengine:tcp6030" -m tcp -j DNAT --to-destination 10.244.3.19:6030
    -A KUBE-SEP-W5JAMJZJI7HJMYT7 -s 10.244.8.10/32 -m comment --comment "ishare/redis-master:tcp-redis" -j KUBE-MARK-MASQ
    -A KUBE-SEP-W5JAMJZJI7HJMYT7 -p tcp -m comment --comment "ishare/redis-master:tcp-redis" -m tcp -j DNAT --to-destination 10.244.8.10:6379
    -A KUBE-SEP-XAT6W57LDBZGXZHC -s 10.244.1.58/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
    -A KUBE-SEP-XAT6W57LDBZGXZHC -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.1.58:53
    -A KUBE-SEP-XW5C57KTUJ7ZU2YX -s 10.244.3.53/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
    -A KUBE-SEP-XW5C57KTUJ7ZU2YX -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.3.53:53
    -A KUBE-SEP-Z4YCKRJHHWCLOMZQ -s 10.244.0.12/32 -m comment --comment "ingress-nginx/ingress-nginx-controller:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-Z4YCKRJHHWCLOMZQ -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https" -m tcp -j DNAT --to-destination 10.244.0.12:443
    -A KUBE-SERVICES -d 10.98.6.208/32 -p tcp -m comment --comment "ishare/ishare-report:http-port cluster IP" -j KUBE-SVC-DLLTBCJO5P4IMZYT
    -A KUBE-SERVICES -d 10.110.74.60/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http cluster IP" -j KUBE-SVC-CG5I4G2RS3ZVWGLK
    -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -j KUBE-SVC-TCOU7JCQXEZGVUNU
    -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -j KUBE-SVC-JD5MR3NA4I4DYORP
    -A KUBE-SERVICES -d 10.101.228.233/32 -p tcp -m comment --comment "ishare/ishare-payment:http-port cluster IP" -j KUBE-SVC-5AIGOQAWHQWJGK3F
    -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SERVICES -d 10.104.151.34/32 -p tcp -m comment --comment "ishare/ishare-generic:http-port cluster IP" -j KUBE-SVC-KOTPX4FFBKJ3EAPA
    -A KUBE-SERVICES -d 10.101.32.204/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP" -j KUBE-SVC-EZYNCFY2F7N6OQA2
    -A KUBE-SERVICES -d 10.109.49.125/32 -p tcp -m comment --comment "ishare/mysql-secondary:mysql cluster IP" -j KUBE-SVC-Y2JLVXWGMJNEWTZD
    -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -j KUBE-SVC-ERIFXISQEP7F7OF4
    -A KUBE-SERVICES -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:amqp cluster IP" -j KUBE-SVC-XGTLGVHTHRGIX6Y3
    -A KUBE-SERVICES -d 10.97.110.218/32 -p tcp -m comment --comment "ishare/ishare-authentication:http-port cluster IP" -j KUBE-SVC-P6EDUNCH36MVSA7G
    -A KUBE-SERVICES -d 10.99.59.22/32 -p tcp -m comment --comment "ishare/redis-master:tcp-redis cluster IP" -j KUBE-SVC-NMX4ZDKZALE5KEVE
    -A KUBE-SERVICES -d 10.102.197.192/32 -p tcp -m comment --comment "ishare/ishare-device:http-port cluster IP" -j KUBE-SVC-E6DCLSPEF4PFLSHX
    -A KUBE-SERVICES -d 10.96.152.98/32 -p tcp -m comment --comment "ishare/ishare-service-desk:http-port cluster IP" -j KUBE-SVC-72ZXZK3WL2NO2DIF
    -A KUBE-SERVICES -d 10.110.74.60/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https cluster IP" -j KUBE-SVC-EDNDUDH2C75GIR6O
    -A KUBE-SERVICES -d 10.103.20.68/32 -p tcp -m comment --comment "ishare/mysql-primary:mysql cluster IP" -j KUBE-SVC-HQF4EG425QBR63QW
    -A KUBE-SERVICES -d 10.104.183.30/32 -p tcp -m comment --comment "kube-system/metrics-server:https cluster IP" -j KUBE-SVC-Z4ANX4WAEWEBLCTM
    -A KUBE-SERVICES -d 10.100.144.190/32 -p tcp -m comment --comment "ishare/tdengine:tcp6030 cluster IP" -j KUBE-SVC-7EJ2UJO7JY5Y4RI7
    -A KUBE-SERVICES -d 10.100.144.190/32 -p tcp -m comment --comment "ishare/tdengine:tcp6041 cluster IP" -j KUBE-SVC-FVKSXUFSB7ZOP2CX
    -A KUBE-SERVICES -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:epmd cluster IP" -j KUBE-SVC-KHB3CZCDQCMYK6SK
    -A KUBE-SERVICES -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:dist cluster IP" -j KUBE-SVC-A3U4QUBRTPAXX6KF
    -A KUBE-SERVICES -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:http-stats cluster IP" -j KUBE-SVC-SWIWJ3IRZBO72NNS
    -A KUBE-SERVICES -d 10.97.167.148/32 -p tcp -m comment --comment "ishare/redis-replicas:tcp-redis cluster IP" -j KUBE-SVC-BNQW7IJZ3AABAZQM
    -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
    -A KUBE-SVC-5AIGOQAWHQWJGK3F ! -s 10.244.0.0/16 -d 10.101.228.233/32 -p tcp -m comment --comment "ishare/ishare-payment:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-5AIGOQAWHQWJGK3F -m comment --comment "ishare/ishare-payment:http-port -> 10.244.5.23:8009" -j KUBE-SEP-5M7IZ6GNSLNN4UYI
    -A KUBE-SVC-72ZXZK3WL2NO2DIF ! -s 10.244.0.0/16 -d 10.96.152.98/32 -p tcp -m comment --comment "ishare/ishare-service-desk:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-72ZXZK3WL2NO2DIF -m comment --comment "ishare/ishare-service-desk:http-port -> 10.244.5.21:8006" -j KUBE-SEP-GNJ6UONPSHKX42CQ
    -A KUBE-SVC-7EJ2UJO7JY5Y4RI7 ! -s 10.244.0.0/16 -d 10.100.144.190/32 -p tcp -m comment --comment "ishare/tdengine:tcp6030 cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-7EJ2UJO7JY5Y4RI7 -m comment --comment "ishare/tdengine:tcp6030 -> 10.244.3.19:6030" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-VXEL2XCQPAU37WDR
    -A KUBE-SVC-7EJ2UJO7JY5Y4RI7 -m comment --comment "ishare/tdengine:tcp6030 -> 10.244.4.26:6030" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-52YKHRQ5LIBEU4SK
    -A KUBE-SVC-7EJ2UJO7JY5Y4RI7 -m comment --comment "ishare/tdengine:tcp6030 -> 10.244.8.8:6030" -j KUBE-SEP-2AKMKIN6UXCGPSIT
    -A KUBE-SVC-A3U4QUBRTPAXX6KF ! -s 10.244.0.0/16 -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:dist cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-A3U4QUBRTPAXX6KF -m comment --comment "ishare/rabbitmq:dist -> 10.244.3.16:25672" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-RIASCGFSVB4YD4L6
    -A KUBE-SVC-A3U4QUBRTPAXX6KF -m comment --comment "ishare/rabbitmq:dist -> 10.244.4.30:25672" -j KUBE-SEP-6M2BJLQLMJYJH3IL
    -A KUBE-SVC-BNQW7IJZ3AABAZQM ! -s 10.244.0.0/16 -d 10.97.167.148/32 -p tcp -m comment --comment "ishare/redis-replicas:tcp-redis cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-BNQW7IJZ3AABAZQM -m comment --comment "ishare/redis-replicas:tcp-redis -> 10.244.3.54:6379" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-LJYPPBUDL34Y5M64
    -A KUBE-SVC-BNQW7IJZ3AABAZQM -m comment --comment "ishare/redis-replicas:tcp-redis -> 10.244.8.9:6379" -j KUBE-SEP-53IF7WEBFRCU3XBV
    -A KUBE-SVC-CG5I4G2RS3ZVWGLK ! -s 10.244.0.0/16 -d 10.110.74.60/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-CG5I4G2RS3ZVWGLK -m comment --comment "ingress-nginx/ingress-nginx-controller:http -> 10.244.0.12:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-A3PKFORORSGO3X6P
    -A KUBE-SVC-CG5I4G2RS3ZVWGLK -m comment --comment "ingress-nginx/ingress-nginx-controller:http -> 10.244.3.50:80" -j KUBE-SEP-RR5JKKVQXOPBH7TR
    -A KUBE-SVC-DLLTBCJO5P4IMZYT ! -s 10.244.0.0/16 -d 10.98.6.208/32 -p tcp -m comment --comment "ishare/ishare-report:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-DLLTBCJO5P4IMZYT -m comment --comment "ishare/ishare-report:http-port -> 10.244.4.58:8008" -j KUBE-SEP-2ETQ6YRBFHOP6RWH
    -A KUBE-SVC-E6DCLSPEF4PFLSHX ! -s 10.244.0.0/16 -d 10.102.197.192/32 -p tcp -m comment --comment "ishare/ishare-device:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-E6DCLSPEF4PFLSHX -m comment --comment "ishare/ishare-device:http-port -> 10.244.0.33:8004" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4GRPFVPOMUQIZBG7
    -A KUBE-SVC-E6DCLSPEF4PFLSHX -m comment --comment "ishare/ishare-device:http-port -> 10.244.4.56:8004" -j KUBE-SEP-PNFALMF2KZLYSQR2
    -A KUBE-SVC-EDNDUDH2C75GIR6O ! -s 10.244.0.0/16 -d 10.110.74.60/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-EDNDUDH2C75GIR6O -m comment --comment "ingress-nginx/ingress-nginx-controller:https -> 10.244.0.12:443" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-Z4YCKRJHHWCLOMZQ
    -A KUBE-SVC-EDNDUDH2C75GIR6O -m comment --comment "ingress-nginx/ingress-nginx-controller:https -> 10.244.3.50:443" -j KUBE-SEP-DBIHFG6JQWPMNRPV
    -A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.1.58:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XAT6W57LDBZGXZHC
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.244.3.53:53" -j KUBE-SEP-XW5C57KTUJ7ZU2YX
    -A KUBE-SVC-EZYNCFY2F7N6OQA2 ! -s 10.244.0.0/16 -d 10.101.32.204/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-EZYNCFY2F7N6OQA2 -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook -> 10.244.0.12:8443" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3NJX7UWK2XR6D7T2
    -A KUBE-SVC-EZYNCFY2F7N6OQA2 -m comment --comment "ingress-nginx/ingress-nginx-controller-admission:https-webhook -> 10.244.3.50:8443" -j KUBE-SEP-3SQXK76CZKJH6HMR
    -A KUBE-SVC-FVKSXUFSB7ZOP2CX ! -s 10.244.0.0/16 -d 10.100.144.190/32 -p tcp -m comment --comment "ishare/tdengine:tcp6041 cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-FVKSXUFSB7ZOP2CX -m comment --comment "ishare/tdengine:tcp6041 -> 10.244.3.19:6041" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-MXRZBQNCU7KNR4TO
    -A KUBE-SVC-FVKSXUFSB7ZOP2CX -m comment --comment "ishare/tdengine:tcp6041 -> 10.244.4.26:6041" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-V2AMTJ7AIOJFBJWI
    -A KUBE-SVC-FVKSXUFSB7ZOP2CX -m comment --comment "ishare/tdengine:tcp6041 -> 10.244.8.8:6041" -j KUBE-SEP-H6AHBZAN5GUYQQH4
    -A KUBE-SVC-HQF4EG425QBR63QW ! -s 10.244.0.0/16 -d 10.103.20.68/32 -p tcp -m comment --comment "ishare/mysql-primary:mysql cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-HQF4EG425QBR63QW -m comment --comment "ishare/mysql-primary:mysql -> 10.244.8.11:3306" -j KUBE-SEP-OZE5UHYY2DV5MV3S
    -A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.1.58:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MDKSB4XXNKCY6ZDU
    -A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.244.3.53:9153" -j KUBE-SEP-TUVO3SSFGJGVLTOK
    -A KUBE-SVC-KHB3CZCDQCMYK6SK ! -s 10.244.0.0/16 -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:epmd cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-KHB3CZCDQCMYK6SK -m comment --comment "ishare/rabbitmq:epmd -> 10.244.3.16:4369" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-I2CRNUBK3EIYJKBR
    -A KUBE-SVC-KHB3CZCDQCMYK6SK -m comment --comment "ishare/rabbitmq:epmd -> 10.244.4.30:4369" -j KUBE-SEP-BYZQZ474W2WKSJWB
    -A KUBE-SVC-KOTPX4FFBKJ3EAPA ! -s 10.244.0.0/16 -d 10.104.151.34/32 -p tcp -m comment --comment "ishare/ishare-generic:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-KOTPX4FFBKJ3EAPA -m comment --comment "ishare/ishare-generic:http-port -> 10.244.3.52:8005" -j KUBE-SEP-CKVE2CLCR6PMYQRS
    -A KUBE-SVC-NMX4ZDKZALE5KEVE ! -s 10.244.0.0/16 -d 10.99.59.22/32 -p tcp -m comment --comment "ishare/redis-master:tcp-redis cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-NMX4ZDKZALE5KEVE -m comment --comment "ishare/redis-master:tcp-redis -> 10.244.8.10:6379" -j KUBE-SEP-W5JAMJZJI7HJMYT7
    -A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 172.22.133.246:6443" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-AF2IEXFIFGILDOMA
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 172.22.133.247:6443" -j KUBE-SEP-FWVUFG74L33JGL4N
    -A KUBE-SVC-P6EDUNCH36MVSA7G ! -s 10.244.0.0/16 -d 10.97.110.218/32 -p tcp -m comment --comment "ishare/ishare-authentication:http-port cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-P6EDUNCH36MVSA7G -m comment --comment "ishare/ishare-authentication:http-port -> 10.244.1.57:8001" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SG3ESIDSSG2KUW7I
    -A KUBE-SVC-P6EDUNCH36MVSA7G -m comment --comment "ishare/ishare-authentication:http-port -> 10.244.3.48:8001" -j KUBE-SEP-J4F57GIY2L4IFBBC
    -A KUBE-SVC-SWIWJ3IRZBO72NNS ! -s 10.244.0.0/16 -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:http-stats cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-SWIWJ3IRZBO72NNS -m comment --comment "ishare/rabbitmq:http-stats -> 10.244.3.16:15672" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GYP2J5ZBCTPVZLQU
    -A KUBE-SVC-SWIWJ3IRZBO72NNS -m comment --comment "ishare/rabbitmq:http-stats -> 10.244.4.30:15672" -j KUBE-SEP-UXA2BORWULZ3QPTK
    -A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.1.58:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GYPSBZU5PWL5Q4SA
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.3.53:53" -j KUBE-SEP-OQ77EA3T2F2AQDCR
    -A KUBE-SVC-XGTLGVHTHRGIX6Y3 ! -s 10.244.0.0/16 -d 10.99.215.167/32 -p tcp -m comment --comment "ishare/rabbitmq:amqp cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-XGTLGVHTHRGIX6Y3 -m comment --comment "ishare/rabbitmq:amqp -> 10.244.3.16:5672" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-M25LFUXNV4BGLBYM
    -A KUBE-SVC-XGTLGVHTHRGIX6Y3 -m comment --comment "ishare/rabbitmq:amqp -> 10.244.4.30:5672" -j KUBE-SEP-NNT5TGOFWGEP6HG2
    -A KUBE-SVC-Y2JLVXWGMJNEWTZD ! -s 10.244.0.0/16 -d 10.109.49.125/32 -p tcp -m comment --comment "ishare/mysql-secondary:mysql cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-Y2JLVXWGMJNEWTZD -m comment --comment "ishare/mysql-secondary:mysql -> 10.244.4.27:3306" -j KUBE-SEP-23QMUZO34QB543PV
    -A KUBE-SVC-Z4ANX4WAEWEBLCTM ! -s 10.244.0.0/16 -d 10.104.183.30/32 -p tcp -m comment --comment "kube-system/metrics-server:https cluster IP" -j KUBE-MARK-MASQ
    -A KUBE-SVC-Z4ANX4WAEWEBLCTM -m comment --comment "kube-system/metrics-server:https -> 10.244.3.13:10250" -j KUBE-SEP-ABJFV2KETYMQW2T3
    COMMIT
    # Completed on Thu Dec 12 10:59:03 2024
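
A side note on reading this dump: to isolate the rules for a single service, grepping the NAT table for its ClusterIP is handy (a sketch, using the ishare-payment ClusterIP):

# All NAT rules touching the ishare-payment ClusterIP
iptables-save -t nat | grep 10.101.228.233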
    

Test cases

1. Pod tries to access another pod on the same node via Service

# nslookup success
ishare@hs-red:/var/app/ishare$ k exec -it nettools-purple -- bash -c 'nslookup ishare-payment.ishare.svc.cluster.local'
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	ishare-payment.ishare.svc.cluster.local
Address: 10.101.228.233

# test port failed with service name
ishare@hs-red:/var/app/ishare$ k exec -it nettools-purple -- bash -c 'nc -z -w 3 ishare-payment.ishare.svc.cluster.local 8009 || echo failed'
failed

# test port failed with service ip
ishare@hs-red:/var/app/ishare$ k exec -it nettools-purple -- bash -c 'nc -z -w 3 10.101.228.233 8009 || echo failed'
failed

2. Pod tries to access another pod on the same node via Pod IP

# test port success with pod ip
ishare@hs-red:/var/app/ishare$ k exec -it nettools-purple -- bash -c 'nc -z -w 3 10.244.5.23 8009 && echo ok'
Connection to 10.244.5.23 8009 port [tcp/*] succeeded!
ok

3. Pod tries to access another pod on a different node via Service

# test port from hs-blue to hs-purple via service, works fine
ishare@hs-red:/var/app/debug$ k exec -it nettools-blue -- bash -c 'nc -z -w 3 ishare-payment.ishare.svc.cluster.local 8009 && echo ok'
Connection to ishare-payment.ishare.svc.cluster.local (10.101.228.233) 8009 port [tcp/*] succeeded!
ok

# test port from hs-orange to hs-purple via service, works fine
ishare@hs-red:/var/app/debug$ k exec -it nettools-orange -- bash -c 'nc -z -w 3 ishare-payment.ishare.svc.cluster.local 8009 && echo ok'
Connection to ishare-payment.ishare.svc.cluster.local (10.101.228.233) 8009 port [tcp/*] succeeded!
ok
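
To run the same check from all three debug pods in one pass, a small loop helps (sketch):

for p in nettools-blue nettools-orange nettools-purple; do
  echo "--- $p ---"
  kubectl exec "$p" -- nc -z -w 3 ishare-payment.ishare.svc.cluster.local 8009 && echo ok || echo failed
done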

Thanks again for your help. Yes, the /etc/resolv.conf files look good; you can find the output in my investigation steps above.

You posted an extraordinary amount of information, but it’s still hard to find the problem. IIUC it’s:

Pods on the purple node can’t access services, by name or IP, but pods on other nodes can.

Is that right? I can’t say for sure why, but the iptables-save output looks like a victim of the iptables version problem: the tool inside kube-proxy is a different version than the one on your host, so it mis-parses the kernel data (note: no --dport clauses).
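
One quick way to confirm that mismatch is to compare the iptables build on the host with the one inside the kube-proxy container (a sketch; the pod name is taken from your listing above):

# On the node
iptables --version        # e.g. iptables v1.8.7 (nf_tables) or (legacy)
# Inside kube-proxy
kubectl -n kube-system exec kube-proxy-8mzzh -- iptables --version

If one reports (legacy) and the other (nf_tables), rules written by one backend can show up mangled when dumped by the other.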

I think you might need to install tcpdump and see which interfaces are observing the traffic on purple. Clearly SOMETHING is wrong there.

On hs-orange:

kubectl logs -n kube-flannel kube-flannel-ds-5j974
iptables --version
kubectl get networkpolicy --all-namespaces

cat /etc/cni/net.d/10-flannel.conflist
This should be consistent with the 10.244.0.0/16 pod network

# Flush all filter and NAT rules, then restart kubelet so kube-proxy/flannel rebuild them
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
systemctl restart kubelet

Ensure the correct CIDR here:

kubectl get cm -n kube-system kube-proxy -o yaml | grep clusterCIDR

Then watch where traffic to the service actually goes:

tcpdump -i any host 10.101.228.233 and port 8009

On all nodes, check and compare with hs-orange:

ip link show | grep mtu
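
For example, to compare the flannel VXLAN interface MTU across nodes at a glance (a sketch; assumes the default vxlan backend, whose interface is flannel.1, and SSH access to each node):

for n in hs-red hs-green hs-blue hs-orange hs-purple hs-yellow; do
  echo -n "$n: "
  ssh "$n" "ip -o link show flannel.1 | grep -o 'mtu [0-9]*'"
done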