Node did not forward message packet to pod and sent [RST] packet to the message broker

I am running a 3-node Kubernetes cluster with MicroK8s. Some applications running in Kubernetes pods communicate with a Solace message broker outside the k8s cluster. Suddenly, the node stopped forwarding messages from the message broker to the pod and sent an [RST] to the message broker (refer to packets 73105 and 73106 in the capture below). May I know what caused the node to not forward the message packet (73105) and to send an [RST] packet to the message broker?

  • 192.168.100.153 (k8s Node Address)
  • 10.1.22.138 (k8s Pod Address)
  • 192.168.99.100 (Solace Message Broker Address)
No.    Time                        Source           Destination      Protocol Length Info
73097  2022-05-28 03:15:49.111502  192.168.99.100   192.168.100.153  AMQP     76     (empty)
73098  2022-05-28 03:15:49.111566  192.168.99.100   10.1.22.138      AMQP     76     (empty)
73099  2022-05-28 03:15:49.111587  10.1.22.138      192.168.99.100   TCP      68     48296 → 5672 [ACK] Seq=1 Ack=146201 Win=501 Len=0 TSval=2957547792 TSecr=1008294801
73100  2022-05-28 03:15:49.111631  192.168.100.153  192.168.99.100   TCP      68     16886 → 5672 [ACK] Seq=1 Ack=146201 Win=501 Len=0 TSval=2957547792 TSecr=1008294801
73101  2022-05-28 03:15:51.112852  192.168.99.100   192.168.100.153  AMQP     76     (empty)
73102  2022-05-28 03:15:51.112930  192.168.99.100   10.1.22.138      AMQP     76     (empty)
73103  2022-05-28 03:15:51.112988  10.1.22.138      192.168.99.100   TCP      68     48296 → 5672 [ACK] Seq=1 Ack=146209 Win=501 Len=0 TSval=2957549793 TSecr=1008296802
73104  2022-05-28 03:15:51.113029  192.168.100.153  192.168.99.100   TCP      68     16886 → 5672 [ACK] Seq=1 Ack=146209 Win=501 Len=0 TSval=2957549793 TSecr=1008296802
73105  2022-05-28 03:15:53.114733  192.168.99.100   192.168.100.153  AMQP     76     (empty)
73106  2022-05-28 03:15:53.115263  192.168.100.153  192.168.99.100   TCP      56     16886 → 5672 [RST] Seq=1 Win=0 Len=0
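
For reference, a minimal pod-side probe for this kind of long-lived connection would be a bare TCP socket to the broker port with kernel keepalives enabled, left idle until something resets it. This is only a sketch: the address and port come from the capture above, but the keepalive values are placeholders and the real application uses an AMQP client rather than a raw socket.

    import socket

    # Bare TCP probe from inside the pod to the broker (address/port taken from the capture).
    # Kernel keepalives send periodic probes on the flow even when no AMQP traffic is sent;
    # the values below are placeholders, not anything from the real application.
    BROKER = ("192.168.99.100", 5672)

    sock = socket.create_connection(BROKER, timeout=10)           # timeout applies to connect
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)  # idle seconds before first probe (Linux)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)    # failed probes before the socket errors out
    sock.settimeout(None)                                         # block until data, EOF, or an error

    try:
        while True:
            data = sock.recv(4096)
            if not data:
                # Clean FIN from the peer; note the broker itself may close a
                # connection that never completes the AMQP handshake.
                print("peer closed the connection")
                break
    except ConnectionResetError:
        # An RST like packet 73106 would show up here.
        print("connection was reset")
    finally:
        sock.close()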

I have a problem very similar to yours.
In my k8s cluster, I can’t ping a pod IP from the master node, but it works from a slave node. I don’t know how to resolve this.

My problem is intermittent disconnection, and I am still thinking about how to troubleshoot it. I was able to ping the pod IP from one of the master nodes… What is your MicroK8s configuration? Did you update any kernel configuration for the nodes?
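
In case it helps with comparing nodes, this is roughly how I would dump the kernel settings that usually matter for pod-to-external traffic on each node. The list of sysctls is only my guess at a starting point, not something specific to your setup, and which files exist depends on the modules loaded on the node (br_netfilter, nf_conntrack, ...).

    from pathlib import Path

    # Print a few networking sysctls so they can be compared across nodes.
    # The selection is only a suggestion; a missing file simply means the
    # corresponding module isn't loaded on that node.
    SYSCTLS = [
        "net/ipv4/ip_forward",
        "net/bridge/bridge-nf-call-iptables",
        "net/netfilter/nf_conntrack_max",
        "net/netfilter/nf_conntrack_tcp_timeout_established",
    ]

    for name in SYSCTLS:
        path = Path("/proc/sys") / name
        try:
            value = path.read_text().strip()
        except OSError:
            value = "<not available on this node>"
        print(f"{name.replace('/', '.')} = {value}")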

My MicroK8s configuration:

microk8s is running
high-availability: yes
  datastore master nodes: 192.168.100.151:19001 192.168.100.152:19001 192.168.100.153:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard        # (core) The Kubernetes dashboard
    dns              # (core) CoreDNS
    ha-cluster       # (core) Configure high availability on the current node
    ingress          # (core) Ingress controller for external access
    metallb          # (core) Loadbalancer for your Kubernetes cluster
    metrics-server   # (core) K8s Metrics Server for API access to service metrics
    registry         # (core) Private image registry exposed on localhost:32000
  disabled:
    community        # (core) The community addons repository
    gpu              # (core) Automatic enablement of Nvidia CUDA
    helm             # (core) Helm 2 - the package manager for Kubernetes
    helm3            # (core) Helm 3 - Kubernetes package manager
    host-access      # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage # (core) Storage class; allocates storage from host directory
    mayastor         # (core) OpenEBS MayaStor
    prometheus       # (core) Prometheus operator for monitoring and logging
    rbac             # (core) Role-Based Access Control for authorisation
    storage          # (core) Alias to hostpath-storage add-on, deprecated