IPVS round-robin problem causing Jetty requests to be distributed in an unbalanced way

Installation method: On-prem installation via kubespray v1.8.5
Host OS: CentOS 7
CNI and version:

calicoctl version

Client Version: v3.1.3
Build date: 2018-05-30T17:15:59+0000
Git commit: 231083c2
Cluster Version: v3.1.3
Cluster Type: kubespray,bgp,k8s
CRI and version:

Here is a Grafana dashboard built on a Prometheus datasource. Prometheus is configured to collect JVM metrics from a Spring Boot application that has the Micrometer library enabled, and the dashboard shows that one pod is receiving more requests than the others.
[Grafana dashboard screenshot: request rate per pod, with one pod clearly higher than the rest]
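
For reference, the panel boils down to a per-pod request rate; a minimal sketch of the underlying query, assuming Prometheus is reachable at prometheus:9090, the scrape config attaches a pod label, and the metric keeps Micrometer's default name for Spring Boot HTTP traffic (http_server_requests_seconds_count):

# per-pod HTTP request rate over the last 5 minutes, queried straight from the Prometheus API
curl -sG 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=sum by (pod) (rate(http_server_requests_seconds_count[5m]))'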

At one of the workers:

watch -n 1 'ipvsadm -Ln --stats --rate | grep -A5 10.35.73.56:31075'

Every 1.0s: ipvsadm -Ln --stats --rate | grep -A5 10.35.73.56:31075 Mon Jun 10 20:58:54 2019

TCP 10.35.73.56:31075 7960932 152227K 99464154 22531M 53934M
-> 10.233.66.105:8080 707253 31566147 20559806 5351M 12813M
-> 10.233.68.30:8080 1280351 11372996 7594816 1239M 2276M
-> 10.233.89.207:8080 1280859 25865302 16742211 3923M 9576M
-> 10.233.89.208:8080 1280288 24655031 15959408 3699M 9045M
-> 10.233.111.222:8080 1280804 11397386 7601821 1243M 2290M

watch -n 1 'ipvsadm -Ln | grep -A5 10.35.73.56:31075'

Every 1.0s: ipvsadm -Ln | grep -A5 10.35.73.56:31075 Mon Jun 10 20:59:51 2019

TCP 10.35.73.56:31075 rr
-> 10.233.66.105:8080 Masq 1 4 19
-> 10.233.68.30:8080 Masq 1 0 18
-> 10.233.89.207:8080 Masq 1 0 24
-> 10.233.89.208:8080 Masq 1 0 20
-> 10.233.111.222:8080 Masq 1 0 28
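
The imbalance also shows up in the IPVS connection table itself; a quick sketch for counting entries per backend, assuming the destination address is the last column of the ipvsadm -Lnc output:

# count connection-table entries per real server behind the NodePort
ipvsadm -Lnc | grep 31075 | awk '{print $NF}' | sort | uniq -c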

The service is defined as a NodePort and all the workers are added to an external LB's pool with a least-connection algorithm.
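
For completeness, the Service spec can be checked for settings that would skew distribution on top of rr, such as sessionAffinity: ClientIP (which makes kube-proxy add IPVS persistence) or externalTrafficPolicy: Local (which limits each node to its local pods); the service name below is a placeholder:

# inspect affinity / traffic-policy / nodePort settings of the (hypothetical) jetty service
kubectl get svc jetty-service -o yaml | grep -E 'sessionAffinity|externalTrafficPolicy|nodePort'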

I tried to change the load-balancing method with ipvsadm -E -t 10.35.73.56:31075 -s lc, but ipvsadm -Ln still showed the service as round robin.
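
Presumably kube-proxy reapplies the IPVS rules on its periodic sync and overwrites manual edits; the scheduler it programs comes from its own configuration (the --ipvs-scheduler flag, or ipvs.scheduler in the KubeProxyConfiguration). A sketch for checking and changing it, assuming the standard kube-proxy ConfigMap in kube-system and a DaemonSet labelled k8s-app=kube-proxy (kubespray may deploy kube-proxy differently):

# show the ipvs section of the kube-proxy configuration
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A5 'ipvs:'

# after setting scheduler: lc there, restart kube-proxy on every node so it reprograms IPVS
kubectl -n kube-system delete pod -l k8s-app=kube-proxy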

When I deleted the problematic pod, requests were distributed equally. I am not using any ingress.
Any idea how I can fix this IPVS load-balancing problem?