Kubewatch - no connectivity from pods to API server

I’m working on deploying Kubewatch (https://github.com/bitnami-labs/kubewatch), which runs as a pod that connects to the API server to watch for events. The pod is having trouble connecting to the API server and throws errors such as these:

ERROR: logging before flag.Parse: E1217 21:36:38.629363 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.ReplicationController: Get https://10.96.0.1:443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1217 21:36:38.639173 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1beta1.Deployment: Get https://10.96.0.1:443/apis/apps/v1beta1/deployments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

To troubleshoot, I brought up a few more pods, and found that I cannot curl the API server (10.96.0.1) from those pods either. The connection times out with no HTTP response whatsoever. I am able to curl it directly from the nodes; there I get an HTTP 403, which I think is expected since I am not passing any credentials.
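
For reference, the test from inside a throwaway pod looks roughly like this (the pod name and image are just placeholders; any image that includes curl will do):

# From a temporary pod, hit the cluster IP of the kubernetes service
kubectl run -it --rm apitest --image=curlimages/curl --restart=Never --command -- \
  curl -k -m 5 https://10.96.0.1:443/version
# -> times out, no HTTP response

# The same request straight from a node returns the 403
curl -k -m 5 https://10.96.0.1:443/version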

I don’t have another cluster to verify against at the moment, so my first question is: is it normal that a pod cannot connect to the API server?

The environment consists of two CentOS 7 VMs, one master and one node, running Kubernetes 1.11.5. The firewall is disabled, SELinux is disabled, and the following settings have been applied on both the master and the node:

[root@kube-acitest-3 ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
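
(If it matters: my understanding is that the net.bridge.* sysctls only take effect while the br_netfilter kernel module is loaded, so something like the following confirms the effective values on both machines.)

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward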

Here’s the service for the API server:
[root@kube-acitest-3 ~]# kubectl get service kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9d

[root@kube-acitest-3 ~]# kubectl get service kubernetes -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-08T14:42:28Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "31"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 7c4bf784-faf7-11e8-8f86-005056863a6e
spec:
  clusterIP: 10.96.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
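
One thing worth confirming from the same service is that it actually resolves to the API server's real address and port:

kubectl get endpoints kubernetes
# should list something like 10.10.51.215:6443 as the endpoint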

My understanding is that traffic to the service hits an iptables NAT rule that translates 10.96.0.1 to the IP of the master node, which is 10.10.51.215 (correct?). Although I am not very familiar with iptables rules, I think this is the relevant configuration:

[root@kube-acitest-4 ~]# iptables-save | grep default/kubernetes
-A KUBE-SEP-YSLNNPNL7BWNCZFE -s 10.10.51.215/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-YSLNNPNL7BWNCZFE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.10.51.215:6443
-A KUBE-SERVICES ! -s 172.20.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-YSLNNPNL7BWNCZFE
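
To see whether packets from the pods ever hit these rules, the NAT counters and the connection tracking table on the worker node can be watched while re-running the curl from a pod; roughly:

# Packet/byte counters on the nat chain that matches the cluster IP
iptables -t nat -L KUBE-SERVICES -n -v | grep 10.96.0.1

# If the conntrack tool is installed, look for entries toward the cluster IP / master
conntrack -L -d 10.96.0.1
conntrack -L -d 10.10.51.215

# Or capture the traffic directly on the node
tcpdump -ni any host 10.96.0.1 or port 6443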

Here are the pods that are unable to connect to the API server:
[root@kube-acitest-3 ~]# kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
my-release-kubewatch-55c478f498-shdpc   1/1     Running   2          3h    172.20.0.66   kube-acitest-4   <none>
netbox-6ccb545d47-2pl7d                 1/1     Running   2          3h    172.20.0.64   kube-acitest-4   <none>
netbox-6ccb545d47-brxp2                 1/1     Running   2          3h    172.20.0.65   kube-acitest-4   <none>
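
To separate a service/DNAT problem from plain pod-to-node routing, one test is to exec into one of these pods (assuming the image has curl in it; otherwise the throwaway pod above works) and hit the master's IP and port 6443 directly:

# Bypass the service VIP and talk to the API server's real address
kubectl exec -it netbox-6ccb545d47-2pl7d -- curl -k -m 5 https://10.10.51.215:6443/version

# Same request via the cluster IP for comparison
kubectl exec -it netbox-6ccb545d47-2pl7d -- curl -k -m 5 https://10.96.0.1:443/version

If both time out, that points at pod egress / CNI routing rather than the kube-proxy rules; if only the cluster IP fails, it points at the iptables/kube-proxy side.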

I appreciate any and all help; please let me know what other output or information would be useful to debug this.

Please create RBAC for the same: a service account, a role, and a role binding, and reference the service account name in the pod YAML file. Thanks
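
A rough sketch with imperative kubectl commands (all names here are placeholders, and since kubewatch lists resources across the whole cluster a ClusterRole/ClusterRoleBinding is probably what is actually needed; adjust the resource list to whatever kubewatch is configured to watch):

# Service account for the kubewatch pod (namespace assumed to be default)
kubectl create serviceaccount kubewatch -n default

# Cluster-wide read access to the resources kubewatch watches
kubectl create clusterrole kubewatch-view \
  --verb=get,list,watch \
  --resource=pods,services,deployments,replicationcontrollers,replicasets,daemonsets,jobs

kubectl create clusterrolebinding kubewatch-view \
  --clusterrole=kubewatch-view \
  --serviceaccount=default:kubewatch

# Point the pod spec at the new service account, e.g. by patching the deployment
# (deployment name assumed from the pod name shown above)
kubectl patch deployment my-release-kubewatch \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"kubewatch"}}}}'

Note that the errors above are i/o timeouts rather than 403s, so RBAC alone will not fix the connectivity, but it will be needed once the pod can actually reach the API server.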