As I stumble my way through learning the basics of K8s, my main concern continues to be around the networking features and flow (given I'm a network guy). I know the three service types: ClusterIP, NodePort, and LoadBalancer.
All the documentation and lectures for K8s services point to NodePort as the only way of achieving even very limited external access without introducing a LoadBalancer service or an ingress controller. However, in practice I've noticed something different. The following is my current deployment:
I just have a single Apache httpd image running on 3 nodes, and the client is completely external to the K8s cluster, as it is its own VM. (The manifests behind this are sketched just after the kubectl output below.)
[root@master-node captures]# kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master-node   Ready    master   4d6h   v1.19.4   1.1.1.1       <none>        CentOS Linux 7 (Core)   3.10.0-1160.2.2.el7.x86_64   docker://19.3.13
node1         Ready    <none>   4d5h   v1.19.4   1.1.1.2       <none>        CentOS Linux 7 (Core)   3.10.0-1160.2.2.el7.x86_64   docker://19.3.13
node2         Ready    <none>   4h     v1.19.4   1.1.1.3       <none>        CentOS Linux 7 (Core)   3.10.0-1160.2.2.el7.x86_64   docker://19.3.13
node3         Ready    <none>   4h7m   v1.19.4   1.1.1.4       <none>        CentOS Linux 7 (Core)   3.10.0-1160.2.2.el7.x86_64   docker://19.3.13
[root@master-node captures]# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
apache-deployment-668bffcfd5-t6b56   1/1     Running   0          38m   10.244.3.13   node3   <none>           <none>
apache-deployment-668bffcfd5-ts75b   1/1     Running   0          38m   10.244.2.8    node2   <none>           <none>
apache-deployment-668bffcfd5-x74s4   1/1     Running   0          38m   10.244.1.25   node1   <none>           <none>
[root@master-node captures]# kubectl get svc apache-service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
apache-service   NodePort   10.96.135.145   <none>        80:31000/TCP   54m
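For completeness, the manifests behind this look roughly like the sketch below, reconstructed from the output above; the app label and the exact httpd image tag are assumptions on my part rather than copied verbatim:

[root@master-node captures]# cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: httpd
        image: httpd:2.4        # plain Apache httpd image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  type: NodePort
  selector:
    app: apache
  ports:
  - port: 80          # the service port I curl against the cluster-IP
    targetPort: 80
    nodePort: 31000   # the node port shown in the svc output above
EOF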
What I've noticed is that I can set a route on the client for the service CIDR that the NodePort service's cluster-IP sits in (10.96.0.0/16), pointing towards the master node:
root@client1:~# ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 1.1.1.5  netmask 255.255.255.0  broadcast 1.1.1.255
root@client1:~# ip route | grep 1.1.1.1
10.96.0.0/16 via 1.1.1.1 dev eth1
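For reference, that route was added with something along the lines of the following (eth1 just happens to be the client interface facing the master at 1.1.1.1):

root@client1:~# ip route add 10.96.0.0/16 via 1.1.1.1 dev eth1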
And in doing so, I can not only reach the cluster-IP of the NodePort service, but I can do so on the native HTTP service port 80 rather than the 31000 node port:
root@client1:~# curl 10.96.135.145
<html><body><h1>It works!</h1></body></html>
root@client1:~# curl 10.96.135.145
<html><body><h1>It works!</h1></body></html>
root@client1:~# curl 10.96.135.145
<html><body><h1>It works!</h1></body></html>
root@client1:~# curl 10.96.135.145
<html><body><h1>It works!</h1></body></html>
Not only that, but a tcpdump on the master shows that this traffic does appear to be load balanced across the nodes. This really confuses me, as it seems to sidestep the usual limitations of NodePort, yet I haven't seen this use case discussed anywhere. Is there some major limitation to this flow that I'm missing?
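For anyone wanting to reproduce that check, the capture on the master was along these lines (the pod network here happens to be 10.244.0.0/16, as can be seen from the pod IPs above):

[root@master-node captures]# tcpdump -ni any 'host 10.96.135.145 or net 10.244.0.0/16'

The repeated curls show up leaving the master towards the different pod IPs (10.244.1.25, 10.244.2.8, 10.244.3.13), which is what leads me to say the traffic is being load balanced across the nodes.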