On-premises: cannot understand how external IPs work

metallb
service
on-prem
loadbalancer

#1

Hi, forgive my bad English.

I’m trying to set up a k8s (1.13.0) cluster on-premises using 5 VMware servers and
Kubespray: 3 masters and 2 nodes.
I can spawn a “hello world” pod using this YAML:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 8001
    targetPort: 8080
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080

The pod is running on one node, but it is not reachable from outside (the external IP
stays in the pending state). After searching the net, I patched the service to assign the node’s real IP
to the service:

kubectl patch service hello-world -p '{"spec":{"externalIPs":["192.168.10.201"]}}'

Immediately after that, the node becomes NotReady:

NAME           STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
kube-master1   Ready      master   28h   v1.13.0   192.168.10.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-master2   Ready      master   28h   v1.13.0   192.168.10.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-master3   Ready      master   28h   v1.13.0   192.168.10.103   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-node1     NotReady   node     28h   v1.13.0   192.168.10.201   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-node2     Ready      node     28h   v1.13.0   192.168.10.202   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1

It seems that the kubelet cannot reach the
localhost:6443 service, but the port is listening on the node:

tcp        0      0 127.0.0.1:6443          0.0.0.0:*               LISTEN      35993/nginx: master

The logs on the node:

Dec 27 06:39:49 kube-node1 kubelet: I1227 06:39:49.772280   37668 kubelet.go:1953] SyncLoop (PLEG): "hello-world-564ccf44cd-7ls5s_default(34577622-09a2-11e9-bdcf-000c29c8ea92)", event: &pleg.PodLifecycleEvent{ID:"34577622-09a2-11e9-bdcf-000c29c8ea92", Type:"ContainerStarted", Data:"f6075992d16eea03d6ae2db372e0b5d0e67a635ae4e9bca1231766baa7cd48a9"}
Dec 27 06:39:58 kube-node1 kubelet: I1227 06:39:58.004714   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:08 kube-node1 kubelet: I1227 06:40:08.019561   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:18 kube-node1 kubelet: I1227 06:40:18.033048   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:28 kube-node1 kubelet: I1227 06:40:28.067913   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:38 kube-node1 kubelet: I1227 06:40:38.079062   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:48 kube-node1 kubelet: I1227 06:40:48.095218   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:41:08 kube-node1 kubelet: E1227 06:41:08.110480   37668 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "kube-node1": Get https://localhost:6443/api/v1/nodes/kube-node1?resourceVersion=0&timeout=10s: context deadline exceeded
Dec 27 06:41:18 kube-node1 kubelet: E1227 06:41:18.110915   37668 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "kube-node1": Get https://localhost:6443/api/v1/nodes/kube-node1?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
...

Any idea? Thanks for your help.


#2

Service Type LoadBalancer requires some form of external provider to map external IPs to services. When deploying in a cloud provider, this is usually a provisioned external Load Balancer (e.g. AWS ELB).

For on-prem deployments, options are somewhat limited. A good option is MetalLB, which can manage an external IP pool via BGP or Layer 2 (ARP). We’ve used it in production for some time with no problems :slight_smile:
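For illustration, here is a minimal sketch of a Layer 2 setup in the ConfigMap-based configuration format MetalLB used around that time; the pool name and the address range below are assumptions and must be adapted to a free range in your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250

Once that is applied, any Service of type LoadBalancer gets an external IP assigned from this pool instead of staying in pending.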


#3

I have tried NodePort too, without any success. Same problem.
I will read about MetalLB. Thanks!!


#4

Very good. MetalLB is running and I can now get load balancer IPs automatically from MetalLB. Thanks a lot for your help!!

I’m reading about BGP. Currently I use Layer 2 mode.


#5

Hey, I have the same setup as you, but I have a question.

My exposed Service of type LoadBalancer now has the IP 192.168.1.241 (assigned by MetalLB).

I can curl 192.168.1.241 on the master machine, but now I want to reach it through my external IP (public to the internet).

How can I do that? I tried it with iptables, but it isn’t working.


#6

It sort of depends on how your network is set up, but for the most common scenario you will need to do a manual NAT mapping from the external IP to the private IP.


#7

The NAT (or PAT) mappings can be done using a router or something similar :wink:


#8

I access my master server via one IP, a.b.c.d (the IP is public to the internet).

How can I route a.b.c.d to my 192.168.1.241?

Do you have an example? Sorry, but I have been trying for two days…

I tried DNAT/SNAT rules with iptables, but it is not working.

I want to curl 192.168.1.241 from my local machine to reach the LoadBalancer with the IP 192.168.1.241.


#9

Is your server directly exposed to the internet? Is the public IP attached directly to the server?
If yes, follow this to get it working (port forwarding using iptables): https://www.systutorials.com/816/port-forwarding-using-iptables/

Otherwise, you have to configure port forwarding (NAT/PAT) on the router connected to the internet.
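For example, a minimal sketch for the first case: a.b.c.d and 192.168.1.241 are taken from this thread, and port 80 is only an assumption about where your service listens, so adjust both to your setup:

# enable packet forwarding on the host holding the public IP
sysctl -w net.ipv4.ip_forward=1

# rewrite the destination of traffic arriving for the public IP on port 80
# to the MetalLB-assigned service IP
iptables -t nat -A PREROUTING -p tcp -d a.b.c.d --dport 80 -j DNAT --to-destination 192.168.1.241:80

# allow the forwarded traffic and make sure return traffic flows back through this host
iptables -A FORWARD -p tcp -d 192.168.1.241 --dport 80 -j ACCEPT
iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.241 --dport 80 -j MASQUERADE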