On-premise: cannot understand how external IPs work

Hi, please forgive my bad English.

I’m trying to set up a k8s (1.13.0) cluster on-premise using 5 VMware servers and
kubespray: 3 masters and 2 nodes.
I can spawn a “hello world” pod using this YAML:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 8001
    targetPort: 8080
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080

The pod is running on one node, but it is not reachable from outside (the external IP
stays in the pending state). After searching the net, I patched the service to assign
the real IP of the node to the service:

kubectl patch service hello-world -p '{"spec":{"externalIPs":["192.168.10.201"]}}'

Immediately after that, the node becomes NotReady:

NAME           STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
kube-master1   Ready      master   28h   v1.13.0   192.168.10.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-master2   Ready      master   28h   v1.13.0   192.168.10.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-master3   Ready      master   28h   v1.13.0   192.168.10.103   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-node1     NotReady   node     28h   v1.13.0   192.168.10.201   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1
kube-node2     Ready      node     28h   v1.13.0   192.168.10.202   <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   docker://18.6.1

It seems that the kubelet cannot reach the localhost:6443 service, but the port is
listening on the node:

tcp        0      0 127.0.0.1:6443          0.0.0.0:*               LISTEN      35993/nginx: master

The logs on the node:

Dec 27 06:39:49 kube-node1 kubelet: I1227 06:39:49.772280   37668 kubelet.go:1953] SyncLoop (PLEG): "hello-world-564ccf44cd-7ls5s_default(34577622-09a2-11e9-bdcf-000c29c8ea92)", event: &pleg.PodLifecycleEvent{ID:"34577622-09a2-11e9-bdcf-000c29c8ea92", Type:"ContainerStarted", Data:"f6075992d16eea03d6ae2db372e0b5d0e67a635ae4e9bca1231766baa7cd48a9"}
Dec 27 06:39:58 kube-node1 kubelet: I1227 06:39:58.004714   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:08 kube-node1 kubelet: I1227 06:40:08.019561   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:18 kube-node1 kubelet: I1227 06:40:18.033048   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:28 kube-node1 kubelet: I1227 06:40:28.067913   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:38 kube-node1 kubelet: I1227 06:40:38.079062   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:40:48 kube-node1 kubelet: I1227 06:40:48.095218   37668 setters.go:72] Using node IP: "192.168.10.201"
Dec 27 06:41:08 kube-node1 kubelet: E1227 06:41:08.110480   37668 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "kube-node1": Get https://localhost:6443/api/v1/nodes/kube-node1?resourceVersion=0&timeout=10s: context deadline exceeded
Dec 27 06:41:18 kube-node1 kubelet: E1227 06:41:18.110915   37668 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "kube-node1": Get https://localhost:6443/api/v1/nodes/kube-node1?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
...

Any idea? Thanks for your help.

Service type LoadBalancer requires some form of external provider to map external IPs to services. When deploying with a cloud provider, this is usually a provisioned external load balancer (e.g. AWS ELB).

For on-prem deployments, options are somewhat limited. A good option is MetalLB, which can manage an external IP pool via BGP or layer 2 (ARP). We’ve used it in production for some time with no problems.
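For example, a minimal layer 2 setup is just a ConfigMap with a pool of free addresses. The range below is only an example; pick unused IPs that are routable on the same network as your nodes:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250

MetalLB then assigns an address from that pool to every Service of type LoadBalancer and answers ARP for it from one of the nodes.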

I have tried NodePort too, without any success. Same problem.
I will read about MetalLB. Thanks!!
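For reference, the NodePort variant I tried looked roughly like this (nodePort 30080 is just an example value from the default 30000-32767 range); it should expose the app on every node at <node-ip>:30080:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 8001
    targetPort: 8080
    nodePort: 30080
  selector:
    app: hello-world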

Very good. MetalLB is running and I can now get LoadBalancer IPs automatically from MetalLB. Thanks a lot for your help!!

I’m reading about BGP. At the moment I use layer 2 mode.
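From what I’ve read, switching to BGP would mean a config roughly like this (the peer address and ASNs below are placeholders for my router, not tested):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.10.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.224/27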

Hey, I have the same setup as you, but I have a question.

My exposed Service of type LoadBalancer now has the IP 192.168.1.241 (assigned by MetalLB).

I can curl 192.168.1.241 on the master machine, but now I want to reach it through my external IP (public to the internet).

How can I do that? I tried it with iptables, but it isn’t working.

It sort of depends on how your network is set up, but in the most common scenario you will need to do a manual NAT mapping from the external IP to the private IP.

The NAT (or PAT) mapping can be done using a router or something similar.

I access my master server via one IP, a.b.c.d (the IP is public to the internet).

How can I route a.b.c.d to my 192.168.1.241?

Do you have an example? Sorry, but I have been trying for two days…

I tried DNAT/SNAT rules with iptables, but it is not working.

I want to curl 192.168.1.241 from my local machine to reach the LoadBalancer with the IP 192.168.1.241.

Is your server directly exposed on the internet? Is the public IP directly attached to the server?
If yes, follow this to get it working (port forwarding using iptables): https://www.systutorials.com/816/port-forwarding-using-iptables/

Otherwise, you have to configure port forwarding (NAT/PAT) on the router connected to the internet.
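If the public IP is attached to the server itself, the rules would look roughly like this (assuming you forward port 80; the port, and a.b.c.d standing in for your public IP, are placeholders to adjust for your setup):

# enable packet forwarding in the kernel
sysctl -w net.ipv4.ip_forward=1

# rewrite the destination of traffic arriving for a.b.c.d:80 to the MetalLB IP
iptables -t nat -A PREROUTING -d a.b.c.d -p tcp --dport 80 -j DNAT --to-destination 192.168.1.241:80

# masquerade the forwarded traffic so replies come back through this host
iptables -t nat -A POSTROUTING -d 192.168.1.241 -p tcp --dport 80 -j MASQUERADE

# allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -d 192.168.1.241 -p tcp --dport 80 -j ACCEPT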

Hi all, I have a similar type of setup: an on-prem cluster with 1 master, 2 workers, MetalLB, and Calico networking. The issue is that our application cannot be opened from the outside world.

NAMESPACE   NAME     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
default     test-1   LoadBalancer   a.b.c.d      e.f.g.h       80:31989/TCP   107m

I can curl e.f.g.h from within my cluster but can’t open it from a browser.

Just FYI, our cluster IP range is different from the ConfigMap address pool… Please assist.

Thanks
Anuj

Is the range of IPs you gave to MetalLB routable on your main network? If so, it should work out of the box in layer 2 mode. If you’re using BGP mode, you will want to look over the known issues with MetalLB and Calico; it requires some other considerations and planning.

Thanks mrbobbytables for your reply. I am using layer 2 mode, and the IP range of the master and worker nodes is different from the ConfigMap address pool:

Master/worker nodes --> 192.168.X.X
MetalLB config.yaml -->

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.49.X.X

I want to know: does the address range in the MetalLB config.yaml need to be the same as the cluster network? When I use the Kubernetes cluster IP range, the application opens from outside; otherwise it does not.

I have been stuck for a week. Please help!!

Thanks
Anuj Gupta

It should not be the same network as your pod or service IP ranges. It should be in the same range as your master/worker nodes (192.168.x.x) or another range that is routable on the hosts’ network.
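For example, if your nodes live in 192.168.1.0/24, the pool could be a free slice of that network (the exact range below is only illustrative; make sure those addresses are unused):

    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

The rest of the ConfigMap stays the same as yours; only the addresses change.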