Addon: MetalLB

Available from: 1.17
Compatibility: amd64 arm64 power s390x strict classic
Source: MetalLB

:warning: Note that currently this addon does not work under Multipass on macOS, due to filtering that macOS applies to network traffic.

MetalLB is a network load-balancer implementation that tries to “just work” on bare-metal clusters.

When you enable this add-on you will be asked for an IP address pool from which MetalLB will hand out addresses:

microk8s enable metallb

Alternatively, you can provide the IP address pool directly in the enable command:

microk8s enable metallb:10.64.140.43-10.64.140.49

Multiple comma-separated ranges, as well as CIDR notation (for example metallb:10.64.140.43-10.64.140.49,10.64.141.53-10.64.141.59,10.12.13.0/24), are supported from 1.19.
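
For example, a single enable call can cover both ranges and the CIDR block above (the addresses are illustrative; substitute your own):

microk8s enable metallb:10.64.140.43-10.64.140.49,10.64.141.53-10.64.141.59,10.12.13.0/24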

Configure IPAddressPool resources (1.25+)

It is possible to configure the IP address pools from which MetalLB allocates addresses by creating custom resources.

For example, create the following custom address pool:

# addresspool.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: custom-addresspool
  namespace: metallb-system
spec: 
  addresses:
  - 192.168.1.1-192.168.1.100

And apply it with:

microk8s kubectl apply -f addresspool.yaml
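
To confirm the pool has been created, you can list the IPAddressPool resources (this assumes the MetalLB 0.13+ CRDs shipped with recent MicroK8s releases):

microk8s kubectl get ipaddresspools.metallb.io -n metallb-system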

You can then configure which address pool MetalLB will use for each LoadBalancer service by setting the metallb.universe.tf/address-pool annotation:

apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    metallb.universe.tf/address-pool: custom-addresspool
spec:
  selector:
    name: nginx
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP 
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
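
If you save this manifest as, say, test-service.yaml (the filename is arbitrary), you can apply it and check which address MetalLB assigned:

microk8s kubectl apply -f test-service.yaml
microk8s kubectl get service test-service

The EXTERNAL-IP column should show an address taken from custom-addresspool once MetalLB has allocated one.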

Setting up a MetalLB/Ingress service

For load balancing in a MicroK8s cluster, MetalLB can be used in front of the Ingress controller. Make sure you have enabled ingress with microk8s enable ingress, then create a suitable ingress service, for example:

apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP 
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

You can save this file as ingress-service.yaml and then apply it with:

microk8s kubectl apply -f ingress-service.yaml

Now there is a load balancer which listens on an IP address allocated from the MetalLB pool and directs traffic towards one of the listening ingress controllers.
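
To check which address was allocated (the output depends on your pool), inspect the service:

microk8s kubectl get service ingress -n ingress

The EXTERNAL-IP column shows the address MetalLB assigned; that is the address to point your DNS record or firewall rules at.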

Advertise LoadBalancer IPs

By default, MicroK8s advertises all LoadBalancer IPs by responding to ARP requests on the local network. For more complex setups, like limiting the address pools for which ARP responses are sent, or for BGP configurations, refer to the MetalLB documentation.
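
For illustration only (this assumes the MetalLB 0.13+ custom resources and reuses the custom-addresspool name from the example above), an L2Advertisement that limits ARP responses to a single pool looks roughly like this:

# l2-advertisement.yaml
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: custom-l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - custom-addresspool

Apply it with microk8s kubectl apply -f l2-advertisement.yaml. If spec.ipAddressPools is omitted, the advertisement applies to all pools.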


What’s the recommended approach to using MetalLB in production? I come from an EKS setup, where I had one AWS network load balancer pointing to the main cluster IP.

Should I be using the public IP of one of the three nodes (since there is no master node anymore) as the A record for the domain?

For example, in AWS, if I reboot the EC2 instance, the public IP of the machine is gone. I suppose I should be using a reserved public IP on one of the node machines (a pseudo-master node)?

Using a node IP to access your cluster creates a single point of failure. I think you should try to integrate with the LB of the cloud you are on, and if there is no cloud substrate, use MetalLB with a floating IP. It would have been great if we had a few such use cases as part of our docs or linked from the official docs.

Hi, what I am doing might serve as the example you’re looking for, but I may need to know more first.

My node is a Hetzner AX51.

It has just one public IP address. When MetalLB asks for an IP address range, is it looking for IPs to assign to containers, which it will then map to the public IP address?

Or am I not getting it?

I would add to the doc something along the lines of:

You can confirm the addon is ready using:
sh -c "until microk8s.kubectl rollout status daemonset.apps/speaker -n metallb-system -w; do sleep 5; done"

Much like @faddat before, I’m using a single VM to run microk8s. When I enable metallb, what exactly should I use for the range? The public IP?

Thanks

You would use a range of unused IPs (one IP per load balancer service you’ll be creating) in the internal subnet. You can then use NAT or whatever to reach them from the outside.

I mostly just use one IP, and set up nginx-ingress-microk8s as described at the top of this article, to which I then forward ports 80 & 443 from my public IP.
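
A rough illustration of that port forwarding (the MetalLB address 10.64.140.43 here is made up; adapt the rules to your own gateway or router):

# On a Linux gateway, DNAT incoming web traffic to the MetalLB-assigned IP
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.64.140.43:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.64.140.43:443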

A load balancer’s job is to distribute the traffic it receives to several nodes. The load balancer needs to know the IPs of its upstream nodes, of course, but why in the world would the nodes (in this case the Kubernetes cluster) need to know the IP of the load balancer?

Indeed, why the heck do I even need a Kubernetes resource/service of type “LoadBalancer”? Is it just an advertisement to some IPAM that says “Hey, I need a load balancer, please write me back the IP when you’re done!”?

This and other questions at:

By AGuyWhoJustDoesntGetKubernetesLoadBalancers

Moving up the learning curve on Kubernetes, I find that dealing with on-prem clusters that have no external load balancer is one of the stickiest parts. Much confusion.

What are the tradeoffs of a MetalLB installation versus using Ingress objects?

MetalLB and Ingress aren’t competitors. Using the instructions above, you can set up your ingress to listen on a MetalLB IP address. This virtual address is able to float around your cluster. You set up your firewall to send web traffic to that IP address, which will always be there, rather than sending it to a single node’s IP.

Without MetalLB, you’d send web traffic to the ingress directly on a node’s IP address, which could go down for maintenance or other reasons, causing an outage. With MetalLB, that virtual address just moves to a different node, and the ingress continues to be reachable.

I’m sure I have simplified some details since I’m still learning this too, but my explanation matches my experience so far. I hope it makes sense.

This post is the first search result for “microk8s metallb”, but the repo doesn’t show up in the (first pages of) search results of google/duck/github.
Any reason not to add a link to the repo?

@sed-i Thanks. We’ve tended to include the ‘source’ to mean where the upstream project/source is located. Would you like a link to the add-on source too?

New to the K8S party. :grinning:

I used this setup on a MicroK8s cluster, but it seems that my custom-addresspool only works if it matches the IPs of my nodes. Any IP that does not match a live node doesn’t work.

Not working:
192.168.1.225-192.168.1.230:port (Nodes)
192.168.1.230-192.168.1.235:port (custom-addresspool)

Working:
192.168.1.225-192.168.1.230:port (Nodes)
192.168.1.225-192.168.1.230:port (custom-addresspool)

But if I shut down node 192.168.1.225 I can’t reach my app.
I’m a bit confused!?

default           hello-world-lb     LoadBalancer    10.152.183.237    192.168.1.230    8080►31019                                           17m
default           kubernetes         ClusterIP       10.152.183.1                       https:443►0                                          43h
kube-system       kube-dns           ClusterIP       10.152.183.10                      dns:53►0╱UDP dns-tcp:53►0 metrics:9153►0             4d1h
kube-system       metrics-server     ClusterIP       10.152.183.101                     https:443►0                                          4d1h
metallb-system    webhook-service    ClusterIP       10.152.183.231                     443►0                                                28m
portainer         portainer          NodePort        10.152.183.175                     http:9000►30777 https:9443►30779 edge:30776►30776    3d22h

Solved:
I turned on L2 advertisement (for the same subnet?).
Also, the port was set to 8080 and I was trying to reach it at port 80. :frowning_face: :face_with_spiral_eyes:

    - protocol: TCP
      port: 8080
      targetPort: 8080

Note that currently this addon does not work under Multipass on macOS, due to filtering that macOS applies to network traffic.

Could someone please explain this in more detail, perhaps with references to more authoritative resources? This is not helpful for those who aren’t intimately familiar with macOS networking, “filtering,” and why this doesn’t work. I’m not even sure what to google to understand the problem, and searching for “macos network filtering” doesn’t help.


Has the Multipass issue been resolved on macOS? Is there an issue where one can track it?