MetalLB Configuration on Oracle Cloud Infrastructure - "No Route to Host" in Controller Logs - STATE=CrashLoopBackOff

I am learning Kubernetes, and this is my environment:

  • Cloud: Oracle Cloud Infrastructure
  • VMs: 2 ARM instances, each with 4 CPUs and 12 GB RAM
  • OS: Ubuntu 20.04, image build 2023.60.30-0

I installed Kubernetes with the following init parameters, using the Flannel network add-on:

 # Generate certs, kubeconfigs, and control-plane manifests first, relax the
 # kube-apiserver probe timings, then run the remaining init phases.
 sudo kubeadm init phase certs all && 
 sudo kubeadm init phase kubeconfig all && 
 sudo kubeadm init phase control-plane all --pod-network-cidr 10.244.0.0/16 &&
 sudo sed -i 's/initialDelaySeconds: [0-9][0-9]/initialDelaySeconds: 240/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo sed -i 's/failureThreshold: [0-9]/failureThreshold: 18/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo sed -i 's/timeoutSeconds: [0-9][0-9]/timeoutSeconds: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo kubeadm init \
   --v=1 \
   --skip-phases=certs,kubeconfig,control-plane \
   --ignore-preflight-errors=all \
   --pod-network-cidr 10.244.0.0/16  

I installed MetalLB with strictARP set to true, using:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
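For reference, the metallb-native manifest itself does not touch strictARP; it is normally enabled in the kube-proxy ConfigMap before installing MetalLB. A minimal sketch of the fragment involved (edited via kubectl edit configmap kube-proxy -n kube-system, leaving the rest of the ConfigMap unchanged):

```yaml
# Fragment of the config.conf key in the kube-proxy ConfigMap (kube-system namespace).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
ipvs:
  # strictARP: true is the MetalLB L2-mode prerequisite when kube-proxy
  # runs in IPVS mode; harmless but unnecessary in plain iptables mode.
  strictARP: true
```

After changing it, the kube-proxy pods need a restart (e.g. kubectl rollout restart daemonset kube-proxy -n kube-system) for the setting to take effect.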

I then created a Layer 2 configuration defining the IPs to assign to LoadBalancer services, applying each manifest with kubectl create -f:

IPAddressPool.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.20-10.0.0.70

and the L2Advertisement:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
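As written, this L2Advertisement has no spec, which (per the MetalLB docs) makes it advertise every address pool; a sketch binding it explicitly to first-pool via the optional ipAddressPools selector:

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  # Optional: restrict this advertisement to the pool defined above.
  # Omitting spec entirely advertises all pools, so either form should work here.
  ipAddressPools:
  - first-pool
```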

This is the output of kubectl get pods --all-namespaces -o wide:

NAMESPACE        NAME                                  READY   STATUS              RESTARTS        AGE     IP           NODE            NOMINATED NODE   READINESS GATES
kube-flannel     kube-flannel-ds-9xtbs                 1/1     Running             3 (3h32m ago)   6h41m   10.0.0.222   kube-master     <none>           <none>
kube-flannel     kube-flannel-ds-d8v22                 1/1     Running             0               6h41m   10.0.0.96    kube-worker-1   <none>           <none>
kube-system      coredns-5dd5756b68-ftm87              0/1     Running             0               7h12m   10.244.1.2   kube-worker-1   <none>           <none>
kube-system      coredns-5dd5756b68-h5tsz              0/1     Running             0               7h12m   10.244.1.3   kube-worker-1   <none>           <none>
kube-system      etcd-kube-master                      1/1     Running             3 (3h32m ago)   7h12m   10.0.0.222   kube-master     <none>           <none>
kube-system      kube-apiserver-kube-master            1/1     Running             3 (3h32m ago)   7h12m   10.0.0.222   kube-master     <none>           <none>
kube-system      kube-controller-manager-kube-master   1/1     Running             3 (3h32m ago)   7h12m   10.0.0.222   kube-master     <none>           <none>
kube-system      kube-proxy-kkdjx                      1/1     Running             0               6h42m   10.0.0.96    kube-worker-1   <none>           <none>
kube-system      kube-proxy-l9p6f                      1/1     Running             3 (3h32m ago)   7h12m   10.0.0.222   kube-master     <none>           <none>
kube-system      kube-scheduler-kube-master            1/1     Running             3 (3h32m ago)   7h12m   10.0.0.222   kube-master     <none>           <none>
metallb-system   controller-7d56b4f464-fgszp           0/1     CrashLoopBackOff    62 (2m8s ago)   4h54m   10.244.1.5   kube-worker-1   <none>           <none>
metallb-system   speaker-85nxj                         0/1     ContainerCreating   0               4h54m   10.0.0.222   kube-master     <none>           <none>
metallb-system   speaker-96pb9                         0/1     ContainerCreating   0               4h54m   10.0.0.96    kube-worker-1   <none>           <none>

Node status:

NAME            STATUS   ROLES           AGE     VERSION
kube-master     Ready    control-plane   7h25m   v1.28.2
kube-worker-1   Ready    <none>          6h54m   v1.28.2

Output of kubectl -n metallb-system get all:

NAME                              READY   STATUS              RESTARTS        AGE
pod/controller-7d56b4f464-fgszp   0/1     CrashLoopBackOff    64 (5m7s ago)   5h7m
pod/speaker-85nxj                 0/1     ContainerCreating   0               5h7m
pod/speaker-96pb9                 0/1     ContainerCreating   0               5h7m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.104.48.220   <none>        443/TCP   5h7m

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   2         2         0       2            0           kubernetes.io/os=linux   5h7m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   0/1     1            0           5h7m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-7d56b4f464   1         1         0       5h7m

These are the logs of the metallb-system controller, obtained with kubectl logs controller-7d56b4f464-fgszp --namespace=metallb-system, which report "no route to host":

{"branch":"dev","caller":"main.go:155","commit":"dev","goversion":"gc / go1.19.5 / arm64","level":"info","msg":"MetalLB controller starting version 0.13.11 (commit dev, branch dev)","ts":"2023-09-29T15:17:51Z","version":"0.13.11"}

{"level":"error","ts":"2023-09-29T15:17:51Z","msg":"Failed to get API Group-Resources","error":"Get \"https://10.96.0.1:443/api?timeout=32s\": dial tcp 10.96.0.1:443: connect: no route to host","stacktrace":"sigs.k8s.io/controller-runtime/pkg/cluster.New\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/cluster/cluster.go:161\nsigs.k8s.io/controller-runtime/pkg/manager.New\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.6/pkg/manager/manager.go:351\ngo.universe.tf/metallb/internal/k8s.New\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:126\nmain.main\n\t/go/go.universe.tf/metallb/main.go:207\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}


{"level":"error","ts":"2023-09-29T15:17:51Z","logger":"setup","msg":"unable to start manager","error":"Get \"https://10.96.0.1:443/api?timeout=32s\": dial tcp 10.96.0.1:443: connect: no route to host","stacktrace":"go.universe.tf/metallb/internal/k8s.New\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:147\nmain.main\n\t/go/go.universe.tf/metallb/main.go:207\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}

These are my routes, from the ip route command:

default via 10.0.0.1 dev enp0s6
default via 10.0.0.1 dev enp0s6 proto dhcp src 10.0.0.222 metric 100
10.0.0.0/24 dev enp0s6 proto kernel scope link src 10.0.0.222
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
169.254.0.0/16 dev enp0s6 scope link
169.254.0.0/16 dev enp0s6 proto dhcp scope link src 10.0.0.222 metric 100

I tried to add a route with route add 10.104.48.220 gw 10.0.0.222, but still no luck…
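A note on that attempt, in case it helps others hitting this: 10.104.48.220 and 10.96.0.1 are ClusterIPs, which kube-proxy implements with iptables NAT rules rather than entries in the host routing table, so a static route to them cannot help. On Oracle Cloud Ubuntu images, "no route to host" when pods dial the API service IP is commonly caused by the image's default firewall rules in /etc/iptables/rules.v4, which end with a REJECT rule answering with icmp-host-prohibited (exactly what connect() reports as EHOSTUNREACH). A sketch of the fragment involved, assuming the stock Oracle ruleset; the ACCEPT lines are an assumed fix, inserted before the REJECT:

```
# /etc/iptables/rules.v4 (fragment, stock Oracle Cloud Ubuntu image)
# Assumed fix: accept pod-network and VCN traffic *before* the final REJECT.
-A INPUT -s 10.244.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/24 -j ACCEPT
# The stock ruleset ends with this rule, which produces
# "connect: no route to host" on rejected connections:
-A INPUT -j REJECT --reject-with icmp-host-prohibited
```

These images carry a similar REJECT in the FORWARD chain, so cross-node pod traffic may need equivalent ACCEPT rules there too; after editing, the rules can be reloaded with netfilter-persistent reload.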

Any suggestions are welcome.

Hey @ShanerWarner, did you manage to run the cluster on Oracle ARM servers? I am facing the very same issue. Any help would be appreciated.