I have a cluster with 6 nodes: 3 of them are publicly exposed and 3 are behind firewalls. I want to set up external-dns, but I am hitting an issue where the IP addresses listed are those of the on-prem nodes, which cannot be reached from outside.
root@kube ~# kubectl get ing -A
NAMESPACE              NAME                  CLASS    HOSTS                        ADDRESS       PORTS   AGE
kubernetes-dashboard   dashboard-ingress     <none>   kube-dashboard.example.com   100.64.0.12   80      37m
longhorn-system        longhorn-ingress      <none>   kube-longhorn.example.com    100.64.0.12   80      10h
vaultwarden            vaultwarden-ingress   <none>   vault.example.com            100.64.0.12   80      3h6m
This is because 100.64.0.12 is the first address in the external IP list of the ingress controller's service:
root@kube ~# kubectl get services -n caddy-system
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP                                     PORT(S)                      AGE
mycaddy-caddy-ingress-controller   LoadBalancer   10.43.237.202   100.64.0.12,100.64.0.13,100.64.0.6,<redacted>   80:32386/TCP,443:32125/TCP   16m
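As far as I can tell, the ingress ADDRESS column just mirrors what the controller publishes from that service into the standard Ingress load-balancer status; this is the field I'm checking (command included for reference):

root@kube ~# kubectl get ing dashboard-ingress -n kubernetes-dashboard -o jsonpath='{.status.loadBalancer.ingress[*].ip}'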
I tried patching the external IPs on the service with this script:
#!/bin/bash
set -e

# Collect the ExternalIP of every node labelled public=true as a JSON array
IPS=$(kubectl get nodes -l public=true \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
  | tr ' ' '\n' | jq -R . | jq -s .)

echo "Setting caddy external IPs to $IPS"

# Replace spec.externalIPs on the ingress controller service with that array
kubectl patch service mycaddy-caddy-ingress-controller -n caddy-system \
  --type=json -p="[{\"op\": \"replace\", \"path\": \"/spec/externalIPs\", \"value\": ${IPS}}]"
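(Side note: the public=true selector in the script is my own convention; I labelled the three reachable nodes beforehand, along the lines of:)

root@kube ~# kubectl label nodes <public-node-1> <public-node-2> <public-node-3> public=true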
But instead of replacing the existing list, the patch just appends the new addresses to it.
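For clarity, what I wanted the service spec to end up containing is only the public addresses, roughly like this (placeholder addresses below, since the real ones are redacted):

spec:
  externalIPs:
    - 203.0.113.10
    - 203.0.113.11
    - 203.0.113.12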
So I suppose my question is: how do I either prevent the private addresses from being listed as the service's external IPs, or make sure external-dns ignores those values and instead uses another list, e.g. one retrieved through a script?
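One thing I've been eyeing, but haven't tested, is pinning the targets per ingress with the external-dns target annotation instead of touching the service at all (placeholder addresses again):

metadata:
  annotations:
    external-dns.alpha.kubernetes.io/target: "203.0.113.10,203.0.113.11,203.0.113.12"

Maintaining that annotation on every ingress seems clunky, though, which is why I'm hoping there's a cleaner way.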
Cluster information:
Kubernetes version: v1.32.3+k3s1
Cloud being used: Bare metal
Installation method: k3s
Host OS: Rocky Linux 9.5
CNI and version: Flannel (WireGuard backend)
CRI and version: containerd