Cluster information:
Kubernetes version: 1.29.3
Cloud being used: bare-metal
Installation method: k3s via their install script on https://get.k3s.io and pre-made config in /etc/rancher/k3s
Host OS: Debian, arm64
CNI and version: Flannel (built-in)
CRI and version: Containerd (built-in)
k3s Version in full: v1.29.3+k3s1 (8aecc26b)
Hello there!
This is more a question out of curiosity, and it is a bit confusing to me. Basically, my cluster is going to be 5 nodes - three at home and one remote on a VPS - all connected by a Headscale VPN.
So, this looks a little something like this:
internet <-> VPS (<-> vpn <->) node1,2,3 <-> homelab
The VPS is publicly reachable, whilst the other three nodes are only reachable from home.
And this is where I am a little confused:
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
clusterboi Ready control-plane,etcd,master 4d5h v1.29.3+k3s1 192.168.1.3 192.168.1.3 Armbian 24.5.0-trunk.468 bookworm 6.8.7-edge-rockchip-rk3588 containerd://1.7.11-k3s2
^^^^^^^^^^^^^^^^^^^^^^^^^
The VPN’s CIDR is 100.64.0.0/24, whilst my home network is 192.168.1.0/24. So, going by the terminology here, my node’s internal IP should actually be its VPN address, and the external IP its respective reachable endpoint (which is its homelab IP - or, in the case of my VPS, its actual public IP).
But I am not sure if I am right. I tried changing it once, but etcd complained that the node was not a member - when it clearly had been, right up until the IP change. So I reverted, and it is fine now.
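For reference, the change I attempted was along these lines in /etc/rancher/k3s/config.yaml on one of the home nodes (the 100.64.0.2 address is just a placeholder for that node's VPN address; yours will differ):

```yaml
# /etc/rancher/k3s/config.yaml
# node-ip: the address other cluster members should use to reach this node
# (here: the Headscale/Tailscale VPN address)
node-ip: 100.64.0.2
# node-external-ip: the externally advertised endpoint
# (here: the node's homelab LAN address; on the VPS it would be the public IP)
node-external-ip: 192.168.1.3
```

Both `node-ip` and `node-external-ip` are standard k3s options, settable either in config.yaml or as `--node-ip` / `--node-external-ip` flags.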
What are the correct values - and if I do need to change them, how do I do so without breaking etcd again?
Thanks and kind regards,
Ingwie