Kubernetes: Basic Cluster Setup on Raspberry Pi Issues

Hello everyone. I apologize in advance if this is trivial, as I am an undergrad and quite new to Kubernetes, Raspberry Pis, and networking in general. I only mention that I am working on Raspberry Pis for reference, as I do not think the issue I am having is related to the hardware.

The overall goal is to set up a k3s cluster where the server is hosted on one Raspberry Pi and all the other Pis are nodes in the cluster. I am generally trying to follow this tutorial.

Here is relevant information to my networking setup before installing k3s:

  • 2 Raspberry Pis with hostnames red and red001 (I intend for red001 to be a node of red’s server)
  • I am using red as a router for red001, i.e. I have set up red as a wired access point: red is connected to the internet directly, and red001 reaches the internet through red. Specifically, red is connected to the internet via eth1 and to a network switch (which red001 is plugged into) via eth0. I have enabled IPv4 forwarding and configured iptables to route traffic between eth0 and eth1.
  • The private (eth0) network has the IP range 169.254.0.0 - 169.254.255.255 (i.e. netmask 255.255.0.0)
  • red has IP 169.254.150.87
  • red001 has static IP 169.254.10.1
  • Because red001 uses a static IP, I have disabled the DHCP client daemon on red001.
  • I have verified a working connection by using curl and ping commands on red001.
  • Both red and red001 are in each other’s /etc/hosts (i.e. I can ssh from red into red001 and vice versa)
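For reference, the forwarding setup on red looks roughly like this. This is a sketch from memory, not an exact copy of my config, so the rules on my machine may differ slightly:

```shell
# Enable IPv4 forwarding (my actual setup persists this in /etc/sysctl.conf)
sudo sysctl -w net.ipv4.ip_forward=1

# NAT traffic from the private eth0 network out through eth1, and
# allow forwarding in both directions
sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```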

So, for actually setting up k3s, I have been following the process described in the link above. I completed all the steps (with the exception of the k3sup update step), but I am having trouble connecting red001 to the server.

Specifically, when I try to run

sudo k3s agent --server ${K3S_URL} --token ${K3S_TOKEN}

on red001 with

K3S_URL='https://red:6443'

and K3S_TOKEN set to the token found in /var/lib/rancher/k3s/server/node-token on red, I am met with the following output:

INFO[2020-04-12T14:43:40.167158119-04:00] Starting k3s agent v1.17.4+k3s1 (3eee8ac3)   
INFO[2020-04-12T14:43:40.168078475-04:00] module overlay was already loaded            
INFO[2020-04-12T14:43:40.168178317-04:00] module nf_conntrack was already loaded       
INFO[2020-04-12T14:43:40.168234827-04:00] module br_netfilter was already loaded       
INFO[2020-04-12T14:43:40.169668407-04:00] Running load balancer 127.0.0.1:43169 -> [red:6443] 
ERRO[2020-04-12T14:43:40.858464467-04:00] unable to select an IP from default routes.  
ERRO[2020-04-12T14:43:46.124906066-04:00] unable to select an IP from default routes.  
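For completeness, the exact sequence I ran looks like this (a sketch with the token value redacted; the node-token path is the one from the k3s docs):

```shell
# On red: read the join token generated by the server
sudo cat /var/lib/rancher/k3s/server/node-token

# On red001: note there must be no spaces around '=' in shell
# variable assignments
K3S_URL='https://red:6443'
K3S_TOKEN='<token from red>'   # redacted
sudo k3s agent --server "${K3S_URL}" --token "${K3S_TOKEN}"
```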

My initial thought was that there must be a problem with the routes I set up on red001, so I checked the configuration using route -n and got this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.150.87  0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

But this makes sense to me. The first row says that red’s IP address is the default gateway, which is what we want, right? The second row says red001 can reach any device on the 169.254.0.0/16 network directly, which also makes sense, but I don’t feel it’s quite relevant here.
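Just to sanity-check my reading of that second row, I confirmed with a bit of shell arithmetic that red’s IP really does fall inside the eth0 subnet:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net=$(ip_to_int 169.254.0.0)     # destination from the routing table
mask=$(ip_to_int 255.255.0.0)    # genmask from the routing table
red=$(ip_to_int 169.254.150.87)  # red's IP

# Masking red's IP should yield the network address
if [ $(( red & mask )) -eq "$net" ]; then
  echo "red is on the eth0 network"
fi
```

This prints "red is on the eth0 network", so the route itself looks fine.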

I’m wondering if maybe the issue is caused by the way I set up iptables, but I don’t see how I could have gotten this far if something were wrong there. Surely, if I had misconfigured something, running curl -sfL https://get.k3s.io | sh - would have failed, but that step completed without issue.

I’ve been trying to figure this issue out for a while. I’ve explored several different options, but none have worked. I suppose that because this error can be raised in a variety of circumstances, trying to pinpoint the exact cause is a bit tricky.
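One diagnostic that seemed relevant (since the error mentions default routes) was asking the kernel which source address it would pick for outbound traffic. On red001 I ran something like:

```shell
# Ask the kernel which route (and source address) it would use to
# reach an outside host; the 'src' field in the output is the IP a
# program gets when it binds without specifying an address
ip route get 8.8.8.8

# List all IPv4 addresses on eth0 for comparison
ip -4 addr show dev eth0
```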

Does anyone have any ideas on what is wrong? Did my iptables routing on red maybe mess something up? Thank you in advance for all your help!