Should I create a VLAN on the router of a Kubernetes cluster made of Raspberry Pi 4 8GB boards?

I am in the process of building my own K8s cluster out of Raspberry Pi 4 8GB and Raspberry Pi 3 boards, 5 of each. The cluster has a router at its door and an internal switch connecting them all. I could configure the cluster with VLANs … but should I? My reasoning is as follows: from outside the cluster, we only need access to the master, not the nodes, right? So I could make two VLANs, one for the master and one for the nodes. I could lock down access to the nodes so that only the master can reach them, and allow the nodes to reach the master. Would that make sense?
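As a rough sketch of what that lockdown could look like on a Linux-based router doing the inter-VLAN routing (the subnets below are made-up placeholders; substitute your own addressing plan, and note 6443 is only the default API server port):

```shell
# Hypothetical subnets: adjust to your own addressing plan.
MASTER_VLAN=10.0.10.0/24   # master VLAN (assumption)
NODE_VLAN=10.0.20.0/24     # worker-node VLAN (assumption)

# Master and nodes may talk to each other freely
iptables -A FORWARD -s "$MASTER_VLAN" -d "$NODE_VLAN" -j ACCEPT
iptables -A FORWARD -s "$NODE_VLAN" -d "$MASTER_VLAN" -j ACCEPT

# The outside world may reach only the Kubernetes API on the master
iptables -A FORWARD -d "$MASTER_VLAN" -p tcp --dport 6443 -j ACCEPT

# Everything else headed into either VLAN is dropped
iptables -A FORWARD -d "$MASTER_VLAN" -j DROP
iptables -A FORWARD -d "$NODE_VLAN" -j DROP
```

This is only the firewall side of the idea; you would still tag the switch ports for the two VLANs on your router/switch.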

The Master could be on its own address, and the Nodes on their own VLAN within the CIDR, opening only the Master's port to the outside world. Does that make sense?


I feel locking down the nodes would be a great idea, as nothing but the master should access the nodes. Can you post about any issues you run into? If it works out I may just copy your setup.

I will when I understand more lol. Learning! Built the base with two power bars and 10 USB wall chargers lol … waiting for the racks to assemble the Raspberry Pis …

First thing is to configure the router, but I was asking whether someone foresees any problems. Do clients access the nodes directly, or do they have to go through the master to get to the nodes? I see the cluster as a black box with only one door … is that a correct way to view it?

And yeah … I have to find some 6″ Cat 6 Ethernet cables lol

For clients, they will use the IP and ports of your master node. Your master node SHOULD have an ingress controller for easy URI setup.

client -> master
            -> node
            -> node
            -> node
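To make the ingress-controller suggestion concrete, a minimal ingress rule might look like this (the names, hostname, and the nginx ingress class are all placeholders; any ingress controller works):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress            # hypothetical name
spec:
  ingressClassName: nginx       # assumes the NGINX ingress controller is installed
  rules:
  - host: app.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service  # hypothetical backing Service
            port:
              number: 80
```

Clients then hit the single entry point, and the ingress controller routes requests by host and path to Services running on the worker nodes.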

I also have another concern … in physics, some files are huge … say 2TB … a file could be partitioned over the many small SSD drives, one on each node of the cluster, using NFS, if every node mounts every other node's SSD. They could then still have access to the whole file, no?

I am planning to add an SSD drive to each node (512GB) to get between 3TB and 5TB of total disk for the cluster, to be able to process very big files. Then have each node mount the SSDs of all the other nodes in the cluster, and let pods access a list of volumes? Does that make sense?
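The "partition a huge file across small disks" idea can be prototyped outside Kubernetes first. A minimal Python sketch (chunk size and paths are made up; a real run would write each chunk to a different node's mount) that splits a file into fixed-size pieces and shows they reassemble losslessly:

```python
from pathlib import Path

def split_file(src: Path, out_dir: Path, chunk_size: int) -> list[Path]:
    """Split src into fixed-size chunk files; returns the chunk paths in order."""
    out_dir.mkdir(parents=True, exist_ok=True)
    chunks = []
    with src.open("rb") as f:
        i = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            part = out_dir / f"{src.name}.part{i:04d}"
            part.write_bytes(data)
            chunks.append(part)
            i += 1
    return chunks

def join_chunks(chunks: list[Path], dest: Path) -> None:
    """Reassemble the original file by concatenating the chunks in order."""
    with dest.open("wb") as out:
        for part in chunks:
            out.write(part.read_bytes())
```

In the cluster version, `out_dir` would be a different NFS mount per chunk, and the analysis pods would read the chunk local to their node.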

I’ll have to check about the Ingress controller …

I don't think you can do this with k8s, but I may be wrong. Your workload will mount a persistent volume, from which you can then read the files. I have no idea if you can union nodes' SSDs, but I would look into it. Let me know if you get it working!

Thinking of a DaemonSet defining a named global storage on each node, i.e. a persistent volume on each node accessible under a node-specific name … OK, it defies high availability, but it allows fetching a huge file (2TB) and distributing its content across the 512GB SSD drives of each node … say 5 nodes … giving 480GB to the named storage on each. Keep 32GB of storage for pods running on the node that need to write data to disk, using normal Kubernetes storage definitions (either persistent or transient) and not the DaemonSet.

Then any pod's container could access the local named storage to read and analyse data from the DaemonSet, and write to either persistent or transient storage defined in the container.

Does that make any sense? Still reading … learning …
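One built-in way to express "a named chunk of local disk on each node" in Kubernetes is a local PersistentVolume pinned to its node with node affinity; you would create one per node. A sketch (node name, path, and storage class are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-data-node1        # one PV per node, named after the node (assumption)
spec:
  capacity:
    storage: 480Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/ssd/data         # placeholder path on the node's SSD
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1               # placeholder node name
```

A pod claiming this volume gets scheduled onto that node, which matches the "each node serves its own chunk" idea without needing the DaemonSet to manage the storage itself.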

Or I could use NFS?
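If NFS turns out to be simpler, a persistent volume backed by an NFS export could look like this (server address and export path are placeholders), with the advantage that many pods on different nodes can mount it read-write at once:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data                 # hypothetical name
spec:
  capacity:
    storage: 2Ti
  accessModes:
  - ReadWriteMany                # NFS allows many nodes to mount the same volume
  nfs:
    server: 10.0.20.11           # placeholder NFS server address
    path: /exports/data          # placeholder export path
```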

My knowledge of storage is limited, but I imagine a 2TB file would be difficult for a node to handle. What are you doing, and can the data be broken up?

LOL, in astrophysics some files are huge … in big data too … I modified the storage capacity: each Raspberry Pi 4 8GB will have its own 1TB SSD drive … still thinking about how to use it intelligently lol

Almost done building part of it!

Oh yeah, 2 of the RPi 4 8GBs will have a Coral TPU accelerator (TensorFlow) for good measure lol

I have another question … is it better to have one cluster with 5 Pi 4s and 5 Pi 3s, or two clusters, one with 5 Pi 4s and another with 5 Pi 3s?

I want to have both Pi 4s and Pi 3s to see the impact of a lack of resources on some nodes … hence the big cluster with a mixture of both … but I got to thinking it might be better to have two clusters instead of one … any thoughts?

You might misunderstand what Raspberry Pis were originally made for, and they might fail for your K8s workload because of it: they are usually way too weak. If you use an RPi 4 I would not say that, because it is pretty powerful, but anything lower will give you a very bad experience indeed. I tried to run a Zabbix monitoring server on an RPi 3 and it kept having problems with the MySQL DB because it was just too slow and the kernel was not optimized for a database workload like that.
K8s is built for big enterprises, not tiny little maker computers; those two sit at opposite extremes of the scale.
Also, you will notice that the SD cards start giving problems, as they are not made for that kind of read/write workload. My RPi 3 failed in a few days with read/write errors because of it. The recommendation is to use an SSD attached via USB as the main drive.
I know people attribute all sorts of things to RPis and they are amazing, but K8s is the workload I would least ever put on an RPi. Just my 2 cents.

Thanks for the heads-up about the Raspberry Pi 3 … I was looking at the SSD access speed improvements over the SD card … not much gain on the Pi 3, but a major difference on the Pi 4 … All Raspberry Pis in the cluster will have their own SSD … I wanted to include the Pi 3s to see how Kubernetes deals with less performant nodes … but the SSD access times are making me rethink that …

The goal of this cluster is not to run anything for long, but mainly to help developers test before sending things to the real cluster … a mini cluster of 4 would have done the job, but I do want to compute a few things with it … therefore 5 Pi 4s and 5 Pi 3s seemed OK … I'll probably swap the Pi 3s for Pi 4 8GBs …

Each Pi has its own SSD … right now all the Pi 4 8GBs have a 1TB SSD … thinking of splitting it, say 128GB for the system and the rest on NFS … to load huge files locally … I have physics in mind!
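The "system partition plus NFS share" split might look something like this on each Pi (the mount point, subnet, and server address below are assumptions, not a tested recipe):

```shell
# On the serving Pi: export the data partition (assumed mounted at /mnt/ssd)
# by adding a line like this to /etc/exports, then reloading the export table:
#   /mnt/ssd  10.0.20.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra

# On the other Pis: mount that export (placeholder server address and target dir)
sudo mount -t nfs 10.0.20.11:/mnt/ssd /mnt/node1-ssd
```

With every Pi both exporting its own data partition and mounting the others', each node sees the full set of chunks while reads of its own chunk stay local.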