I have a single-node cluster that has 2 pods for now, but it may grow up to 5 pods. As part of a hard business requirement, I have to run each of my apps in its own dedicated pod (i.e. not allowed to stuff multiple containers into the same pod, or multiple apps into the same container/image). I need to put in place a high-speed, low-latency means of inter-process communication between 2 pods initially, but might need to support up to 5 pods each talking directly to each other. The idea is to have one of the pods act as a “re-assembler” that takes data from all the other pods, plus its own data, and re-assembles it a specific way into a stream.
What methods could I use to provide something faster than IP-based connectivity between these pods, given that they’re all guaranteed to live on the same node? I’ve tried normal packet-based (TCP/IP) connectivity, and it’s just too slow for the volume of data I have to process, given the hard requirements above.
Thoughts so far:
- Create a pair of named pipes (i.e. mkfifo) on the bare-metal OS, expose them as a volume mount to the two pods, and let both talk to each other via the pipes. Should be fast, and not too hard to synchronize. Becomes ugly with 5 pods though, as the number of pipe pairs grows as n(n-1)/2 == (5)(4)/2 == 10, and I’d have to figure out a sane way for pods to know which pipes to use for read versus write.
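  For the named-pipe idea, the Kubernetes wiring I have in mind is something like a hostPath volume shared by the pods, with the FIFOs created on the host inside that directory. A rough sketch (all names, images, and paths here are illustrative, not my actual setup):

  ```yaml
  # On the host: mkdir -p /var/run/my-ipc && mkfifo /var/run/my-ipc/p1-to-reasm
  apiVersion: v1
  kind: Pod
  metadata:
    name: producer-1          # hypothetical pod name
  spec:
    containers:
    - name: app
      image: my-producer:latest   # hypothetical image
      volumeMounts:
      - name: ipc-dir
        mountPath: /ipc           # pipes appear under /ipc inside the container
    volumes:
    - name: ipc-dir
      hostPath:
        path: /var/run/my-ipc     # same host dir mounted into every pod
        type: DirectoryOrCreate
  ```

  The re-assembler pod would mount the same hostPath volume and open the FIFOs for reading; a naming convention like `<writer>-to-<reader>` in that directory is one way to solve the “which pipe is whose” problem.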
- Shared memory?
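  On the shared-memory idea: since the pods are guaranteed to share a node, one way I could prototype it is a file-backed mmap over a file in a directory all pods mount (e.g. the same hostPath volume as above). A minimal Python sketch, assuming a hypothetical `/shared` mount point; note it has no locking at all, so I’d still need a synchronization scheme on top:

  ```python
  import mmap
  import os
  import struct

  SHM_PATH = "/shared/region"  # hypothetical path on a volume mounted by all pods
  SIZE = 1 << 20               # 1 MiB region

  def create_region(path: str, size: int) -> mmap.mmap:
      """Create (or open) a file-backed shared memory region of `size` bytes."""
      fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
      os.ftruncate(fd, size)
      mm = mmap.mmap(fd, size)
      os.close(fd)  # the mapping keeps the underlying file open
      return mm

  def write_msg(mm: mmap.mmap, payload: bytes) -> None:
      """Producer pod: write one length-prefixed message at offset 0."""
      mm[0:4] = struct.pack("<I", len(payload))
      mm[4:4 + len(payload)] = payload

  def read_msg(mm: mmap.mmap) -> bytes:
      """Re-assembler pod: read the message back out of the region."""
      (n,) = struct.unpack("<I", mm[0:4])
      return bytes(mm[4:4 + n])
  ```

  In practice each producer would get its own region (or its own slot in one region), and something like a semaphore or an eventfd would signal the re-assembler that new data is ready; this sketch only shows the raw data path.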
- Deploy Redis or Memcached, but I don’t know how the performance/throughput would scale compared to pipes or shared memory.
- Some other mechanism I haven’t considered?
Kubernetes version: 1.17.2
Cloud being used: bare-metal (kubeadm)
Installation method: apt-get
Host OS: Ubuntu Server 18.04 LTS x86_64
CNI and version: Flannel 0.3.1