Hi guys, I would like to create a TUN/TAP interface inside a pod. The main objective is to have two pods communicate via a TUN interface (the apps running there expect to communicate over a TUN interface). I have the following Dockerfile as an example to try this:
Dockerfile
# Define the base image used by the builder stage
FROM ubuntu:focal as builder
# Allow a non-interactive installation
ENV DEBIAN_FRONTEND=noninteractive
# Versioning
LABEL Name=ueransim Version=0.0.2
# Install dependencies
RUN apt-get update && apt-get install -y wget \
    make \
    gcc \
    g++ \
    libsctp-dev \
    lksctp-tools \
    tcpdump \
    git \
    nano \
    iproute2 \
    iptables \
    net-tools \
    ifupdown \
    iputils-ping \
    libssl-dev
# Build and install CMake from source
RUN mkdir ~/temp && \
    cd ~/temp && \
    wget https://cmake.org/files/v3.20/cmake-3.20.0.tar.gz && \
    tar -xzvf cmake-3.20.0.tar.gz && \
    cd cmake-3.20.0/ && \
    ./bootstrap && \
    make -j `nproc` && \
    make install && ldconfig && \
    cmake --version
# Clone and build the UERANSIM repository
RUN git clone https://github.com/aligungr/UERANSIM && \
    cd UERANSIM && \
    make -j `nproc`
# Build the final image
FROM ubuntu:focal
# Allow a non-interactive installation
ENV DEBIAN_FRONTEND=noninteractive
# Keep the runtime dependencies in our final image
RUN apt-get update && apt-get install -y --no-install-recommends \
    sudo \
    libsctp-dev \
    lksctp-tools \
    nano \
    netbase \
    iproute2 \
    iptables \
    net-tools \
    ifupdown \
    iputils-ping \
    iperf3 \
    libssl-dev \
    systemd \
    pkg-config \
    tcpdump \
    openssh-server && apt-get autoremove -y && apt-get autoclean
# Copy the compiled artifacts from the builder into the final image
COPY --from=builder /UERANSIM/build /UERANSIM/build
COPY --from=builder /UERANSIM/config /UERANSIM/config
# Create the SSH user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1000 ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd && adduser ubuntu sudo
RUN mkdir /var/run/sshd
EXPOSE 22
# Move the UERANSIM directories to /home/ubuntu
RUN mv /UERANSIM /home/ubuntu
# Finish by running the container with the SSH service in the foreground
CMD ["/usr/sbin/sshd", "-D"]
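For what it's worth, the image can be sanity-checked outside Kubernetes, assuming Docker is available and /dev/net/tun exists on the host (the image tag here is just an example):

docker build -t ueransim:0.0.2 .
# Run the one-off command that the entrypoint would run, with the standard flags
# for TUN access: the NET_ADMIN capability plus the host's TUN clone device
docker run --rm -it --cap-add=NET_ADMIN --device /dev/net/tun \
  ueransim:0.0.2 ip tuntap add name demotun mode tun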
After that, I deployed the image as a pod using helm with the following manifest:
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "netnode.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- include "netnode.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "netnode.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "netnode.selectorLabels" . | nindent 8 }}
    spec:
      {{- if .Values.nodeSelector }}
      ## To specify which node the pod will be assigned to
      nodeSelector: {{- include "common.tplvalues.render" ( dict "value" .Values.nodeSelector "context" $ ) | nindent 8 }}
      {{- end }}
      imagePullSecrets:
        {{- toYaml .Values.image.pullSecrets | nindent 8 }}
      containers:
        - name: ue
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            privileged: true
            capabilities:
              add: ["NET_ADMIN"]
          command: ["/bin/bash"]
          args: ["/entrypoint.sh"]
          ports:
            - name: mgm-connection
              containerPort: 22
              protocol: TCP
            - name: gnb-ue
              containerPort: {{ .Values.service.port }}
              protocol: UDP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: config
              mountPath: /home/ubuntu/UERANSIM/config/ue.yaml
              subPath: "ue.yaml"
            - name: config
              mountPath: /entrypoint.sh
              subPath: "entrypoint.sh"
      volumes:
        - name: config
          configMap:
            name: {{ include "netnode.fullname" . }}-configmap
            defaultMode: 0777
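(A side note on this manifest: since privileged: true already grants every capability, the NET_ADMIN entry is redundant as written. An alternative I have read about, assuming /dev/net/tun exists on the node, is to drop privileged, keep the NET_ADMIN capability, and mount the device from the host via hostPath, roughly like this:)

          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice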
The “entrypoint.sh” file consists of this:
entrypoint.sh
#!/bin/bash
# Create the TUN interface and bring it up
ip tuntap add name demotun mode tun
ip link set demotun up
echo "Setting IP to device"
ip addr add {{ .Values.ip_netbase }} dev demotun
sysctl -w net.ipv4.ip_forward=1
# Keep the container alive: command/args in the manifest override the image CMD,
# so sshd has to be started here
exec /usr/sbin/sshd -D
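One thing I am not sure about is whether /dev/net/tun even exists inside the container; if it does not, I assume the script would first have to create the device node itself, along these lines:

# Create the TUN clone device if the container image lacks it
if [ ! -c /dev/net/tun ]; then
    mkdir -p /dev/net
    # TUN/TAP clone device: character device, major 10, minor 200
    mknod /dev/net/tun c 10 200
    chmod 600 /dev/net/tun
fi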
After launching the pod with helm, “kubectl logs” shows the following:
Logs for pod
And the following pod description:
Describe output
I have already tried the following actions, without success:
- Checked whether a TUN device already exists on the workers, but they only have the default k8s configuration (see the check commands after this list):
Similar output for worker2 (2-node cluster)
- If I don't run the commands at install time, the TUN device doesn't work afterwards (executing the bash script manually after the pod was created fails).
- Checked whether my workers' resources were exhausted, but no other pods were running in the cluster.
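These are the kinds of checks I mean (the pod name is illustrative):

# On each worker node:
ls -l /dev/net/tun
lsmod | grep tun
# Inside the pod:
kubectl exec -it netnode-0 -- ls -l /dev/net/tun
kubectl exec -it netnode-0 -- ip link show demotun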
I hope you can help me; I would be infinitely grateful.