Running minikube in docker-in-docker

I’m trying to get minikube up and running inside a docker-in-docker image for integration-testing purposes. The general idea is to have a container image in GitLab containing minikube, which can be spun up and populated with kubectl from outside the container.

Currently, I am able to build the container based on docker:dind and spin up minikube. I can access minikube inside the container, but I cannot access it from the host (outside the container). It feels like I am pretty close and am just overlooking something obvious.

I have made a small example of what I am trying to achieve. In the example, root is running minikube, but this is just to keep it short. A brief step-by-step of what I do:

  1. Creating a docker image based on docker:dind, populated with minikube and kubectl
  2. Starting the container in privileged daemon mode, exposing a port and mounting a directory
  3. Starting minikube via docker exec and ensuring that kubectl works
  4. Copying all relevant kubeconfig files to the mounted dir to make them accessible from the host
  5. Ensuring that the kubeconfig works from inside the container
  6. Trying to use the kubeconfig from the host - this fails.

Any help and pointers are very much appreciated.

It should be possible to copy and paste this script into a file and make it run. If not, please let me know.

#!/usr/bin/env bash

# 1. Create docker image

# The port seems a little flaky. Sometimes it is 49154 and sometimes it is 32769.
KUBECTL_PORT=32769

cat > Dockerfile <<EOF
FROM docker:dind

RUN apk add bash
# If the docker-group is created, /var/run/docker.sock will be in that group....
RUN addgroup docker

RUN wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
RUN wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl -O /kubectl
RUN chmod +x /kubectl
RUN mv /kubectl /usr/local/bin

EXPOSE $KUBECTL_PORT

EOF


CONTAINER_TAG="minikube:dind"
CONTAINER_NAME="minikube_in_dind"

docker build -t $CONTAINER_TAG -f Dockerfile .

# 2. Start container

# minikube's API server ends up published on $KUBECTL_PORT inside the container
# (see the note about the flaky port above)
docker run -d --privileged --name $CONTAINER_NAME -p $KUBECTL_PORT:$KUBECTL_PORT -v $PWD/data:/data $CONTAINER_TAG

# 3. Start minikube

docker exec -i $CONTAINER_NAME /bin/bash <<EOF
# Wait for docker to be up and running...
while ! docker ps
do
    echo "Waiting for docker daemon"
    sleep 1
done

# start minikube allowing for remote access
# (see https://github.com/kubernetes/minikube/issues/14364)
minikube start --force --apiserver-ips=`hostname -i` --listen-address=0.0.0.0

# ensure kubectl works
kubectl get pods --all-namespaces

# 4. Copy necessary kubectl-files to /data to allow them
# to be used from the host

cp /root/.kube/config /data/.
cp /root/.minikube/ca.crt /data/.
cp /root/.minikube/profiles/minikube/client.crt /data/.
cp /root/.minikube/profiles/minikube/client.key /data/.
# Change paths
sed -i 's/\/root\/\.minikube\/profiles\/minikube\///g' /data/config
sed -i 's/\/root\/\.minikube\///g' /data/config
# Change server:
sed -i 's/192.168.49.2:8443/localhost:'"$KUBECTL_PORT"'/g' /data/config

# Need to make the files readable to access them from the host
chmod o+r /data/*

# 5. Test kubeconfig inside container

# Test the newly created kubeconfig
kubectl --kubeconfig /data/config get pods --all-namespaces

EOF

# 6. Test kubeconfig from host

# Now I would expect to be able to remotely access kubectl inside the container
# using the kubeconfig in data/config:

echo "Calling kubectl from host..."
kubectl --kubeconfig data/config get pods --all-namespaces

# But it just hangs :(

# To stop and remove the container:
# docker stop $CONTAINER_NAME ; docker rm $CONTAINER_NAME

It looks like you need to map the KUBECTL_PORT to 8443 when running the “docker run” command.

Hi Hasan.

Thanks for your reply.

I do set the kubectl port when running “docker run”, but I set it to whatever $KUBECTL_PORT is. When using “--listen-address=0.0.0.0” the ports get remapped: 8443 is most often mapped to 49154, but sometimes to 32769. I haven’t figured out the pattern yet.
In the line you are referring to, I substitute the internal minikube IP and port (8443) with localhost and the given (assumed) $KUBECTL_PORT.
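
For reference, the actual mapping can be read from the inner Docker daemon instead of being guessed; roughly (assuming minikube's default profile/container name):

# Ask the inner dockerd which host port it published for the API server
# (8443) of the minikube node container; prints e.g. "127.0.0.1:32769".
docker exec minikube_in_dind docker port minikube 8443/tcp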

This issue was quite challenging and took my whole day. I figured out what's happening, and I'm still investigating why it happens specifically in the DIND scenario. Meanwhile, here is a working solution that should help you make progress.

Once minikube is installed and started, you need to set up a port forward using socat as a background process, and then publish that port in the docker run command.

socat TCP-LISTEN:8443,fork TCP:127.0.0.1:32769 &

In the above command, the destination IP must be on the loopback interface, otherwise it won't work. Normally the connection would be established with dind-container-ip:32769, but due to an iptables issue (which I'm not sure about yet) that doesn't work.
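
A rough sketch of that forwarding step, run inside the DIND container after minikube start (hedged: socat is not part of docker:dind and has to be installed first, and the inner port 32769 is just whatever 8443 happened to be mapped to on this run):

# Install socat (docker:dind is Alpine-based, so apk is available).
apk add --no-cache socat

# Listen on 0.0.0.0:8443 and forward to the loopback port that the inner
# dockerd published for the minikube API server.
APISERVER_PORT=32769   # or read it with: docker port minikube 8443/tcp
socat TCP-LISTEN:8443,fork TCP:127.0.0.1:${APISERVER_PORT} &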

docker run -d --privileged --name $CONTAINER_NAME -p $KUBECTL_PORT:8443 -v $PWD/data:/data $CONTAINER_TAG

The reason loopback interface IPs work is docker-proxy. Here is the doc
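
To confirm that docker-proxy is the process answering on that loopback port, a quick check (relying on the busybox netstat that ships in docker:dind) would be:

# Inside the DIND container: the inner dockerd spawns one docker-proxy per
# published port; it listens on 127.0.0.1:<port> and forwards into the
# minikube node container, which is why the loopback destination works.
docker exec minikube_in_dind sh -c 'netstat -tlnp | grep docker-proxy'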

You also need to update the server IP/port in the kubeconfig to the socat LISTEN port.

sed -i 's/192.168.49.2:8443/127.0.0.1:8443/g' /data/config

Day 2 of investigating this issue. This scenario is more like the Inception movie, I would say: minikube-dind running in DIND, which is running in the host Docker. :smile:

The issue I had with my setup is that the Docker network CIDR of minikube-dind was overlapping with the host Docker network (highlighted below; see the inspection commands after the list).

  1. Containers in the host Docker are assigned IPs from CIDR 172.17.0.0/16
  2. In the DIND container running in the host Docker:
    a. eth0 gets an IP from 172.17.0.0/16
    b. the docker0 interface is created with CIDR 172.18.0.0/16
    c. a br-xxxyyxx bridge interface is created by minikube with CIDR 192.168.49.0/24
  3. In the minikube-dind container running inside the DIND container:
    a. eth0 gets an IP from the above bridge interface, i.e. 192.168.49.0/24
    b. the docker0 interface is created with CIDR 172.17.0.0/16
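
To check these CIDRs on a given setup, something along these lines should work (container and network names follow the script in the original post and minikube's default profile name):

# On the host: subnet of the default bridge the DIND container is attached to.
docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'

# Inside the DIND container: the inner default bridge and the bridge network
# created by the minikube docker driver.
docker exec minikube_in_dind docker network inspect bridge \
    -f '{{(index .IPAM.Config 0).Subnet}}'
docker exec minikube_in_dind docker network inspect minikube \
    -f '{{(index .IPAM.Config 0).Subnet}}'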

Any packet sent to the KUBECTL_PORT exposed in the docker run command has a source IP of 172.17.0.1, which is my host Docker network address, and a destination IP of 172.17.0.2, which is the DIND container's IP. The packet capture below is from the DIND container:

45fb816f2bc9:~# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
20:06:52.722038 IP 172.17.0.1.61578 > 45fb816f2bc9.32769: Flags [S], seq 3782279993, win 65495, options [mss 65495,sackOK,TS val 3324249238 ecr 0,nop,wscale 7], length 0
45fb816f2bc9:~#
45fb816f2bc9:~# ping 45fb816f2bc9
PING 45fb816f2bc9 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.062 ms

At the same time, on the minikube-dind container, the source IP of the packet is still referenced as 172.17.0.1, but the destination IP has been updated to 192.168.49.2, which is the minikube-dind container's IP. This rewrite happens because of the DNAT rule on the DIND container (listed further down).

root@minikube:/# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:06:52.722065 IP 172.17.0.1.61578 > minikube.8443: Flags [S], seq 3782279993, win 65495, options [mss 65495,sackOK,TS val 3324249238 ecr 0,nop,wscale 7], length 0
root@minikube:/# 
root@minikube:/# ping minikube
PING minikube (192.168.49.2) 56(84) bytes of data.
64 bytes from minikube (192.168.49.2): icmp_seq=1 ttl=64 time=0.042 ms

Existing DNAT iptables rule in the DIND container:
-A DOCKER ! -i br-87ba89cef810 -p tcp -m tcp --dport 32769 -j DNAT --to-destination 192.168.49.2:8443
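
It can be dumped from the host with something like this (container name taken from the script in the original post):

# List the NAT rules of the inner Docker daemon's DOCKER chain; the DNAT
# rewriting the published port to 192.168.49.2:8443 shows up here.
docker exec minikube_in_dind iptables -t nat -S DOCKER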

As mentioned earlier, due to the overlapping CIDRs, the response that minikube-dind sends is routed to the internal 172.17.0.0/16 network instead of back to the host Docker network. To avoid this, we need to perform NAT on the source IP. By adding the following rule, the packet is routed back properly from the minikube-dind container.

Adding an SNAT rule in the DIND container on the outgoing bridge interface:
-A POSTROUTING -s 172.17.0.0/16 -o br-87ba89cef810 -j MASQUERADE
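
Applied from the host, that would look roughly like this (the bridge name br-87ba89cef810 is specific to my setup and has to be looked up first, e.g. with ip link inside the DIND container):

# Append the SNAT/MASQUERADE rule inside the DIND container so that replies
# to 172.17.0.0/16 sources are rewritten when they leave via the minikube bridge.
docker exec minikube_in_dind iptables -t nat -A POSTROUTING \
    -s 172.17.0.0/16 -o br-87ba89cef810 -j MASQUERADE

With the rule in place, the captures below show the SYN-ACK making it back out: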
45fb816f2bc9:~# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
20:26:02.400807 IP 172.17.0.1.55964 > 45fb816f2bc9.32769: Flags [S], seq 1384876156, win 65495, options [mss 65495,sackOK,TS val 3325398874 ecr 0,nop,wscale 7], length 0
20:26:02.401005 IP 45fb816f2bc9.32769 > 172.17.0.1.55964: Flags [S.], seq 3277585129, ack 1384876157, win 65160, options [mss 1460,sackOK,TS val 3194432896 ecr 3325398874,nop,wscale 7], length 0

root@minikube:/# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:26:02.400837 IP host.minikube.internal.55964 > minikube.8443: Flags [S], seq 1384876156, win 65495, options [mss 65495,sackOK,TS val 3325398874 ecr 0,nop,wscale 7], length 0
20:26:02.400981 IP minikube.8443 > host.minikube.internal.55964: Flags [S.], seq 3277585129, ack 1384876157, win 65160, options [mss 1460,sackOK,TS val 3194432896 ecr 3325398874,nop,wscale 7], length 0
root@minikube:/#    
root@minikube:/# ping host.minikube.internal
PING host.minikube.internal (192.168.49.1) 56(84) bytes of data.
64 bytes from host.minikube.internal (192.168.49.1): icmp_seq=1 ttl=64 time=0.073 ms

The conclusion here would be: either follow the solution mentioned in the post above, using socat with destination IP 127.0.0.1 so that docker-proxy does the work for us, or append the above-mentioned SNAT iptables rule in the DIND container so that iptables performs the translation.

Interesting issue BTW :v:

Sorry about the slow feedback. Christmas occurred. :smile:

I just want to thank you for the huge effort you have been putting into this. Thanks a lot.

I think the right thing to do would probably be to set up the proper iptables rules, but the easy fix is using socat. I have tested the socat solution, and it works like a charm :slight_smile:

Again, thanks a lot.
