This recipe lets you run Docker within Docker.
There is only one requirement: your Docker version should support the
--privileged flag.
Build the image:
docker build -t dind .
Run Docker-in-Docker and get a shell where you can play, with docker daemon logs going to stdout:
docker run --privileged -t -i dind
Run Docker-in-Docker and get a shell where you can play, but with docker daemon logs going
into /var/log/docker.log:
docker run --privileged -t -i -e LOG=file dind
Run Docker-in-Docker and expose the inside Docker to the outside world:
docker run --privileged -d -p 4444 -e PORT=4444 dind
Note: when started with the PORT environment variable, the image will just start
the Docker daemon and expose it over said port. When started without the
PORT environment variable, the image will run the Docker daemon in the
background and execute a shell for you to play with.
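For instance, once the inner daemon is exposed this way, you can drive it from an outside Docker client; the container ID and host port below are placeholders, so check docker port for the actual mapping:
docker port <container_id> 4444
docker -H tcp://localhost:<host_port> version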
You can use the DOCKER_DAEMON_ARGS environment variable to configure the
docker daemon with any extra options:
docker run --privileged -d -e DOCKER_DAEMON_ARGS="-D" dind
If you get a weird permission message, check the output of dmesg: it could
be caused by AppArmor. In that case, try again, adding an extra flag to
kick AppArmor out of the equation:
docker run --privileged --lxc-conf="lxc.aa_profile=unconfined" -t -i dind
If you get the warning:
WARNING: the 'devices' cgroup should be in its own hierarchy.
when starting up dind, you can get around this by shutting down docker and running:
# /etc/init.d/lxc stop
# umount /sys/fs/cgroup/
# mount -t cgroup devices 1 /sys/fs/cgroup
If the unmount fails, you can find out the proper mount-point with:
$ cat /proc/mounts | grep cgroup
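Once the devices cgroup has been remounted, start Docker again before retrying dind; the exact command depends on your init system (the path below is only an example for a sysvinit-style setup):
# /etc/init.d/docker start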
The main trick is to have the --privileged flag. Then, there are a few things
to take care of:
- cgroups pseudo-filesystems have to be mounted, and they have to be mounted with the same hierarchies as in the parent environment; this is done by a wrapper script, which is set up to run by default (see the sketch below);
- /var/lib/docker cannot be on AUFS, so we make it a volume.
That's it.
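To give an idea of what the wrapper does for cgroups, here is a simplified sketch (not the verbatim wrapdocker script) that assumes /proc/1/cgroup reflects the hierarchies of the parent environment:
#!/bin/sh
# Mount each cgroup subsystem inside the container, grouped the same way as outside.
CGROUP=/sys/fs/cgroup
mountpoint -q "$CGROUP" || mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup "$CGROUP"
# Each line of /proc/1/cgroup looks like "N:subsys1,subsys2:/path"; field 2 names the hierarchy.
cut -d: -f2 /proc/1/cgroup | sort -u | while read SUBSYS; do
    mkdir -p "$CGROUP/$SUBSYS"
    mountpoint -q "$CGROUP/$SUBSYS" || mount -n -t cgroup -o "$SUBSYS" cgroup "$CGROUP/$SUBSYS"
done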
Since AUFS cannot use an AUFS mount as a branch, we have to use a
volume. Therefore, all inner Docker data (images, containers, etc.)
will be in the volume. Remember: volumes are not cleaned up when you
docker rm, so if you wonder where your disk space went after nesting
10 Dockers within each other, look no further :-)
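If your Docker version supports it, you can ask docker rm to delete a container's volumes along with it (the -v flag is an assumption here; check docker rm --help on your version):
docker rm -v <container_id>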
As for versions: outside, it will use your installed version.
Inside, the Dockerfile will retrieve the latest docker binary from
https://get.docker.io/; so if you want to include your own docker
build, you will have to edit it. If you want to always use your local
version, you could change the ADD line to be e.g.:
ADD /usr/bin/docker /usr/local/bin/docker
Yes, you can nest this further (Docker-in-Docker-in-Docker, and so on). Note, however, that there seems to be a weird FD leakage issue.
To work around it, the wrapdocker script carefully closes all the
file descriptors inherited from the parent Docker and lxc-start
(except stdio). I'm mentioning this in case you were relying on
those inherited file descriptors, or if you're trying to repeat
the experiment at home.
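For the curious, that cleanup can look like this simplified sketch (not the verbatim wrapdocker code):
# Close every inherited file descriptor except stdin, stdout, and stderr.
cd /proc/self/fd
for FD in *; do
    case "$FD" in
        [012]) ;;                 # keep stdio
        *) eval exec "$FD>&-" ;;  # close everything else
    esac
done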
Also, when you exit a nested Docker, this will happen:
root@975423921ac5:/# exit
root@6b2ae8bf2f10:/# exit
root@419a67dfdf27:/# exit
root@bc9f450caf22:/# exit
jpetazzo@tarrasque:~/Work/DOTCLOUD/dind$
At that point, you should blast Hans Zimmer's Dream Is Collapsing on your loudspeakers while twirling a spinning top.
