@@ -25,40 +25,22 @@ the image you intend to use_ via
As an example, if you intend to use the `cuda-10.1` image then setting up
CUDA 10.1 or CUDA 10.2 should ensure that you have the correct graphics drivers.

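As a quick sanity check (assuming the NVIDIA driver is already installed on the
host), `nvidia-smi` should run successfully and report a driver whose supported
CUDA version is at least as new as the CUDA version of the image you plan to use:

```bash
# Prints the installed driver version and the highest CUDA version it supports.
$ nvidia-smi
```
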
- You will also need to install `nvidia-docker2` to enable GPU device access
- within Docker containers. This can be found at
+ You will also need to install the NVIDIA Container Toolkit to enable GPU device
+ access within Docker containers. This can be found at
[NVIDIA/nvidia-docker](https://github.com/NVIDIA/nvidia-docker).

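As a rough sketch only (assuming an Ubuntu host with Docker already installed;
the repository URLs and package name below may have changed, so follow the
instructions in the linked repository for your distribution), installation looks
roughly like this:

```bash
# Add NVIDIA's package repository for this distribution.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the toolkit and restart the Docker daemon so it picks up the change.
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
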
### Prebuilt images

Prebuilt images are available on Docker Hub under the name
- [anibali/pytorch](https://hub.docker.com/r/anibali/pytorch/). For example,
- you can pull the CUDA 10.1 version with:
+ [anibali/pytorch](https://hub.docker.com/r/anibali/pytorch/).
+
+ For example, you can pull an image with PyTorch 1.5.0 and CUDA 10.2 using:

```bash
- $ docker pull anibali/pytorch:cuda-10.1
+ $ docker pull anibali/pytorch:1.5.0-cuda10.2
```

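To verify that a pulled image works and can see the GPU, a quick check along
these lines (assuming the NVIDIA Container Toolkit is installed) should print
the installed PyTorch version followed by `True`:

```bash
$ docker run --rm --gpus=all anibali/pytorch:1.5.0-cuda10.2 \
    python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```
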
- The table below lists software versions for each of the currently supported
- Docker image tags available for `anibali/pytorch`.
-
- | Image tag   | CUDA | PyTorch |
- |-------------|------|---------|
- | `no-cuda`   | None | 1.4.0   |
- | `cuda-10.1` | 10.1 | 1.4.0   |
- | `cuda-9.2`  | 9.2  | 1.4.0   |
-
- The following images are also available, but are deprecated.
-
- | Image tag   | CUDA | PyTorch |
- |-------------|------|---------|
- | `cuda-10.0` | 10.0 | 1.2.0   |
- | `cuda-9.1`  | 9.1  | 0.4.0   |
- | `cuda-9.0`  | 9.0  | 1.0.0   |
- | `cuda-8.0`  | 8.0  | 1.0.0   |
- | `cuda-7.5`  | 7.5  | 0.3.0   |
-

### Usage
@@ -71,33 +53,26 @@ the following command:

```sh
docker run --rm -it --init \
-  --runtime=nvidia \
+  --gpus=all \
  --ipc=host \
  --user="$(id -u):$(id -g)" \
  --volume="$PWD:/app" \
-  -e NVIDIA_VISIBLE_DEVICES=0 \
  anibali/pytorch python3 main.py
```

Here's a description of the Docker command-line options shown above:

- * `--runtime=nvidia`: Required if using CUDA, optional otherwise. Passes the
-   graphics card from the host to the container.
+ * `--gpus=all`: Required if using CUDA, optional otherwise. Passes the
+   graphics cards from the host to the container. You can also use this option
+   to control more precisely which graphics cards are exposed (see the
+   documentation at https://github.com/NVIDIA/nvidia-docker and the example
+   after this list).
* `--ipc=host`: Required if using multiprocessing, as explained at
  https://github.com/pytorch/pytorch#docker-image.
* `--user="$(id -u):$(id -g)"`: Sets the user inside the container to match your
  user and group ID. Optional, but is useful for writing files with correct
  ownership.
* `--volume="$PWD:/app"`: Mounts the current working directory into the container.
  The default working directory inside the container is `/app`. Optional.
- * `-e NVIDIA_VISIBLE_DEVICES=0`: Sets an environment variable to restrict which
-   graphics cards are seen by programs running inside the container. Set to `all`
-   to enable all cards. Optional, defaults to all.
-
- You may wish to consider using [Docker Compose](https://docs.docker.com/compose/)
- to make running containers with many options easier. At the time of writing,
- only version 2.3 of Docker Compose configuration files supports the `runtime`
- option.
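
The `--gpus` option also accepts specific devices instead of `all`. As a sketch
(the device indices here are placeholders for your own machine; the extra
quoting around the comma-separated form follows the NVIDIA documentation linked
above):

```bash
# Expose only the first GPU to the container.
docker run --rm -it --init \
  --gpus device=0 \
  anibali/pytorch python3 main.py

# Expose GPUs 0 and 1 only.
docker run --rm -it --init \
  --gpus '"device=0,1"' \
  anibali/pytorch python3 main.py
```
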

#### Running graphical applications

@@ -121,7 +96,7 @@ example:

```sh
docker run --rm -it --init \
-  --runtime=nvidia \
+  --gpus=all \
  -e "DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  anibali/pytorch python3 -c "import tkinter; tkinter.Tk().mainloop()"
```
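
Note that the host's X server also has to accept connections from processes in
the container. One way to allow this for local connections (an extra step not
shown above, and one with security implications worth understanding first) is:

```bash
# Allow local, non-networked clients to connect to the X server.
$ xhost +local:
```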