Excuse my complete ignorance; I'm quite new to the current NVIDIA containers.
I need to put my own scripts inside a CUDA-enabled container, meaning a dedicated conda environment, my own volume mappings, my own settings, and my own entrypoint: not just running some binary with `docker run`.
This used to be possible with a Dockerfile starting with `FROM nvidia/cuda:10.2-runtime-ubuntu20.04`, for example. But that now seems deprecated, and the containers appear to ship via nvidia-container-toolkit.
Is it still possible to build a CUDA-enabled container from my own Dockerfile? Or would I have to use nvidia-container-toolkit and add my files to the containers it creates?
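For context, a minimal sketch of what such a custom Dockerfile might look like against a newer base image. The tag `nvidia/cuda:12.2.0-runtime-ubuntu22.04` and the Miniconda installer path are assumptions; check the currently published tags and installer URL before relying on them:

```dockerfile
# Hypothetical sketch: base-image tags rotate over time, so verify the tag
# against the nvidia/cuda repository before building.
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04

# Install a minimal conda (installer URL is an assumption; verify it).
RUN apt-get update && apt-get install -y --no-install-recommends wget \
    && rm -rf /var/lib/apt/lists/*
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
        -O /tmp/miniconda.sh \
    && bash /tmp/miniconda.sh -b -p /opt/conda \
    && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH

# Copy your own scripts and set your own entrypoint
# (scripts/ and run.sh are placeholder names).
COPY scripts/ /opt/app/
ENTRYPOINT ["/opt/app/run.sh"]
```

Built with `docker build -t myimage .` and run with GPU access via `docker run --gpus all -v /data:/data myimage`; the `--gpus` flag requires nvidia-container-toolkit on the host, so the toolkit complements a Dockerfile build rather than replacing it.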