
ValueError: Unknown CUDA arch (8.0) or GPU not supported #53

Open
HALaser opened this issue Jan 27, 2021 · 1 comment
HALaser commented Jan 27, 2021

File "setup.py", line 70, in <module>
   cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
   return distutils.core.setup(**attrs)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/core.py", line 148, in setup
   dist.run_commands()
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/dist.py", line 966, in run_commands
   self.run_command(cmd)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/dist.py", line 985, in run_command
   cmd_obj.run()
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/command/build.py", line 135, in run
   self.run_command(cmd_name)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/cmd.py", line 313, in run_command
   self.distribution.run_command(command)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/dist.py", line 985, in run_command
   cmd_obj.run()
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
   _build_ext.run(self)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/command/build_ext.py", line 340, in run
   self.build_extensions()
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 372, in build_extensions
   build_ext.build_extensions(self)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
   self._build_extensions_serial()
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
   self.build_extension(ext)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
   _build_ext.build_extension(self, ext)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
   depends=ext.depends)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/distutils/ccompiler.py", line 574, in compile
   self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 288, in unix_wrap_compile
   "'-fPIC'"] + cflags + _get_cuda_arch_flags(cflags)
 File "/home/dgxadmin/miniconda3/envs/ZoomSloMo/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1027, in _get_cuda_arch_flags
   raise ValueError("Unknown CUDA arch ({}) or GPU not supported".format(arch))
ValueError: Unknown CUDA arch (8.0) or GPU not supported

The error comes from a missing arch entry for compute capability 8.0 in PyTorch 1.4.0 (installed from HighResSlowMo.ipynb).
See
/lib/python3.7/site-packages/torch/utils/cpp_extension.py -> def _get_cuda_arch_flags(cflags=None):

--> pass -gencode=arch=compute_XX,code=sm_XX via the compile args in setup.py
--> in my case: "-gencode=arch=compute_75,code=sm_75"
--> in /codes/models/modules/DCNv2/setup.py

import glob
import os

import torch
from torch.utils.cpp_extension import CppExtension, CUDAExtension, CUDA_HOME


def get_extensions():
    this_dir = os.path.dirname(os.path.abspath(__file__))
    extensions_dir = os.path.join(this_dir, "src")

    main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
    source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
    source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))

    sources = main_file + source_cpu
    extension = CppExtension
    extra_compile_args = {"cxx": []}
    define_macros = []

    if torch.cuda.is_available() and CUDA_HOME is not None:
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            # explicit gencode flag for the local GPU (compute capability 7.5 here)
            "-gencode=arch=compute_75,code=sm_75",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
        ]
    else:
        raise NotImplementedError("CUDA is not available")
    # ... rest of get_extensions() unchanged (builds and returns ext_modules)
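If you are unsure which compute capability your card has, PyTorch exposes it via torch.cuda.get_device_capability(). A minimal sketch of building the matching flag (the helper name gencode_flag is my own; the (major, minor) pair is passed in directly so it also runs without a GPU):

```python
def gencode_flag(major: int, minor: int) -> str:
    # Build the nvcc gencode flag for a given compute capability,
    # e.g. (7, 5) -> compute_75 / sm_75.
    # On a CUDA machine you could obtain the pair from
    # torch.cuda.get_device_capability(0).
    return f"-gencode=arch=compute_{major}{minor},code=sm_{major}{minor}"

print(gencode_flag(7, 5))  # -> -gencode=arch=compute_75,code=sm_75
```

Drop the resulting string into extra_compile_args["nvcc"] in place of the hard-coded one.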

It's a torch-version issue: compute capability 8.0 (Ampere, e.g. A100) is only recognized by newer PyTorch releases built against CUDA 11 (1.7+); PyTorch 1.4 does not know the arch at all.
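As an alternative to editing setup.py, torch.utils.cpp_extension also reads the arch list from the TORCH_CUDA_ARCH_LIST environment variable. Note this is only a workaround for archs your PyTorch version already knows; 1.4 still rejects "8.0", so on an Ampere card the real fix is upgrading PyTorch:

```shell
# Pin the arch list via the environment instead of patching setup.py.
# (Workaround sketch: only helps for archs PyTorch 1.4 already supports.)
export TORCH_CUDA_ARCH_LIST="7.5"
# then rebuild the extension:
# python setup.py build develop
```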

@yuan243212790
I have the same question. How did you solve it?
