
[Bug] tvm.cuda().exist return false while torch.cuda.is_available() return true #17558

Open
vfdff opened this issue Dec 14, 2024 · 1 comment
Labels: needs-triage, type: bug

vfdff commented Dec 14, 2024

As the title says, tvm.cuda().exist returns an unexpected value, even though torch.cuda.is_available() returns True on the same machine.

Expected behavior

tvm.cuda().exist returns True

Actual behavior

tvm.cuda().exist returns False
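
The mismatch can be checked from the shell. This is a sketch of my own (the exact commands are not in the report; only the resulting values are, from the title):

```shell
# Both packages are installed in the same venv; torch sees the GPU,
# but the TVM wheel does not.
python3 -c "import torch; print(torch.cuda.is_available())"   # reported True in this environment
python3 -c "import tvm; print(tvm.cuda().exist)"              # reported False in this environment
```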

Environment

a) GPU

(venv) root@d00469708debug6-7c75445547-frh8j:/usr1/project/zhongyunde/source/osdi22_artifact/artifacts/roller# nvidia-smi   
Wed Dec 11 19:48:21 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L40S                    Off | 00000000:9A:00.0 Off |                    0 |
| N/A   27C    P8              34W / 350W |     13MiB / 46068MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|

b) tvm: apache-tvm 0.11.1
c) mlc-ai: mlc-ai-nightly-cu122-0.1

Steps to reproduce

pip install apache-tvm
python3 -m pip install --pre mlc-ai-nightly-cu122   (installing this did not fix it)
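
One likely cause, stated here as an assumption rather than something confirmed in the thread: the apache-tvm wheel on PyPI is built without CUDA support, so tvm.cuda().exist is False regardless of the driver. A way to inspect which cmake options the imported TVM library was actually compiled with:

```shell
# tvm.support.libinfo() returns the build-time cmake flags of the
# loaded libtvm; a CPU-only wheel reports USE_CUDA as OFF.
python3 -c "import tvm; print(tvm.support.libinfo().get('USE_CUDA'))"
```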

Triage


  • needs-triage
vfdff commented Dec 23, 2024

I tried to build TVM from source on the GPU machine (83):
(venv3) root@d00469708debug6-7c75445547-frh8j:/usr1/project/zhongyunde/source/tvm/cmake# git diff .

diff --git a/cmake/config.cmake b/cmake/config.cmake
index 791751a..a2e293a 100644
--- a/cmake/config.cmake
+++ b/cmake/config.cmake
@@ -46,7 +46,7 @@
 # - ON: enable CUDA with cmake's auto search
 # - OFF: disable CUDA
 # - /path/to/cuda: use specific path to cuda toolkit
-set(USE_CUDA OFF)
+set(USE_CUDA ON)
 
 # Whether to enable NCCL support:
 # - ON: enable NCCL with cmake's auto search
@@ -158,7 +158,8 @@ set(USE_PROFILER ON)
 # - OFF: disable llvm, note this will disable CPU codegen
 #        which is needed for most cases
 # - /path/to/llvm-config: enable specific LLVM when multiple llvm-dev is available.
-set(USE_LLVM OFF)
+set(USE_LLVM "/usr/bin/llvm-config --link-static")
+set(HIDE_PRIVATE_SYMBOLS ON)
 
 # Whether use MLIR to help analyze, requires USE_LLVM is enabled
 # Possible values: ON/OFF
@@ -307,7 +308,7 @@ set(USE_CLML_GRAPH_EXECUTOR OFF)
 set(USE_ANTLR OFF)
 
 # Whether use Relay debug mode
-set(USE_RELAY_DEBUG OFF)
+set(USE_RELAY_DEBUG ON)

Surprisingly, the build then produced only libtvm_runtime.so and no libtvm.so under build/.
#17562
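
For reference, the usual out-of-tree build with the config.cmake changes above looks like the following (a sketch based on the standard TVM source-build flow, not commands from this thread; libtvm.so comes from the compiler target, while a runtime-only build yields just libtvm_runtime.so):

```shell
cd /path/to/tvm
mkdir -p build && cp cmake/config.cmake build/
cd build
cmake ..             # the configure log should echo USE_CUDA and USE_LLVM as set above
make -j"$(nproc)"    # expected to produce both libtvm.so and libtvm_runtime.so
# If only libtvm_runtime.so appears, re-check the cmake output: an LLVM that
# cannot be found via llvm-config disables CPU codegen and the compiler library.
```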
