ERROR while running ComfyUI with ZLUDA 4 + cu12.4 + torch 2.5.1 #57
This repository is a continuation of vosen's original ZLUDA 3 project.
Got it, thanks for replying.
According to my investigation and the comments of vosen, the original author of ZLUDA, we may already have everything needed to run CUDA 12 applications. However, for unknown reasons, the CUDA runtime (the application side) behaves quite differently between torch+cu118 and torch+cu12x. It invokes …
Hi! When will torch 2.4.1 and above be supported? I installed HIP 6.2; it works, but only with torch 2.3.1. Many applications require a higher version. I tried different versions of ZLUDA, including nightly builds, but to no avail. For example, Triton is used to speed up generation, but it requires cu124.
Currently, torch cu12x is not supported. I described the reason in my previous comment.
Note that it is only available on gfx1100/01/02/03/50 because of a hipBLASLt limitation.
It doesn't work on gfx1010 (RX 5700 XT) with torch 2.4/2.5.
hipBLASLt only supports gfx90a, gfx94x, and gfx110x, as you can see here.
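As a rough sketch (not part of ZLUDA or hipBLASLt itself), the support gate described in this comment could be expressed as a simple prefix check against the architecture families named above; the list and the helper name are my own:

```python
# Sketch: check whether a gfx architecture string falls into one of the
# hipBLASLt-supported families mentioned in this thread (gfx90a, gfx94x, gfx110x).
SUPPORTED_PREFIXES = ("gfx90a", "gfx94", "gfx110")

def hipblaslt_supported(arch: str) -> bool:
    """Return True if the gfx architecture matches a supported family."""
    return arch.startswith(SUPPORTED_PREFIXES)

print(hipblaslt_supported("gfx1100"))  # RX 7900 family -> True
print(hipblaslt_supported("gfx1010"))  # RX 5700 XT -> False
```

This is why the RX 5700 XT (gfx1010) report above fails: its architecture is outside every supported family.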
RX 6000 and below need this environment variable under Windows, same as on Linux (line 229 in c4994b3):

DISABLE_ADDMM_CUDA_LT=1

I confirmed that A1111-ZLUDA, Forge-ZLUDA, and SD.Next can generate SDXL images using torch 2.4.1, 2.5.1, and 2.6.0. However, some extensions that use Triton do not work; neither does bitsandbytes.
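For example (a minimal sketch; the commented launch line is an assumption and will vary per WebUI), the variable can be exported before starting the app:

```shell
# Export the workaround before launching; applies to RX 6000 and older.
export DISABLE_ADDMM_CUDA_LT=1
echo "DISABLE_ADDMM_CUDA_LT=$DISABLE_ADDMM_CUDA_LT"
# zluda.exe python main.py   # hypothetical launch line; adjust for your WebUI
```

On Windows cmd the equivalent is `set DISABLE_ADDMM_CUDA_LT=1`; in PowerShell, `$env:DISABLE_ADDMM_CUDA_LT = "1"`.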
Thank you for sharing your discovery. I'll update the ZLUDA installer to set it.
Error while using ComfyUI with ZLUDA 4.
Hello, I have been using ComfyUI with ZLUDA for a while now. Thanks for the great work.
I am using a 5800X and a 6750 GRE 10 GB, running ComfyUI with cu118 and torch 2.5.1 (torch 2.3 is also OK); it has worked fine for the past 2 or 3 months.
Then I noticed the new ZLUDA 4 release, along with HIP 6.2.4, so here is what I did:
1. Installed the new HIP 6.2.4 from AMD (previously 6.1 2024Q3), modified ROCM_HOME and HIP_HOME to point to the HIP 6.2.4 install path, and added hip-6.2.4/bin to PATH.
2. Patched HIP with rocm.gfx1031.for.hip.sdk.6.2.4.littlewu.s.logic(1) from https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.2.4.
3. Created a new conda venv, installed torch 2.5.1 + cu12.4, and copied cublas64_12.dll, cusparse64_12.dll, and nvrtc64_120_0.dll into ....Lib\site-packages\torch\lib. Those files are from the ZLUDA-windows-rocm6-amd64.zip at https://github.com/lshqqytiger/ZLUDA/releases.
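For reference, step 3 might look roughly like this (a sketch only: the env name `comfy-zluda` and the Python version are my own assumptions; the index URL is PyTorch's official cu124 wheel index):

```shell
# Create an isolated env and install the cu124 build of torch 2.5.1.
conda create -n comfy-zluda python=3.10 -y
conda activate comfy-zluda
pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124
# Then overwrite the CUDA DLLs inside torch\lib with the ZLUDA ones, e.g.:
# copy cublas64_12.dll, cusparse64_12.dll, nvrtc64_120_0.dll
#   -> <venv>\Lib\site-packages\torch\lib
```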
That setup was fine with ZLUDA 3 + cu118 + torch 2.5.1. I have the following set:
```python
torch.backends.cudnn.enabled = False
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_math_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(False)
```
Then I run `zluda.exe python main.py`.
Here's the error output from ComfyUI.
How can I solve this? Or what should I do to run ComfyUI with ZLUDA 4 and cu12.4? Thanks a lot.
And by the way, I noticed this note at https://github.com/lshqqytiger/ZLUDA/releases:

> New environment variable `ZLUDA_NVRTC_LIB`: our new ZLUDA runtime compiler depends on the original NVIDIA runtime compiler library, so you should specify the path to it unless it is named nvrtc_cuda.dll.
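To illustrate how that note could apply here (the path below is purely hypothetical; use wherever your copied nvrtc64_120_0.dll actually lives):

```shell
# Point ZLUDA's runtime compiler at the NVIDIA NVRTC DLL copied earlier.
# The path is a hypothetical example, not a known-good location.
export ZLUDA_NVRTC_LIB="$PWD/nvrtc64_120_0.dll"
echo "$ZLUDA_NVRTC_LIB"
```

Since the DLL copied in the setup above is named nvrtc64_120_0.dll rather than nvrtc_cuda.dll, this variable would presumably need to be set.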
Thanks again.