From 80a76df479ad117dffae15ecc83adcef956aa8f4 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 15:30:08 +0200 Subject: [PATCH 01/28] Revise AMD installation guide --- docs/AMD-INSTALLATION.md | 191 +++++++++++++++++++++++++++------------ 1 file changed, 134 insertions(+), 57 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 4f05589eb..b12277017 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -1,44 +1,48 @@ -# Installation Guide +# AMD Installation Guide for Windows (TheRock) -This guide covers installation for specific RDNA3 and RDNA3.5 AMD CPUs (APUs) and GPUs -running under Windows. +This guide covers installation for AMD GPUs and APUs running under Windows using TheRock's official PyTorch wheels. -tl;dr: Radeon RX 7900 GOOD, RX 9700 BAD, RX 6800 BAD. (I know, life isn't fair). +## Supported GPUs -Currently supported (but not necessary tested): +Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md), the following GPUs are supported on Windows: -**gfx110x**: +### **gfx110X-all** (RDNA 3): +* AMD RX 7900 XTX (gfx1100) +* AMD RX 7800 XT (gfx1101) +* AMD RX 7700 XT (gfx1101) +* AMD RX 7700S / Framework Laptop 16 (gfx1102) +* AMD Radeon 780M Laptop iGPU (gfx1103) -* Radeon RX 7600 -* Radeon RX 7700 XT -* Radeon RX 7800 XT -* Radeon RX 7900 GRE -* Radeon RX 7900 XT -* Radeon RX 7900 XTX +### **gfx1150** (RDNA 3.5 APU) +* AMD Radeon 890M (Ryzen AI 9 HX 370 - Strix Point) -**gfx1151**: +### **gfx1151** (RDNA 3.5 APU): +* AMD Strix Halo APUs -* Ryzen 7000 series APUs (Phoenix) -* Ryzen Z1 (e.g., handheld devices like the ROG Ally) +### **gfx120X-all** (RDNA 4): +* AMD RX 9060 XT (gfx1200) +* AMD RX 9060 (gfx1200) +* AMD RX 9070 XT (gfx1201) +* AMD RX 9070 (gfx1201) -**gfx1201**: - -* Ryzen 8000 series APUs (Strix Point) -* A [frame.work](https://frame.work/au/en/desktop) desktop/laptop +### Also supported: +* **gfx103X-dgpu**: (RDNA 2) +**Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). ## Requirements -- Python 3.11 (3.12 might work, 3.10 definately will not!) +- Python 3.11 (recommended for Wan2GP - TheRock currently supports Python 3.11, 3.12, and 3.13). +- Windows 10/11 ## Installation Environment -This installation uses PyTorch 2.7.0 because that's what currently available in -terms of pre-compiled wheels. +This installation uses PyTorch wheels built by TheRock. ### Installing Python -Download Python 3.11 from [python.org/downloads/windows](https://www.python.org/downloads/windows/). Hit Ctrl+F and search for "3.11". Dont use this direct link: [https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe) -- that was an IQ test. +Download Python 3.11 from [python.org/downloads/windows](https://www.python.org/downloads/windows/). Press Ctrl+F and search for "3.11.". +Alternatively, you can use this direct link: [Python 3.11.9 (64-bit)](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe). After installing, make sure `python --version` works in your terminal and returns 3.11.x @@ -57,16 +61,14 @@ C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\Scripts\ C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\ ``` -If that doesnt work, scream into a bucket. 
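If you want to double-check which interpreter Windows now resolves after editing PATH, the standard `where` command plus a version check is enough; run these in a freshly opened CMD window so the updated PATH is picked up:

```cmd
:: Show every python.exe on PATH, in the order Windows resolves them
where python

:: Confirm the version that plain "python" launches
python --version
```

If a different installation is listed first, reorder the PATH entries accordingly.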
- ### Installing Git -Get Git from [git-scm.com/downloads/win](https://git-scm.com/downloads/win). Default install is fine. +Download Git from [git-scm.com/downloads/windows](https://git-scm.com/downloads/windows) and install it. The default installation options are fine. -## Install (Windows, using `venv`) +## Install (Windows, using a Python `venv`) -### Step 1: Download and Set Up Environment +### Step 1: Download and set up Wan2GP Environment ```cmd :: Navigate to your desired install directory @@ -76,71 +78,146 @@ cd \your-path-to-wan2gp git clone https://github.com/deepbeepmeep/Wan2GP.git cd Wan2GP -:: Create virtual environment using Python 3.10.9 +:: Create virtual environment python -m venv wan2gp-env :: Activate the virtual environment wan2gp-env\Scripts\activate ``` -### Step 2: Install PyTorch +### Step 2: Install ROCm/PyTorch by TheRock + +**IMPORTANT:** Choose the correct index URL for your GPU family! + +#### For gfx110X-all (RX 7900 XTX, RX 7800 XT, etc.): + +```cmd +pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ +``` + +#### For gfx120X-all (RX 9060, RX 9070, etc.): + +```cmd +pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ +``` + +#### For gfx1151 (Strix Halo iGPU): -The pre-compiled wheels you need are hosted at [scottt's rocm-TheRock releases](https://github.com/scottt/rocm-TheRock/releases). Find the heading that says: +```cmd +pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ +``` -**Pytorch wheels for gfx110x, gfx1151, and gfx1201** +#### For gfx1150 (Radeon 890M - Strix Point): -Don't click this link: [https://github.com/scottt/rocm-TheRock/releases/tag/v6.5.0rc-pytorch-gfx110x](https://github.com/scottt/rocm-TheRock/releases/tag/v6.5.0rc-pytorch-gfx110x). It's just here to check if you're skimming. +```cmd +pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2-staging/gfx1150/ +``` -Copy the links of the closest binaries to the ones in the example below (adjust if you're not running Python 3.11), then hit enter. +#### For gfx103X-dgpu (RDNA 2): ```cmd -pip install ^ - https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torch-2.7.0a0+rocm_git3f903c3-cp311-cp311-win_amd64.whl ^ - https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchaudio-2.7.0a0+52638ef-cp311-cp311-win_amd64.whl ^ - https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchvision-0.22.0+9eb57cd-cp311-cp311-win_amd64.whl +pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2-staging/gfx103X-dgpu/ ``` -### Step 3: Install Dependencies +This will automatically install the latest PyTorch, torchaudio, and torchvision wheels with ROCm support. 
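As an optional sanity check after the install finishes, you can list what pip actually pulled in. Exact package names and version strings differ between nightly builds, so treat the commands below only as an illustration:

```cmd
:: List the torch- and ROCm-related packages that were just installed
pip list | findstr /i "torch rocm"

:: Show the exact version and install location of the torch wheel
pip show torch
```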
+ +### Step 3: Install Wan2GP Dependencies ```cmd :: Install core dependencies pip install -r requirements.txt ``` +### Step 4: Verify Installation + +```cmd +python -c "import torch; print('PyTorch:', torch.__version__); print('ROCm available:', torch.cuda.is_available()); print('Device:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'No GPU')" +``` + +Expected output example: +``` +PyTorch: 2.11.0+rocm7.12.0 +ROCm available: True +Device: AMD Radeon RX 9070 XT +``` + ## Attention Modes -WanGP supports several attention implementations, only one of which will work for you: +WanGP supports multiple attention implementations via [triton-windows](https://github.com/woct0rdho/triton-windows/). + +First, install `triton-windows` in your virtual environment. +If you have an older version of Triton installed, uninstall it first: + +```cmd +pip uninstall triton +pip install triton-windows +``` -- **SDPA** (default): Available by default with PyTorch. This uses the built-in aotriton accel library, so is actually pretty fast. +### Supported attention implementations -## Performance Profiles +- **Sageattention V1** (Requires the `.post26` wheel or newer): -Choose a profile based on your hardware: +```cmd +pip install "sageattention <2" +``` + +- **FlashAttention-2** (Only the Triton backend is supported): +```cmd +git clone https://github.com/Dao-AILab/flash-attention.git +cd flash-attention +pip install ninja +# Install FlashAttention-2 with the Triton backend enabled +set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && python setup.py install +``` -- **Profile 3 (LowRAM_HighVRAM)**: Loads entire model in VRAM, requires 24GB VRAM for 8-bit quantized 14B model -- **Profile 4 (LowRAM_LowVRAM)**: Default, loads model parts as needed, slower but lower VRAM requirement +- **SDPA** (default): Available by default in PyTorch on post-RDNA3 GPUs. ## Running Wan2GP -In future, you will have to do this: +For future sessions, activate the environment: ```cmd -cd \path-to\wan2gp -wan2gp\Scripts\activate.bat +cd \path-to\Wan2GP +wan2gp-env\Scripts\activate python wgp.py ``` -For now, you should just be able to type `python wgp.py` (because you're already in the virtual environment) - ## Troubleshooting -- If you use a HIGH VRAM mode, don't be a fool. Make sure you use VAE Tiled Decoding. +### GPU Not Detected + +If `torch.cuda.is_available()` returns `False`: + +1. **Verify your GPU is supported** - Check the [Supported GPUs](#supported-gpus) list above +2. **Check AMD drivers** - Ensure you have the latest AMD Adrenalin drivers installed +3. **Verify correct index URL** - Make sure you used the right GPU family index URL + +### Installation Errors + +**"Could not find a version that satisfies the requirement":** +- Double-check that you're using the correct `--index-url` for your GPU family. You can also try adding the `--pre` flag or replacing `/v2/` in the URL with `/v2/staging/` +- Ensure you're using Python 3.11 +- Try adding `--pre` flag if not already present + +**"No matching distribution found":** +- Your GPU architecture may not be supported +- Check that you've activated your virtual environment + +### Performance Issues + +- **Monitor VRAM usage** - Reduce batch size if running out of memory +- **Close GPU-intensive apps** - Discord hardware acceleration, browsers, etc. + +### Known Issues + +Windows packages are new and may be unstable! 
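If you hit the detection or memory problems described above, a slightly more detailed probe than the Step 4 one-liner can show how much VRAM PyTorch actually sees. This is only a convenience check built on the same `torch.cuda` calls, and it assumes the earlier verification already reported a device:

```cmd
:: Print the detected GPU name and its total VRAM in GB
python -c "import torch; p = torch.cuda.get_device_properties(0); print(p.name, '-', round(p.total_memory / 2**30, 1), 'GB VRAM')"
```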
-### Memory Issues +Known issues are tracked at: https://github.com/ROCm/TheRock/issues/808 -- Use lower resolution or shorter videos -- Enable quantization (default) -- Use Profile 4 for lower VRAM usage -- Consider using 1.3B models instead of 14B models +## Additional Resources -For more troubleshooting, see [TROUBLESHOOTING.md](TROUBLESHOOTING.md) +- [TheRock GitHub Repository](https://github.com/ROCm/TheRock/) +- [TheRock Releases Documentation](https://github.com/ROCm/TheRock/blob/main/RELEASES.md) +- [Supported GPU Architectures](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md) +- [TheRock Roadmap](https://github.com/ROCm/TheRock/blob/main/ROADMAP.md) +- [ROCm Documentation](https://rocm.docs.amd.com/) From 006f2f210e69c826a155aadea42b350b9fd02bb2 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 15:43:59 +0200 Subject: [PATCH 02/28] Modify documentation for SDPA --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index b12277017..510521b8e 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -170,7 +170,7 @@ pip install ninja set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && python setup.py install ``` -- **SDPA** (default): Available by default in PyTorch on post-RDNA3 GPUs. +- **SDPA Flash**: Available by default in PyTorch on post-RDNA2 GPUs via AOTriton. ## Running Wan2GP From 6b952d1b274b44e6e8632e5df8a8538a2ec579f6 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 15:46:14 +0200 Subject: [PATCH 03/28] Fix example paths and update Git installation link --- docs/AMD-INSTALLATION.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 510521b8e..16ff51887 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -56,14 +56,14 @@ If not, you probably need to fix your PATH. Go to: Example correct entries: ```cmd -C:\Users\YOURNAME\AppData\Local\Programs\Python\Launcher\ -C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\Scripts\ -C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\ +C:\Users\\AppData\Local\Programs\Python\Launcher\ +C:\Users\\AppData\Local\Programs\Python\Python311\Scripts\ +C:\Users\\AppData\Local\Programs\Python\Python311\ ``` ### Installing Git -Download Git from [git-scm.com/downloads/windows](https://git-scm.com/downloads/windows) and install it. The default installation options are fine. +Download Git from [git-scm.com/downloads/windows](https://git-scm.com/install/windows) and install it. The default installation options are fine. ## Install (Windows, using a Python `venv`) From 38176754230a645336a8d7759301867518fb4222 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 15:50:45 +0200 Subject: [PATCH 04/28] Clarify CMD / PowerShell --- docs/AMD-INSTALLATION.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 16ff51887..486de7fbf 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -67,6 +67,9 @@ Download Git from [git-scm.com/downloads/windows](https://git-scm.com/install/wi ## Install (Windows, using a Python `venv`) +> **Note:** This guide uses **Windows CMD**. +> If you are using PowerShell, some commands (like comments and activating the virtual environment) may differ. 
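If your terminal happens to open in PowerShell (the default in Windows Terminal), one simple option is to drop into CMD first so the commands in this guide work verbatim:

```cmd
:: Start a CMD session inside the current window; type "exit" to return to PowerShell
cmd
```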
+ ### Step 1: Download and set up Wan2GP Environment From 8b570ad1d9d8f8c81cb04a581c3d71c52b4b60a9 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 15:59:28 +0200 Subject: [PATCH 05/28] Update notes for CMD and Python usage --- docs/AMD-INSTALLATION.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 486de7fbf..724bab148 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -67,7 +67,7 @@ Download Git from [git-scm.com/downloads/windows](https://git-scm.com/install/wi ## Install (Windows, using a Python `venv`) -> **Note:** This guide uses **Windows CMD**. +> **Note:** The following commands are intended for use in the Windows Command Prompt (CMD). > If you are using PowerShell, some commands (like comments and activating the virtual environment) may differ. @@ -88,6 +88,8 @@ python -m venv wan2gp-env wan2gp-env\Scripts\activate ``` +> **Note:** If you have multiple versions of Python installed, use `py -3.11` instead of `python` to ensure the correct version is used. + ### Step 2: Install ROCm/PyTorch by TheRock **IMPORTANT:** Choose the correct index URL for your GPU family! From c9835f234a4477a8a7f2d662ed353092f40f95d1 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:07:11 +0200 Subject: [PATCH 06/28] Revise Python installation instructions --- docs/AMD-INSTALLATION.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 724bab148..ec5ddba6c 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -41,10 +41,11 @@ This installation uses PyTorch wheels built by TheRock. ### Installing Python -Download Python 3.11 from [python.org/downloads/windows](https://www.python.org/downloads/windows/). Press Ctrl+F and search for "3.11.". +Download Python 3.11 from [python.org/downloads/windows](https://www.python.org/downloads/windows/). Press Ctrl+F and search for "3.11." to find the newest version available for installation. + Alternatively, you can use this direct link: [Python 3.11.9 (64-bit)](https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe). -After installing, make sure `python --version` works in your terminal and returns 3.11.x +After installing, make sure `python --version` works in your terminal and returns `3.11.9` If not, you probably need to fix your PATH. Go to: @@ -88,7 +89,7 @@ python -m venv wan2gp-env wan2gp-env\Scripts\activate ``` -> **Note:** If you have multiple versions of Python installed, use `py -3.11` instead of `python` to ensure the correct version is used. +> **Note:** If you have multiple versions of Python installed, use `py -3.11 -m venv wan2gp-env` instead of `python -m venv wan2gp-env` to ensure the correct version is used. 
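If you are unsure which interpreters the `py` launcher can see, or whether the new environment really uses 3.11, these optional checks may help; they assume the venv was created as `wan2gp-env` in the current folder:

```cmd
:: List every Python installation known to the py launcher, with install paths
py -0p

:: Ask the venv's own interpreter for its version
wan2gp-env\Scripts\python.exe --version
```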
### Step 2: Install ROCm/PyTorch by TheRock @@ -160,7 +161,7 @@ pip install triton-windows ### Supported attention implementations -- **Sageattention V1** (Requires the `.post26` wheel or newer): +- **SageAttention V1** (Requires the `.post26` wheel or newer): ```cmd pip install "sageattention <2" From d7a66d912ac950b78e2e3156e284aca05bf04261 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:10:07 +0200 Subject: [PATCH 07/28] Modify installation commands for FlashAttention-2 --- docs/AMD-INSTALLATION.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index ec5ddba6c..e6eb357d9 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -172,7 +172,6 @@ pip install "sageattention <2" git clone https://github.com/Dao-AILab/flash-attention.git cd flash-attention pip install ninja -# Install FlashAttention-2 with the Triton backend enabled set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && python setup.py install ``` From ddd6e0b62a11420faf0fc1c5161a1017a2454e6f Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:22:08 +0200 Subject: [PATCH 08/28] Clarify steps to add Python to PATH --- docs/AMD-INSTALLATION.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index e6eb357d9..12bef1b55 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -47,14 +47,11 @@ Alternatively, you can use this direct link: [Python 3.11.9 (64-bit)](https://ww After installing, make sure `python --version` works in your terminal and returns `3.11.9` -If not, you probably need to fix your PATH. Go to: +If it doesn’t, you need to add Python to your PATH: -* Windows + Pause/Break -* Advanced System Settings -* Environment Variables -* Edit your `Path` under User Variables - -Example correct entries: +* Press the Windows key, type Environment Variables, and select “Edit the system environment variables”. +* In the System Properties window, click Environment Variables…. +* Under User variables, find Path, then click Edit → New and add the following entries (replace with your Windows username): ```cmd C:\Users\\AppData\Local\Programs\Python\Launcher\ @@ -62,6 +59,8 @@ C:\Users\\AppData\Local\Programs\Python\Python311\Scripts\ C:\Users\\AppData\Local\Programs\Python\Python311\ ``` +> **Note:** If Python still doesn't show the correct version after updating PATH, try signing out and signing back in to Windows to apply the changes. + ### Installing Git Download Git from [git-scm.com/downloads/windows](https://git-scm.com/install/windows) and install it. The default installation options are fine. From 5c75788d07a6321977699a8021513b46f25705bd Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:22:59 +0200 Subject: [PATCH 09/28] Fix formatting of note about GPU support --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 12bef1b55..6fdebafef 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -28,7 +28,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo ### Also supported: * **gfx103X-dgpu**: (RDNA 2) -**Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). 
+> **Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). ## Requirements From c42507c4d3865da557ab3668dc8b8639f02828b6 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:24:41 +0200 Subject: [PATCH 10/28] Fix formatting for RDNA2 support --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 6fdebafef..0776c5ed5 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -26,7 +26,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo * AMD RX 9070 (gfx1201) ### Also supported: -* **gfx103X-dgpu**: (RDNA 2) +**gfx103X-dgpu**: (RDNA 2) > **Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). From 7e9daaa75d7a79d14a28ef3b5472c5af184b133d Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:30:08 +0200 Subject: [PATCH 11/28] Reorganize `Supported GPUs` section to match install order --- docs/AMD-INSTALLATION.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 0776c5ed5..82273356c 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -13,18 +13,18 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo * AMD RX 7700S / Framework Laptop 16 (gfx1102) * AMD Radeon 780M Laptop iGPU (gfx1103) -### **gfx1150** (RDNA 3.5 APU) -* AMD Radeon 890M (Ryzen AI 9 HX 370 - Strix Point) - -### **gfx1151** (RDNA 3.5 APU): -* AMD Strix Halo APUs - ### **gfx120X-all** (RDNA 4): * AMD RX 9060 XT (gfx1200) * AMD RX 9060 (gfx1200) * AMD RX 9070 XT (gfx1201) * AMD RX 9070 (gfx1201) +### **gfx1151** (RDNA 3.5 APU): +* AMD Strix Halo APUs + +### **gfx1150** (RDNA 3.5 APU) +* AMD Radeon 890M (Ryzen AI 9 HX 370 - Strix Point) + ### Also supported: **gfx103X-dgpu**: (RDNA 2) @@ -49,9 +49,9 @@ After installing, make sure `python --version` works in your terminal and return If it doesn’t, you need to add Python to your PATH: -* Press the Windows key, type Environment Variables, and select “Edit the system environment variables”. -* In the System Properties window, click Environment Variables…. -* Under User variables, find Path, then click Edit → New and add the following entries (replace with your Windows username): +* Press the `Windows` key, type `Environment Variables`, and select `Edit the system environment variables`. +* In the `System Properties` window, click `Environment Variables…`. 
+* Under `User variables`, find `Path`, then click `Edit` → `New` and add the following entries (replace `` with your Windows username): ```cmd C:\Users\\AppData\Local\Programs\Python\Launcher\ From 257b2834f0577c0cfeb6e481ca59d1c6d05c627c Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:35:44 +0200 Subject: [PATCH 12/28] Update section title for installation instructions --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 82273356c..b1b04056b 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -66,7 +66,7 @@ C:\Users\\AppData\Local\Programs\Python\Python311\ Download Git from [git-scm.com/downloads/windows](https://git-scm.com/install/windows) and install it. The default installation options are fine. -## Install (Windows, using a Python `venv`) +## Installation Steps (Windows, using a Python `venv`) > **Note:** The following commands are intended for use in the Windows Command Prompt (CMD). > If you are using PowerShell, some commands (like comments and activating the virtual environment) may differ. From 4b7453f439be44a6888fb6629483288ed6b4cfb2 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:40:37 +0200 Subject: [PATCH 13/28] Update installation error messages and performance tips --- docs/AMD-INSTALLATION.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index b1b04056b..3926487c4 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -200,7 +200,7 @@ If `torch.cuda.is_available()` returns `False`: **"Could not find a version that satisfies the requirement":** - Double-check that you're using the correct `--index-url` for your GPU family. You can also try adding the `--pre` flag or replacing `/v2/` in the URL with `/v2/staging/` -- Ensure you're using Python 3.11 +- Ensure you're using Python 3.11, and not 3.10 - Try adding `--pre` flag if not already present **"No matching distribution found":** @@ -209,12 +209,12 @@ If `torch.cuda.is_available()` returns `False`: ### Performance Issues -- **Monitor VRAM usage** - Reduce batch size if running out of memory -- **Close GPU-intensive apps** - Discord hardware acceleration, browsers, etc. +- **Monitor VRAM usage** - Reduce batch size or resolution if running out of memory +- **Close GPU-intensive apps** - Apps with hardware acceleration enabled (browsers, Discord etc.). ### Known Issues -Windows packages are new and may be unstable! +Windows packages are new and may be unstable. Known issues are tracked at: https://github.com/ROCm/TheRock/issues/808 From c231c627fde0ca9984922c9c08bdfdd2d27f9501 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:46:24 +0200 Subject: [PATCH 14/28] Clarify notes --- docs/AMD-INSTALLATION.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 3926487c4..9dd2965ec 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -28,6 +28,8 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo ### Also supported: **gfx103X-dgpu**: (RDNA 2) +
+ > **Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). ## Requirements @@ -178,7 +180,7 @@ set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && python setup.py install ## Running Wan2GP -For future sessions, activate the environment: +For future sessions, activate the environment every time if it isn't already activated, then run `python wgp.py`: ```cmd cd \path-to\Wan2GP From c699c9eaa69abaea4e51cc0b52a9ec17cfebd11b Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:47:55 +0200 Subject: [PATCH 15/28] Fix formatting --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 9dd2965ec..ca5d38288 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -26,7 +26,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo * AMD Radeon 890M (Ryzen AI 9 HX 370 - Strix Point) ### Also supported: -**gfx103X-dgpu**: (RDNA 2) +### **gfx103X-dgpu**: (RDNA 2)
From 3e87d2636ae1a2307e169defe727ea75bbe607db Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 16:51:50 +0200 Subject: [PATCH 16/28] Update links in AMD installation documentation --- docs/AMD-INSTALLATION.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index ca5d38288..814cfda9e 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -223,7 +223,7 @@ Known issues are tracked at: https://github.com/ROCm/TheRock/issues/808 ## Additional Resources - [TheRock GitHub Repository](https://github.com/ROCm/TheRock/) -- [TheRock Releases Documentation](https://github.com/ROCm/TheRock/blob/main/RELEASES.md) +- [Releases Documentation](https://github.com/ROCm/TheRock/blob/main/RELEASES.md) - [Supported GPU Architectures](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md) -- [TheRock Roadmap](https://github.com/ROCm/TheRock/blob/main/ROADMAP.md) +- [Roadmap](https://github.com/ROCm/TheRock/blob/main/ROADMAP.md) - [ROCm Documentation](https://rocm.docs.amd.com/) From 9b7f01571c16e64afbc8304b4df2bda4fe96a0be Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 17:06:36 +0200 Subject: [PATCH 17/28] Enhance documentation with setup tips --- docs/AMD-INSTALLATION.md | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 814cfda9e..c815a87c8 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -188,6 +188,32 @@ wan2gp-env\Scripts\activate python wgp.py ``` +It is advised to set the following environment variables at the start of every new session (you can create a `.bat` file that activates your venv, sets these, then launches `wgp.py`): + +```cmd +set ROCM_HOME=%ROCM_ROOT% +set PATH=%ROCM_ROOT%\lib\llvm\bin;%ROCM_BIN%;%PATH% +set CC=clang-cl +set CXX=clang-cl +set DISTUTILS_USE_SDK=1 +set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE +set TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 +``` + +MIOpen (AMD's cuDNN equivalent) is not yet stable; it frequently causes OOMs, crashes the display driver, and significantly increases generation times. Currently, it is recommended to use fast mode by setting `set MIOPEN_FIND_MODE=FAST`, or to disable it entirely by editing `wgp.py` and adding the following line below `import torch` (line 51): + +```cmd +torch.backends.cudnn.enabled = False +``` + +To verify that it is disabled, or to enable verbose logging, you can set: + +```cmd +set MIOPEN_ENABLE_LOGGING=1 +set MIOPEN_ENABLE_LOGGING_CMD=1 +set MIOPEN_LOG_LEVEL=5 +``` + ## Troubleshooting ### GPU Not Detected From 0e5fcdcac19d94e78ef3f3b277fa7956c8fe1828 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 17:11:41 +0200 Subject: [PATCH 18/28] Add advice on setting environment variables for AMD Added advice on setting environment variables for AMD. 
--- docs/AMD-INSTALLATION.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index c815a87c8..2a233f9a2 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -185,6 +185,7 @@ For future sessions, activate the environment every time if it isn't already act ```cmd cd \path-to\Wan2GP wan2gp-env\Scripts\activate +:: Add the AMD-specific environment variables mentioned below here python wgp.py ``` From 366d0c20635a49427d7988e357dd0fe5a18ab71f Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 17:22:46 +0200 Subject: [PATCH 19/28] Modify installation commands --- docs/AMD-INSTALLATION.md | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 2a233f9a2..47dbd6a80 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -99,31 +99,31 @@ wan2gp-env\Scripts\activate #### For gfx110X-all (RX 7900 XTX, RX 7800 XT, etc.): ```cmd -pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ +pip install --pre torch torchaudio torchvision rocm[devel] --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ ``` #### For gfx120X-all (RX 9060, RX 9070, etc.): ```cmd -pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ +pip install --pre torch torchaudio torchvision rocm[devel] --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ ``` #### For gfx1151 (Strix Halo iGPU): ```cmd -pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ +pip install --pre torch torchaudio torchvision rocm[devel] --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ ``` #### For gfx1150 (Radeon 890M - Strix Point): ```cmd -pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2-staging/gfx1150/ +pip install --pre torch torchaudio torchvision rocm[devel] --index-url https://rocm.nightlies.amd.com/v2-staging/gfx1150/ ``` #### For gfx103X-dgpu (RDNA 2): ```cmd -pip install --pre torch torchaudio torchvision --index-url https://rocm.nightlies.amd.com/v2-staging/gfx103X-dgpu/ +pip install --pre torch torchaudio torchvision rocm[devel] --index-url https://rocm.nightlies.amd.com/v2-staging/gfx103X-dgpu/ ``` This will automatically install the latest PyTorch, torchaudio, and torchvision wheels with ROCm support. @@ -152,12 +152,16 @@ Device: AMD Radeon RX 9070 XT WanGP supports multiple attention implementations via [triton-windows](https://github.com/woct0rdho/triton-windows/). -First, install `triton-windows` in your virtual environment. -If you have an older version of Triton installed, uninstall it first: +First, install `triton-windows` in your virtual environment. +If you have an older version of Triton installed, uninstall it first. +ROCm SDK needs to be initialized. +Visual Studio environment should also be activated. 
```cmd pip uninstall triton pip install triton-windows +rocm-sdk init +"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat" >nul 2>&1 ``` ### Supported attention implementations From b356855f7d68481d2558379069a4d841ef9e509b Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 17:27:59 +0200 Subject: [PATCH 20/28] Add `packaging` module installation to `FlashAttention-2` --- docs/AMD-INSTALLATION.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 47dbd6a80..5d21b58d5 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -177,6 +177,7 @@ pip install "sageattention <2" git clone https://github.com/Dao-AILab/flash-attention.git cd flash-attention pip install ninja +pip install packaging set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && python setup.py install ``` From 9641ade88de1b64e17ef029d65366d77c249b706 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 17:36:54 +0200 Subject: [PATCH 21/28] Revise MIOpen stability notes and instructions --- docs/AMD-INSTALLATION.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 5d21b58d5..acfd6a98c 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -206,7 +206,13 @@ set FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE set TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 ``` -MIOpen (AMD's cuDNN equivalent) is not yet stable; it frequently causes OOMs, crashes the display driver, and significantly increases generation times. Currently, it is recommended to use fast mode by setting `set MIOPEN_FIND_MODE=FAST`, or to disable it entirely by editing `wgp.py` and adding the following line below `import torch` (line 51): +MIOpen (AMD’s equivalent of NVIDIA’s cuDNN) is not yet fully stable on several architectures; it can cause out-of-memory errors (OOMs), crash the display driver, or significantly increase generation times. Currently, it is recommended to either use fast mode by setting: + +```cmd +set MIOPEN_FIND_MODE=FAST +``` + +or to disable MIOpen entirely by editing `wgp.py` and adding the following line below `import torch` (around line 51): ```cmd torch.backends.cudnn.enabled = False From b7981b17dbbd43e49395b12f32d7e393d0696c0a Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 18:51:18 +0200 Subject: [PATCH 22/28] Fix formatting --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index acfd6a98c..6b39346d4 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -22,7 +22,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo ### **gfx1151** (RDNA 3.5 APU): * AMD Strix Halo APUs -### **gfx1150** (RDNA 3.5 APU) +### **gfx1150** (RDNA 3.5 APU): * AMD Radeon 890M (Ryzen AI 9 HX 370 - Strix Point) ### Also supported: From 43332efd04282c51c3e974f89b85b619a9f32e29 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 18:52:15 +0200 Subject: [PATCH 23/28] Update AMD GPU support details in README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index d3854f79c..0e8145fd2 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ WanGP supports the Wan (and derived models) but also Hunyuan Video, Flux, Qwen, Z-Image, LongCat, Kandinsky, LTXV, LTX-2, Qwen3 TTS, Chatterbox, HearMula, ... 
with: - Low VRAM requirements (as low as 6 GB of VRAM is sufficient for certain models) - Support for old Nvidia GPUs (RTX 10XX, 20xx, ...) -- Support for AMD GPUs Radeon RX 76XX, 77XX, 78XX & 79XX, instructions in the Installation Section Below. +- Support for AMD GPUs (RDNA 4, 3, 3.5, and 2) instructions in the Installation Section Below. - Very Fast on the latest GPUs - Easy to use Full Web based interface - Support for many checkpoint Quantized formats: int8, fp8, gguf, NV FP4, Nunchaku @@ -303,7 +303,7 @@ For detailed installation instructions for different GPU generations: ### AMD For detailed installation instructions for different GPU generations: -- **[Installation Guide](docs/AMD-INSTALLATION.md)** - Complete setup instructions for Radeon RX 76XX, 77XX, 78XX & 79XX +- **[Installation Guide](docs/AMD-INSTALLATION.md)** - Complete setup instructions for RDNA 4, 3, 3.5, and 2 ## 🎯 Usage From 9bf280dc5f3c7dc59dcbb4104ec53f286f5730a9 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 18:54:40 +0200 Subject: [PATCH 24/28] Fix formatting --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index b55a45915..886441787 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ WanGP supports the Wan (and derived models) but also Hunyuan Video, Flux, Qwen, Z-Image, LongCat, Kandinsky, LTXV, LTX-2, Qwen3 TTS, Chatterbox, HearMula, ... with: - Low VRAM requirements (as low as 6 GB of VRAM is sufficient for certain models) - Support for old Nvidia GPUs (RTX 10XX, 20xx, ...) -- Support for AMD GPUs (RDNA 4, 3, 3.5, and 2) instructions in the Installation Section Below. +- Support for AMD GPUs (RDNA 4, 3, 3.5, and 2), instructions in the Installation Section Below. - Very Fast on the latest GPUs - Easy to use Full Web based interface - Support for many checkpoint Quantized formats: int8, fp8, gguf, NV FP4, Nunchaku From 9534c0999d6d07337eeab724938ef2f4c71a0604 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Wed, 4 Feb 2026 19:03:15 +0200 Subject: [PATCH 25/28] Add troubleshooting link for Wan2GP --- docs/AMD-INSTALLATION.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 6b39346d4..5114026c3 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -265,3 +265,5 @@ Known issues are tracked at: https://github.com/ROCm/TheRock/issues/808 - [Supported GPU Architectures](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md) - [Roadmap](https://github.com/ROCm/TheRock/blob/main/ROADMAP.md) - [ROCm Documentation](https://rocm.docs.amd.com/) + +For additional troubleshooting guidance for Wan2GP, see [TROUBLESHOOTING.md](https://github.com/deepbeepmeep/Wan2GP/blob/main/docs/TROUBLESHOOTING.md). From ae4a335543c4de715f9d6ea6d0d8251dfc395843 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Mon, 9 Feb 2026 13:28:22 +0200 Subject: [PATCH 26/28] Update --- docs/AMD-INSTALLATION.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 5114026c3..a1f7f3af6 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -30,7 +30,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo
-> **Note:** If your GPU is not listed above, it is not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). +> **Note:** If your GPU is not listed above, it may not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). ## Requirements @@ -166,7 +166,7 @@ rocm-sdk init ### Supported attention implementations -- **SageAttention V1** (Requires the `.post26` wheel or newer): +- **SageAttention V1** (Requires the `.post26` wheel or newer to fix Triton compilation issues without needing unofficial patches. Download it from [this](https://github.com/Comfy-Org/wheels/actions/runs/21343435018) URL) ```cmd pip install "sageattention <2" From 7b955884f260b87e314122d6c651de3322d5e618 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Mon, 9 Feb 2026 13:30:49 +0200 Subject: [PATCH 27/28] Update --- docs/AMD-INSTALLATION.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index a1f7f3af6..9fc5802d2 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -30,7 +30,7 @@ Based on [TheRock's official support matrix](https://github.com/ROCm/TheRock/blo
-> **Note:** If your GPU is not listed above, it may not supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). +> **Note:** If your GPU is not listed above, it may not be supported by TheRock on Windows. Support status and future updates can be found in the [official documentation](https://github.com/ROCm/TheRock/blob/main/SUPPORTED_GPUS.md). ## Requirements From 86aec40832c99db1ebd3dc9017d22c8740c245c7 Mon Sep 17 00:00:00 2001 From: DELUXA Date: Mon, 9 Feb 2026 13:35:57 +0200 Subject: [PATCH 28/28] Update --- docs/AMD-INSTALLATION.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/AMD-INSTALLATION.md b/docs/AMD-INSTALLATION.md index 9fc5802d2..8663cb0db 100644 --- a/docs/AMD-INSTALLATION.md +++ b/docs/AMD-INSTALLATION.md @@ -212,10 +212,15 @@ MIOpen (AMD’s equivalent of NVIDIA’s cuDNN) is not yet fully stable on sever set MIOPEN_FIND_MODE=FAST ``` -or to disable MIOpen entirely by editing `wgp.py` and adding the following line below `import torch` (around line 51): +Alternatively, you can disable MIOpen entirely by editing `wgp.py` and adding the following line below `import torch` (around line 51): ```cmd -torch.backends.cudnn.enabled = False +... +:: /Lines already in the file/ +:: import torch +torch.backends.cudnn.enabled = False # <-- Add this here +:: import gc +... ``` To verify that it is disabled, or to enable verbose logging, you can set: