This guide covers installation for different GPU generations and operating systems.
- Git (Git Download)
- Build Tools for Visual Studio 2022 with C++ extensions (VS2022 Download)
- CUDA Toolkit 12.8 or higher (CUDA Toolkit Download)
- NVIDIA drivers, up to date (NVIDIA Drivers Download)
- FFmpeg downloaded, unzipped & the bin folder on PATH (FFmpeg Download)
- Python 3.10.9 (Python Download)
- Miniconda (Miniconda Download) or Python venv
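Before proceeding, it's worth confirming that the prerequisites are on PATH (a quick sanity check; exact version strings will vary):

```bash
# Each command should print a version rather than "command not found"
git --version
python --version     # expect Python 3.10.9
ffmpeg -version
nvcc --version       # CUDA Toolkit compiler
```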
This installation uses PyTorch 2.6.0 with CUDA 12.6 for GTX 10XX - RTX 30XX, and PyTorch 2.7.1 with CUDA 12.8 for RTX 40XX - 50XX; both combinations are well tested and stable. Unless you absolutely need PyTorch compilation (with RTX 50XX), PyTorch 2.8.0 is not recommended, as some system RAM memory leaks have been observed when switching models.
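If you are unsure which of the two install paths below applies, nvidia-smi reports the GPU model and driver version; match the name against the GTX 10XX - RTX 30XX and RTX 40XX - 50XX ranges above:

```bash
# Example output: "NVIDIA GeForce RTX 3090, 560.xx"
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```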
Windows installation: first, create a folder named Wan2GP, open it, right-click and select "Open in Terminal", then copy and paste the following commands one at a time.
```bash
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
```
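Before installing anything into it, you can confirm the new environment is active and uses the expected interpreter (a sanity check, not one of the original steps):

```bash
conda info --envs    # the wan2gp env should be marked active (*)
python --version     # expect Python 3.10.9
```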
For GTX 10XX - RTX 30XX (PyTorch 2.6.0 / CUDA 12.6), base install:

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt
```
Or, with SageAttention 1:

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -U "triton-windows<3.3"
pip install sageattention==1.0.6
pip install -r requirements.txt
```
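To confirm SageAttention is importable afterwards, a minimal check (a verification sketch, not part of the guide):

```bash
python -c "import sageattention; print('sageattention OK')"
```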
Or, with SageAttention 2 (prebuilt Windows wheel):

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -U "triton-windows<3.3"
pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.1.1-windows/sageattention-2.1.1+cu126torch2.6.0-cp310-cp310-win_amd64.whl
pip install -r requirements.txt
```
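After any of these installs you can verify that PyTorch sees the GPU and was built against the expected CUDA version:

```bash
# Should print the torch version, True, and 12.6
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.cuda)"
```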
For RTX 40XX - 50XX (PyTorch 2.7.1 / CUDA 12.8), with SageAttention 2 and optional Flash Attention:

```bash
pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128
pip install -U "triton-windows<3.4"
pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.2.0-windows/sageattention-2.2.0+cu128torch2.7.1-cp310-cp310-win_amd64.whl
pip install -r requirements.txt
# Optional: Flash Attention (prebuilt Windows wheel)
pip install https://github.com/Redtash1/Flash_Attention_2_Windows/releases/download/v2.7.0-v2.7.4/flash_attn-2.7.4.post1+cu128torch2.7.0cxx11abiFALSE-cp310-cp310-win_amd64.whl
```
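A quick import check for both optional attention packages (flash_attn is the module name installed by the flash-attn wheel):

```bash
python -c "import sageattention, flash_attn; print('attention packages OK')"
```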
Linux installation: clone the repository and set up the environment:

```bash
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
```
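If you prefer Python venv over Miniconda (the requirements list allows either), an equivalent setup might look like this; the directory name wan2gp is arbitrary:

```bash
# Requires a python3.10 interpreter on PATH
python3.10 -m venv wan2gp
source wan2gp/bin/activate
```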
For GTX 10XX - RTX 30XX (PyTorch 2.6.0 / CUDA 12.6), base install:

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt
```
Or, with SageAttention 1:

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -U "triton<3.3"
pip install sageattention==1.0.6
pip install -r requirements.txt
```
Or, with SageAttention 2 (built from source):

```bash
pip install torch==2.6.0+cu126 torchvision==0.21.0+cu126 torchaudio==2.6.0+cu126 --index-url https://download.pytorch.org/whl/cu126
pip install -U "triton<3.3"
python -m pip install "setuptools<=75.8.2" --force-reinstall
git clone https://github.com/thu-ml/SageAttention
cd SageAttention
pip install -e .
# Return to the Wan2GP directory so the correct requirements.txt is used
cd ..
pip install -r requirements.txt
```
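Building SageAttention from source compiles CUDA kernels, so it needs the CUDA Toolkit's nvcc on PATH and can take several minutes; a quick post-build check (a sketch, assuming the build completed without errors):

```bash
python -c "import sageattention; print('sageattention built OK')"
```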
For RTX 40XX - 50XX (PyTorch 2.7.1 / CUDA 12.8):

```bash
pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128
pip install -U "triton<3.4"
python -m pip install "setuptools<=75.8.2" --force-reinstall
git clone https://github.com/thu-ml/SageAttention
cd SageAttention
pip install -e .
# Return to the Wan2GP directory so the correct requirements.txt is used
cd ..
pip install -r requirements.txt
# Optional: Flash Attention (compiled from source; can take a while)
pip install flash-attn==2.7.2.post1
```
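If the flash-attn source build exhausts system RAM (a common failure mode when compiling it), capping the number of parallel compile jobs with MAX_JOBS is the usual workaround; this is general flash-attn build advice rather than something from this guide:

```bash
# Fewer parallel nvcc jobs: slower build, much lower peak RAM
MAX_JOBS=4 pip install flash-attn==2.7.2.post1
```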
Available attention modes:
- SDPA (default): available out of the box with PyTorch
- Sage: 30% speed boost with a small quality cost
- Sage2: 40% speed boost
- Flash: good performance, but may be complex to install on Windows
Supported modes by GPU generation:
- GTX 10XX: SDPA
- RTX 20XX: SDPA, Sage1
- RTX 30XX, 40XX: SDPA, Flash Attention, Xformers, Sage1, Sage2
- RTX 50XX: SDPA, Flash Attention, Xformers, Sage2
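The troubleshooting section below selects a mode with `python wgp.py --attention sdpa`; assuming the other modes follow the same flag (an extrapolation, not confirmed by this section), switching looks like:

```bash
# Assumed mode names, following the --attention sdpa example below
python wgp.py --attention sage
python wgp.py --attention sage2
python wgp.py --attention flash
```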
Choose a profile based on your hardware:
- Profile 3 (LowRAM_HighVRAM): loads the entire model into VRAM; requires 24GB of VRAM for the 8-bit quantized 14B model
- Profile 4 (LowRAM_LowVRAM): the default; loads model parts as needed, slower but with a lower VRAM requirement
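Assuming profiles are chosen with a --profile flag (a plausible companion to the --attention flag shown below, not confirmed by this section):

```bash
# Profile 3: whole model in VRAM (~24GB needed); Profile 4: default, lower VRAM
python wgp.py --profile 3
```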
If Sage attention doesn't work:
- Check that Triton is properly installed
- Clear the Triton cache (see the sketch after this list)
- Fall back to SDPA attention: `python wgp.py --attention sdpa`
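Triton keeps its compiled-kernel cache under the user home directory by default, so clearing it (per the tip above) might look like this, assuming the default cache location:

```bash
# Linux / macOS
rm -rf ~/.triton/cache
# Windows (PowerShell): Remove-Item -Recurse -Force "$env:USERPROFILE\.triton\cache"
```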
If you run into memory issues:
- Use lower resolution or shorter videos
- Enable quantization (the default)
- Use Profile 4 for lower VRAM usage
- Consider using the 1.3B models instead of the 14B models
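Putting these tips together, a conservative low-VRAM launch might combine the default profile with SDPA attention (flags as used or assumed above):

```bash
python wgp.py --profile 4 --attention sdpa
```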
For more troubleshooting, see TROUBLESHOOTING.md