LTX-2.3 training requires a lot of VRAM #180
Open
Description
I'm running the trainer on a 5090 under Debian testing, inside a Docker container. I remember training 2.0 just fine, but 2.3 needs so much VRAM that I have to close everything else, and it still uses 31 GB, almost OOMing. I use the same config, based on ltx2_av_lora_low_vram.yaml:

- no validation, no samples, no audio
- only images in the dataset
- 8-bit quantization and gradient checkpointing enabled
- LoRA rank 32 (not 16, but that worked before)

Could it be because of the bigger connector? I would assume it can be unloaded after the latents are cached?
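The idea described above (dropping the connector once latents are cached, so it doesn't occupy VRAM during training) can be sketched generically in PyTorch. Everything here is a stand-in, not the trainer's actual API: the `connector` module, sizes, and dataset are hypothetical.

```python
import gc
import torch
from torch import nn

# Hypothetical stand-in for the connector: a large module that is only
# needed while caching latents, not during the LoRA training loop itself.
connector = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))

# Cache latents once, without building an autograd graph, so neither the
# connector's weights nor its activations need to stay resident afterwards.
dataset = [torch.randn(1, 512) for _ in range(4)]
with torch.no_grad():
    cached_latents = [connector(x).cpu() for x in dataset]

# Drop the connector and reclaim its memory before training starts.
del connector
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()

print(len(cached_latents), tuple(cached_latents[0].shape))
```

Whether the trainer actually releases the connector this way after caching is exactly the question; if it keeps the module referenced somewhere, the memory won't be freed.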