The --max_loaded_models option was deprecated and removed by InvokeAI upstream in version 3. You can use the --ram and --vram flags instead to control how much RAM and VRAM are allocated to the model cache.
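For reference, a minimal launch sketch using those flags; the entry-point name and the GB values below are assumptions, so substitute whatever command nixified-ai exposes on your system and sizes that fit your hardware:

```sh
# Hedged sketch: cap the model cache with the --ram / --vram flags
# mentioned above. "invokeai-web" and the numeric values are
# illustrative, not the confirmed nixified-ai invocation.
invokeai-web --ram 7.5 --vram 0.5
```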
Issue
While switching models, the "VRAM in use" figure keeps increasing until the application crashes with the message "CUDA out of memory".
Potential Solution
The invoke.ai application appears to have a "--max_loaded_models 1" option, but it does not work when launched through nixified-ai.
Steps to Reproduce
Clarification/Request
How can I ensure that VRAM doesn't keep increasing when switching models?