Hi,
Thanks for the nice work! I tested it on some small images and the preliminary results look great!
But when I tried to run inference on a large 8k x 8k image with the provided weights, I ran into a GPU memory issue
(my maximum GPU RAM per core is 32 GB):
RuntimeError: CUDA out of memory. Tried to allocate 1.49 GiB (GPU 1; 31.75 GiB total capacity; 29.74 GiB already allocated; 673.75 MiB free; 29.77 GiB reserved in total by PyTorch)
Do you have any suggestions on how to resolve this?
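For context, a common workaround for this kind of OOM is to run inference tile-by-tile and stitch the outputs back together, so peak GPU memory scales with the tile size rather than the full 8k x 8k image. Below is a minimal, hedged sketch of that idea; `tiled_inference`, `infer_tile`, and the tile size are all hypothetical names/values, not part of this repository, and in PyTorch you would wrap the per-tile call in `torch.no_grad()` and move each tile to the GPU individually:

```python
import numpy as np

def tiled_inference(image, infer_tile, tile=1024):
    """Run infer_tile on non-overlapping tiles and stitch the results.

    image: (H, W, C) array; infer_tile: a callable that maps a tile to an
    output tile of the same spatial size. With PyTorch, infer_tile would
    move the tile to the GPU, call the model under torch.no_grad(), and
    return the result on the CPU, so only one tile lives on the GPU at a
    time. (This is an illustrative sketch, not the repo's actual API.)
    """
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = infer_tile(patch)
    return out

# Demo with an identity "model" on a tiny array.
img = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
res = tiled_inference(img, lambda t: t, tile=2)
assert np.array_equal(res, img)
```

Note that networks with large receptive fields may show seams at tile borders; overlapping tiles with blended borders is the usual refinement.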