
out of memory error in Windows #63

Closed
thelatinodancer opened this issue Nov 27, 2024 · 1 comment

@thelatinodancer

I have a 3060 Ti with 8 GB of VRAM.
When I run it, I get this:

Loading personal and system profiles took 953ms.
(base) PS C:\Windows\system32> e:
(base) PS E:\> cd MagicQuill
(base) PS E:\MagicQuill> conda activate MagicQuill
(MagicQuill) PS E:\MagicQuill> $env:CUDA_VISIBLE_DEVICES=0; python gradio_run.py
Total VRAM 8192 MB, total RAM 32607 MB
pytorch version: 2.1.2+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : native
Using pytorch cross attention
['E:\MagicQuill', 'C:\ProgramData\miniconda3\envs\MagicQuill\python310.zip', 'C:\ProgramData\miniconda3\envs\MagicQuill\DLLs', 'C:\ProgramData\miniconda3\envs\MagicQuill\lib', 'C:\ProgramData\miniconda3\envs\MagicQuill', 'C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages', '__editable__.llava-1.2.2.post1.finder.__path_hook__', 'E:\MagicQuill\MagicQuill']
C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: 'resume_download' is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use 'force_download=True'.
warnings.warn(
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set 'max_memory' in to a higher value to use more memory (at your own risk).
100%|██████████| 3/3 [13.55s/it]
Some weights of the model checkpoint at E:\MagicQuill\models\llava-v1.5-7b-finetune-clean were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_tower.vision_tower.vision_model.embeddings.position_embedding.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.0.layer_norm1.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.0.layer_norm1.weight', ... (the rest of the model.vision_tower.* encoder layer weights) ...]

- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set 'max_memory' in to a higher value to use more memory (at your own risk).
Loading checkpoint from: E:\MagicQuill\models\checkpoints\SD1.5\realisticVisionV60B1_v51VAE.safetensors
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
self.brushnet_loader.inpaint_files: {'brushnet\random_mask_brushnet_ckpt\diffusion_pytorch_model.safetensors': 'E:\MagicQuill\models\inpaint', 'brushnet\segmentation_mask_brushnet_ckpt\diffusion_pytorch_model.safetensors': 'E:\MagicQuill\models\inpaint'}
BrushNet model file: E:\MagicQuill\models\inpaint\brushnet\random_mask_brushnet_ckpt\diffusion_pytorch_model.safetensors
BrushNet model type: SD1.5
BrushNet model file: E:\MagicQuill\models\inpaint\brushnet\random_mask_brushnet_ckpt\diffusion_pytorch_model.safetensors
Traceback (most recent call last):
    File "E:\MagicQuill\gradio_run.py", line 22, in
    scribbleColorEditModel = ScribbleColorEditModel()
    File "E:\MagicQuill\MagicQuill\scribble_color_edit.py", line 30, in____init__
    self.load_models('SD1.5', 'floatl6')
    File "E:\MagicQuill\MagicQuill\scribble_color_edit.py", line 43, in load_models
    self.brushnet = self.brushnet_loader.brushnet_loading(brushnet_name, dtype)[0]
    File "E:\MagicQuill\MagicQuill\brushnet_nodes.py", line 105, in brushnet_loading
    brushnet_model = load_checkpoint_and_dispatch(
    File "C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages\accelerate\big_modeling.py", line 613, in load_checkpoint_and_dispatch load_checkpoint_in_model(
    File "C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages\accelerate\utils\modeling.py", line 1821, in load_checkpoint_in_model set_module_tensor_to_device(
    File "C:\ProgramData\miniconda3\envs\MagicQuill\lib\site-packages\accelerate\utils\modeling.py", line 381, in set_module_tensor_to_device
    value = value.to(dtype)
    torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 8.00 GiB of which 0 bytes is free. Of the allocated memor y 7.16 GiB is allocated by PyTorch, and 161.22 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    (MagicQuill) PS E:\MagicQuill>
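As a side note on the last line of the traceback: the PYTORCH_CUDA_ALLOC_CONF hint can be tried without touching the repo, by setting the variable before PyTorch makes its first CUDA allocation. A minimal sketch, assuming it is placed at the very top of gradio_run.py (the 128 MiB split size is only an illustration, not a value from MagicQuill):

import os

# Allocator option from the error message; must be set before the first
# CUDA allocation, i.e. before any torch CUDA work happens in the script.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

if torch.cuda.is_available():
    # Print free vs. total memory on GPU 0 to see how much headroom is left.
    free, total = torch.cuda.mem_get_info(0)
    print(f"GPU 0: {free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB")

The same thing can be done from PowerShell before launching, e.g. $env:PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128"; this only reduces fragmentation and will not help if the model simply needs more than the 8 GB available.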
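Likewise, the "You can set 'max_memory' in to a higher value" lines come from accelerate and refer to the max_memory argument of load_checkpoint_and_dispatch (the call visible at brushnet_nodes.py line 105 in the traceback). A hedged sketch of what that argument does in general, using a toy module and made-up per-device limits rather than anything from MagicQuill:

import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# Tiny stand-in module; the real call dispatches the BrushNet .safetensors
# checkpoint from the log above instead.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 16)

torch.save(TinyNet().state_dict(), "tiny.pt")  # stand-in checkpoint file

with init_empty_weights():
    model = TinyNet()  # parameters created on the meta device, no memory used yet

# Illustrative per-device caps: leave headroom on an 8 GB GPU and let the
# rest spill over to system RAM. Values are hypothetical.
limits = {"cpu": "24GiB"}
if torch.cuda.is_available():
    limits[0] = "7GiB"

model = load_checkpoint_and_dispatch(
    model, checkpoint="tiny.pt", device_map="auto", max_memory=limits
)
print(model.fc.weight.device)  # shows where accelerate placed the weights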
@zliucz
Member

zliucz commented Nov 27, 2024

Check this issue #57.

zliucz closed this as completed Dec 5, 2024