Unique3D MVDiffusion Model headdim should be in [64, 96, 128] #424

Open
MikhalenkaA opened this issue Jan 12, 2025 · 2 comments
@MikhalenkaA

Hi,
I'm using a workflow to generate 4 views from an image, but I get the error below and don't know what to do with it.


# ComfyUI Error Report
## Error Details
- **Node ID:** 12
- **Node Type:** [Comfy3D] Unique3D MVDiffusion Model
- **Exception Type:** AssertionError
- **Exception Message:** headdim should be in [64, 96, 128].
## Stack Trace

File "C:\Users\User\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)

File "C:\Users\User\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\comfyui-3d-pack\nodes.py", line 2867, in run_model
image_pils = unique3d_pipe(
^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_pipeline\unifield_pipeline_img2mvimg.py", line 255, in call
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, condition_latents=cond_latents, noisy_condition_input=False, cond_pixels_clip=image_pixels).sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\unifield_processor.py", line 115, in forward
return self.forward_hook(super().forward, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\unifield_processor.py", line 460, in unet_forward_hook
return raw_forward(sample, timestep, encoder_hidden_states, *args, cross_attention_kwargs=cross_attention_kwargs, class_labels=class_labels, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1216, in forward
sample, res_samples = downsample_block(
^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1334, in forward
hidden_states = attn(
^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
hidden_states = block(
^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\attention.py", line 514, in forward
attn_output = self.attn1(
^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\attention_processors.py", line 377, in forward
return self.processor(
^^^^^^^^^^^^^^^

File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\attention_processors.py", line 230, in call
hidden_states = self.chained_proc(attn, hidden_states, encoder_hidden_states, attention_mask, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\attention_processor.py", line 3286, in call
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\sageattention\core.py", line 82, in sageattn
assert headdim in [64, 96, 128], "headdim should be in [64, 96, 128]."
^^^^^^^^^^^^^^^^^^^^^^^^

## System Information
- **ComfyUI Version:** 0.3.10
- **Arguments:** main.py
- **OS:** nt
- **Python Version:** 3.11.11 | packaged by conda-forge | (main, Dec  5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1+cu121
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 25756696576
  - **VRAM Free:** 16405959762
  - **Torch VRAM Total:** 7683964928
  - **Torch VRAM Free:** 84374610

## Logs

2025-01-12T12:58:54.402035 - [START] Security scan2025-01-12T12:58:54.402035 -
2025-01-12T12:58:55.195587 - [DONE] Security scan2025-01-12T12:58:55.195587 -
2025-01-12T12:58:55.291609 - ## ComfyUI-Manager: installing dependencies done.2025-01-12T12:58:55.291609 -
2025-01-12T12:58:55.291609 - ** ComfyUI startup time:2025-01-12T12:58:55.291609 - 2025-01-12T12:58:55.291609 - 2025-01-12 12:58:55.2912025-01-12T12:58:55.291609 -
2025-01-12T12:58:55.291609 - ** Platform:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.292610 - Windows2025-01-12T12:58:55.292610 -
2025-01-12T12:58:55.292610 - ** Python version:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.292610 - 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)]2025-01-12T12:58:55.292610 -
2025-01-12T12:58:55.292610 - ** Python executable:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.292610 - C:\Users\User\AppData\Roaming\micromamba\envs\3d12\python.exe2025-01-12T12:58:55.292610 -
2025-01-12T12:58:55.292610 - ** ComfyUI Path:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.292610 - C:\Users\User\ComfyUI2025-01-12T12:58:55.292610 -
2025-01-12T12:58:55.292610 - ** User directory:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.292610 - C:\Users\User\ComfyUI\user2025-01-12T12:58:55.292610 -
2025-01-12T12:58:55.292610 - ** ComfyUI-Manager config path:2025-01-12T12:58:55.292610 - 2025-01-12T12:58:55.293609 - C:\Users\User\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-01-12T12:58:55.293609 -
2025-01-12T12:58:55.293609 - ** Log path:2025-01-12T12:58:55.293609 - 2025-01-12T12:58:55.293609 - C:\Users\User\ComfyUI\user\comfyui.log2025-01-12T12:58:55.293609 -
2025-01-12T12:58:56.347478 -
Prestartup times for custom nodes:
2025-01-12T12:58:56.347478 - 2.4 seconds: C:\Users\User\ComfyUI\custom_nodes\ComfyUI-Manager
2025-01-12T12:58:56.347478 -
2025-01-12T12:58:57.706924 - Total VRAM 24564 MB, total RAM 130265 MB
2025-01-12T12:58:57.706924 - pytorch version: 2.5.1+cu121
2025-01-12T12:58:59.271917 - xformers version: 0.0.28.post3
2025-01-12T12:58:59.271917 - Set vram state to: NORMAL_VRAM
2025-01-12T12:58:59.271917 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-01-12T12:58:59.414857 - Using xformers attention
2025-01-12T12:59:00.583062 - [Prompt Server] web root: C:\Users\User\ComfyUI\web
2025-01-12T12:59:02.291715 - C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\kiui\nn\__init__.py:31: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

2025-01-12T12:59:02.291715 - C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\kiui\nn\__init__.py:37: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead.
@torch.cuda.amp.custom_bwd

2025-01-12T12:59:02.494552 - Warn!: xFormers is available (Attention)
2025-01-12T12:59:04.076372 - Warn!: C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(

2025-01-12T12:59:05.870914 - Warn!: C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\pytorch3d\vis\__init__.py:16: UserWarning: Cannot import plotly-based visualization code. Please install plotly to enable (pip install plotly).
warnings.warn(

2025-01-12T12:59:05.877981 - [SPARSE] Backend: spconv, Attention: xformers2025-01-12T12:59:05.877981 -
2025-01-12T12:59:06.268835 - Added trellis path to sys.path: C:\Users\User\ComfyUI\custom_nodes\ComfyUI-IF_Trellis\trellis
2025-01-12T12:59:06.304629 - [SPARSE] Backend: spconv, Attention: flash_attn2025-01-12T12:59:06.304629 -
2025-01-12T12:59:06.877406 - Warp 1.5.1 initialized:
CUDA Toolkit 12.6, Driver 12.7
Devices:
"cpu" : "AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD"
"cuda:0" : "NVIDIA GeForce RTX 4090" (24 GiB, sm_89, mempool enabled)
Kernel cache:
C:\Users\User\AppData\Local\NVIDIA\warp\Cache\1.5.12025-01-12T12:59:06.877406 -
2025-01-12T12:59:06.887017 - [SPARSE][CONV] spconv algo: native2025-01-12T12:59:06.887017 -
2025-01-12T12:59:06.889017 - Trellis package imported successfully
2025-01-12T12:59:06.907760 - ----------Jake Upgrade Nodes Loaded----------2025-01-12T12:59:06.907760 -
2025-01-12T12:59:06.924262 - Total VRAM 24564 MB, total RAM 130265 MB
2025-01-12T12:59:06.924262 - pytorch version: 2.5.1+cu121
2025-01-12T12:59:06.924262 - xformers version: 0.0.28.post3
2025-01-12T12:59:06.924262 - Set vram state to: NORMAL_VRAM
2025-01-12T12:59:06.924262 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-01-12T12:59:06.949905 - ### Loading: ComfyUI-Manager (V3.6.5)
2025-01-12T12:59:07.053324 - ### ComfyUI Version: v0.3.10-47-gee8a7ab6 | Released on '2025-01-11'
2025-01-12T12:59:07.675764 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-12T12:59:07.682444 -
Import times for custom nodes:
2025-01-12T12:59:07.682444 - 0.0 seconds: C:\Users\User\ComfyUI\custom_nodes\websocket_image_save.py
2025-01-12T12:59:07.682444 - 0.0 seconds: C:\Users\User\ComfyUI\custom_nodes\comfyui_ipadapter_plus
2025-01-12T12:59:07.682444 - 0.0 seconds: C:\Users\User\ComfyUI\custom_nodes\comfyui-jakeupgrade
2025-01-12T12:59:07.683444 - 0.0 seconds: C:\Users\User\ComfyUI\custom_nodes\comfyui-kjnodes
2025-01-12T12:59:07.683444 - 0.1 seconds: C:\Users\User\ComfyUI\custom_nodes\comfyui-videohelpersuite
2025-01-12T12:59:07.683444 - 0.6 seconds: C:\Users\User\ComfyUI\custom_nodes\ComfyUI-IF_Trellis
2025-01-12T12:59:07.683444 - 0.6 seconds: C:\Users\User\ComfyUI\custom_nodes\ComfyUI-Manager
2025-01-12T12:59:07.683444 - 5.2 seconds: C:\Users\User\ComfyUI\custom_nodes\comfyui-3d-pack
2025-01-12T12:59:07.683444 -
2025-01-12T12:59:07.690460 - Starting server

2025-01-12T12:59:07.690460 - To see the GUI go to: http://127.0.0.1:8188
2025-01-12T12:59:07.718028 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-12T12:59:07.761128 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-12T12:59:07.783145 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-12T12:59:07.807508 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-12T12:59:07.808512 - FETCH DATA from: C:\Users\User\ComfyUI\user\default\ComfyUI-Manager\cache\2233941102_nodes_page_1_limit_1000.json2025-01-12T12:59:07.808512 - 2025-01-12T12:59:07.823522 - [DONE]2025-01-12T12:59:07.823522 -
2025-01-12T12:59:07.831144 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
2025-01-12T12:59:07.831144 - FETCH DATA from: C:\Users\User\ComfyUI\user\default\ComfyUI-Manager\cache\1514988643_custom-node-list.json2025-01-12T12:59:07.831144 - 2025-01-12T12:59:08.043973 - [DONE]2025-01-12T12:59:08.043973 -
2025-01-12T12:59:09.626360 - FETCH DATA from: C:\Users\User\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json2025-01-12T12:59:09.626360 - 2025-01-12T12:59:09.631363 - [DONE]2025-01-12T12:59:09.631363 -
2025-01-12T12:59:11.746035 - got prompt
2025-01-12T12:59:11.752082 - [ATTENTION] Set backend to: sage
2025-01-12T12:59:11.752082 - Environment configured - Backend: spconv, Attention: sage, Smooth K: True, SpConv Algo: implicit_gemm
2025-01-12T12:59:11.752082 - [ATTENTION] Set backend to: sage
2025-01-12T12:59:11.752082 - Loading local model: ss_dec_conv3d_16l8_fp162025-01-12T12:59:11.752082 -
2025-01-12T12:59:12.109529 - Loading local model: ss_flow_img_dit_L_16l8_fp162025-01-12T12:59:12.109529 -
2025-01-12T12:59:15.532600 - Loading local model: slat_dec_gs_swin8_B_64l8gs32_fp162025-01-12T12:59:15.532600 -
2025-01-12T12:59:16.079340 - Loading local model: slat_dec_rf_swin8_B_64l8r16_fp162025-01-12T12:59:16.080340 -
2025-01-12T12:59:16.617520 - Loading local model: slat_dec_mesh_swin8_B_64l8m256c_fp162025-01-12T12:59:16.618520 -
2025-01-12T12:59:20.595219 - Loading local model: slat_flow_img_dit_L_64l8p2_fp162025-01-12T12:59:20.595219 -
2025-01-12T12:59:24.231897 - Loading DINOv2 model from C:\Users\User\ComfyUI\models\classifiers\dinov2_vitl14_reg.pth2025-01-12T12:59:24.231897 -
2025-01-12T12:59:24.872516 - Using cache found in C:\Users\User/.cache\torch\hub\facebookresearch_dinov2_main
2025-01-12T12:59:24.879707 - Warn!: C:\Users\User/.cache\torch\hub\facebookresearch_dinov2_main\dinov2\layers\swiglu_ffn.py:43: UserWarning: xFormers is available (SwiGLU)
warnings.warn("xFormers is available (SwiGLU)")

2025-01-12T12:59:24.880707 - Warn!: C:\Users\User/.cache\torch\hub\facebookresearch_dinov2_main\dinov2\layers\attention.py:27: UserWarning: xFormers is available (Attention)
warnings.warn("xFormers is available (Attention)")

2025-01-12T12:59:24.882158 - Warn!: C:\Users\User/.cache\torch\hub\facebookresearch_dinov2_main\dinov2\layers\block.py:33: UserWarning: xFormers is available (Block)
warnings.warn("xFormers is available (Block)")

2025-01-12T12:59:24.884161 - using MLP layer as FFN
2025-01-12T12:59:28.135140 -
Fetching 11 files: 0%| | 0/11 [00:00<?, ?it/s]2025-01-12T12:59:28.139654 -
Fetching 11 files: 100%|█████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 2437.26it/s]2025-01-12T12:59:28.139654 -
2025-01-12T12:59:29.968321 -
Loading pipeline components...: 80%|█████████████████████████████████████████▌ | 4/5 [00:01<00:00, 2.02it/s]2025-01-12T12:59:29.973322 -
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.73it/s]2025-01-12T12:59:29.973322 -
2025-01-12T12:59:29.973322 - You have disabled the safety checker for <class 'Unique3D.custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2mvimg.StableDiffusionImage2MVCustomPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
2025-01-12T12:59:36.391701 - �[34m[Comfy3D] �[0m[Load_Unique3D_Custom_UNet] loaded unet ckpt from C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Checkpoints\Diffusers\MrForExample/Unique3D\image2mvimage\unet_state_dict.pth�[0m2025-01-12T12:59:36.391701 -
2025-01-12T12:59:36.963542 -
0%| | 0/64 [00:00<?, ?it/s]2025-01-12T12:59:36.972316 - Warn!: Warning! condition_latents is not None, but self_attn_ref is not enabled! This warning will only be raised once.
2025-01-12T12:59:37.002346 -
0%| | 0/64 [00:00<?, ?it/s]2025-01-12T12:59:37.002346 -
2025-01-12T12:59:37.014899 - !!! Exception during processing !!! headdim should be in [64, 96, 128].
2025-01-12T12:59:37.020910 - Traceback (most recent call last):
File "C:\Users\User\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\User\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\comfyui-3d-pack\nodes.py", line 2867, in run_model
image_pils = unique3d_pipe(
^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_pipeline\unifield_pipeline_img2mvimg.py", line 255, in call
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings, condition_latents=cond_latents, noisy_condition_input=False, cond_pixels_clip=image_pixels).sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\unifield_processor.py", line 115, in forward
return self.forward_hook(super().forward, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\unifield_processor.py", line 460, in unet_forward_hook
return raw_forward(sample, timestep, encoder_hidden_states, *args, cross_attention_kwargs=cross_attention_kwargs, class_labels=class_labels, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1216, in forward
sample, res_samples = downsample_block(
^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1334, in forward
hidden_states = attn(
^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
hidden_states = block(
^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\attention.py", line 514, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\attention_processors.py", line 377, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "C:\Users\User\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\custum_3d_diffusion\custum_modules\attention_processors.py", line 230, in call
hidden_states = self.chained_proc(attn, hidden_states, encoder_hidden_states, attention_mask, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\diffusers\models\attention_processor.py", line 3286, in call
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Roaming\micromamba\envs\3d12\Lib\site-packages\sageattention\core.py", line 82, in sageattn
assert headdim in [64, 96, 128], "headdim should be in [64, 96, 128]."
^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: headdim should be in [64, 96, 128].

2025-01-12T12:59:37.023911 - Prompt executed in 25.28 seconds

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":16,"last_link_id":26,"nodes":[{"id":7,"type":"LoadImage","pos":[-476.92041015625,929.7394409179688],"size":[315,314.0000305175781],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[12],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":[5],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["iddqd1794_3d_catoon_arabian_man_on_white_background_--style_r_f20d9994-4e5a-4772-a0c2-d3d03ade02dd_3 (1).png","image"]},{"id":11,"type":"SaveImage","pos":[979.765625,1017.4754638671875],"size":[676.5562133789062,636.1328735351562],"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["Unique3D/RGB/rgb"]},{"id":15,"type":"Preview3D","pos":[1396.89599609375,375.04693603515625],"size":[315,550],"flags":{},"order":11,"mode":0,"inputs":[{"name":"model_file","type":"STRING","link":17,"widget":{"name":"model_file"}}],"outputs":[],"properties":{"Node name for S&R":"Preview3D"},"widgets_values":["ar/ar.glb",true,"perspective","front","original","#000000",10,"original",75,null]},{"id":1,"type":"IF_TrellisCheckpointLoader","pos":[420,360],"size":[315,202],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"model","type":"TRELLIS_MODEL","links":[16],"slot_index":0}],"properties":{"Node name for S&R":"IF_TrellisCheckpointLoader"},"widgets_values":["TRELLIS-image-large","dinov2_vitl14_reg",true,"sage",true,"implicit_gemm","cuda"]},{"id":16,"type":"IF_TrellisImageTo3D","pos":[909.5982666015625,391.7608337402344],"size":[340.20001220703125,506],"flags":{},"order":10,"mode":0,"inputs":[{"name":"model","type":"TRELLIS_MODEL","link":16},{"name":"images","type":"IMAGE","link":20},{"name":"masks","type":"MASK","link":null,"shape":7}],"outputs":[{"name":"model_file","type":"STRING","links":[17],"slot_index":0},{"name":"video_path","type":"STRING","links":null},{"name":"texture_image","type":"IMAGE","links":null}],"properties":{"Node name for S&R":"IF_TrellisImageTo3D"},"widgets_values":["multi",1100018185,"randomize",7.5,12,3,12,0.9,1024,"fast",1,"stochastic","ar",false,false,false,false,false]},{"id":14,"type":"[Comfy3D] Load Diffusers Pipeline","pos":[-774.2947998046875,1376.2247314453125],"size":[430.03009033203125,154],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"pipe","type":"DIFFUSERS_PIPE","links":[7],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"[Comfy3D] Load Diffusers Pipeline"},"widgets_values":["Unique3DImage2MVCustomPipeline","MrForExample/Unique3D","",false,"image2mvimage"]},{"id":12,"type":"[Comfy3D] Unique3D MVDiffusion Model","pos":[504.6682434082031,997.548828125],"size":[380.4000244140625,222],"flags":{},"order":8,"mode":0,"inputs":[{"name":"unique3d_pipe","type":"DIFFUSERS_PIPE","link":10},{"name":"reference_image","type":"IMAGE","link":26}],"outputs":[{"name":"multiviews","type":"IMAGE","links":[9,20],"slot_index":0,"shape":3},{"name":"orbit_camposes","type":"ORBIT_CAMPOSES","links":null,"shape":3}],"properties":{"Node name for S&R":"[Comfy3D] Unique3D MVDiffusion Model"},"widgets_values":[999,"fixed",2,64,256,4,true]},{"id":6,"type":"InvertMask","pos":[-178.06585693359375,1313.4935302734375],"size":[210,26],"flags":{},"order":4,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":5}],"outputs":[{"name":"MASK","type":"MASK","links":[13],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"InvertMask"},"widgets_values":[]},{"id":13,"type":"Reroute","pos":[-53.25873565673828,901.8922119140625],"size":[75,26],"flags":{},"order":3,"mode":0,"inputs":[{"name":"","type":"","link":12}],"outputs":[{"name":"","type":"IMAGE","links":[8],"slot_index":0}],"properties":{"showOutputText":false,"horizontal":false}},{"id":9,"type":"[Comfy3D] Load Unique3D Custom UNet","pos":[-294.294921875,1376.2247314453125],"size":[315,58],"flags":{},"order":5,"mode":0,"inputs":[{"name":"pipe","type":"DIFFUSERS_PIPE","link":7}],"outputs":[{"name":"pipe","type":"DIFFUSERS_PIPE","links":[6],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"[Comfy3D] Load Unique3D Custom UNet"},"widgets_values":["image2mvimage"]},{"id":10,"type":"[Comfy3D] Image Add Pure Color Background","pos":[110.03157043457031,1441.9537353515625],"size":[315,126],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":8},{"name":"masks","type":"MASK","link":13}],"outputs":[{"name":"images","type":"IMAGE","links":[26],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"[Comfy3D] Image Add Pure Color Background"},"widgets_values":[255,255,255]},{"id":8,"type":"[Comfy3D] Set Diffusers Pipeline Scheduler","pos":[71.65397644042969,1001.0775146484375],"size":[412.3726501464844,58],"flags":{},"order":7,"mode":0,"inputs":[{"name":"pipe","type":"DIFFUSERS_PIPE","link":6}],"outputs":[{"name":"pipe","type":"DIFFUSERS_PIPE","links":[10],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"[Comfy3D] Set Diffusers Pipeline Scheduler"},"widgets_values":["EulerAncestralDiscreteScheduler"]}],"links":[[5,7,1,6,0,"MASK"],[6,9,0,8,0,"DIFFUSERS_PIPE"],[7,14,0,9,0,"DIFFUSERS_PIPE"],[8,13,0,10,0,"IMAGE"],[9,12,0,11,0,"IMAGE"],[10,8,0,12,0,"DIFFUSERS_PIPE"],[12,7,0,13,0,""],[13,6,0,10,1,"MASK"],[16,1,0,16,0,"TRELLIS_MODEL"],[17,16,0,15,0,"STRING"],[20,12,0,16,1,"IMAGE"],[26,10,0,12,1,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.083470594338841,"offset":[738.536780391988,-656.0318300468546]},"ue_links":[],"node_versions":{"comfy-core":"0.3.10","ComfyUI-IF_Trellis":"93a98c5778cab7fa256e3b262de48757dd722246","comfyui-3d-pack":"a35a737676cf3cbb23360d98032870e242dae199"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0},"version":0.4}


## Additional Context
(Please add any additional context or steps to reproduce the error here)
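For context: the assertion is raised inside SageAttention (sageattention/core.py), whose kernel only supports attention head dimensions of 64, 96 or 128, and the traceback shows F.scaled_dot_product_attention resolving to sageattn while the Unique3D UNet runs, so at least one of its attention layers uses a head dimension outside that set. A minimal diagnostic sketch to confirm which head dims the loaded UNet actually uses (it assumes a diffusers-style UNet is reachable as `unique3d_pipe.unet`; the helper name is illustrative, not part of ComfyUI-3D-Pack):

```python
# Hypothetical diagnostic: list the attention head dims of a loaded diffusers UNet.
# Assumes the Unique3D pipeline object is available as `unique3d_pipe`.
from diffusers.models.attention_processor import Attention

def list_head_dims(unet):
    dims = set()
    for _, module in unet.named_modules():
        if isinstance(module, Attention):
            # inner dim of the query projection divided by the number of heads
            dims.add(module.to_q.out_features // module.heads)
    return sorted(dims)

# Any value outside [64, 96, 128] will trip SageAttention's assert above.
# print(list_head_dims(unique3d_pipe.unet))
```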

@tilllt

tilllt commented Feb 1, 2025

Uninstall xformers and sageattention

https://www.reddit.com/r/comfyui/s/K509Vdl1Fp
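If you try this, it is worth confirming the uninstall happened in the same environment that launches ComfyUI (here the micromamba "3d12" env), since a stale install elsewhere keeps the sage path alive. A quick check, assuming both packages were pip-installed, run with that environment's python.exe:

```python
# Verify whether xformers / sageattention are still importable in the
# environment that runs ComfyUI (assumes the packages were pip-installed there).
import importlib.util

for pkg in ("xformers", "sageattention"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'still installed' if found else 'not found'}")
```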

@liaceboy

liaceboy commented Feb 7, 2025

Uninstalling xformers and sageattention has no effect.
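If removing the packages doesn't help, a likely reason is that the sage backend is selected at runtime: the log shows `[ATTENTION] Set backend to: sage` coming from the IF_TrellisCheckpointLoader node (its attention widget is set to "sage" in the attached workflow), and the traceback shows the diffusers attention processor's SDPA call landing in sageattention.core.sageattn. Switching that widget to a different backend (e.g. xformers or the default SDPA) may be enough. Failing that, a hedged workaround is to wrap whatever is currently installed as F.scaled_dot_product_attention so that unsupported head dims fall back to PyTorch's native kernel. This is only a sketch, not code from ComfyUI-3D-Pack, and it relies on the private torch._C._nn entry point:

```python
# Sketch: route head dims SageAttention cannot handle (it asserts
# headdim in [64, 96, 128]) back to PyTorch's built-in SDPA kernel.
# Run this after the sage backend has been patched in, e.g. from a
# small custom node or startup script.
import torch
import torch.nn.functional as F

_current_sdpa = F.scaled_dot_product_attention            # may already be sageattn
_native_sdpa = torch._C._nn.scaled_dot_product_attention  # private, but the built-in kernel

def _sdpa_with_fallback(query, key, value, *args, **kwargs):
    # head dim is the last dimension in the (batch, heads, seq, head_dim) layout
    if query.shape[-1] in (64, 96, 128):
        return _current_sdpa(query, key, value, *args, **kwargs)
    return _native_sdpa(query, key, value, *args, **kwargs)

F.scaled_dot_product_attention = _sdpa_with_fallback
```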
