
'NoneType' object has no attribute 'device' #6468

Open
momoko998 opened this issue Jan 14, 2025 · 5 comments
Labels: User Support (A user needs help with something, probably not a bug.)

Comments

@momoko998

Your question

'NoneType' object has no attribute 'device'

Logs

No response

Other

No response

momoko998 added the User Support label on Jan 14, 2025
@ltdrdata
Collaborator

The information provided is insufficient. Please upload the complete log and the workflow.

@YuanJie2001

File "/usr/local/mydata/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/usr/local/mydata/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/nodes.py", line 67, in encode
return (clip.encode_from_tokens_scheduled(tokens), )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/sd.py", line 148, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/sd.py", line 210, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/text_encoders/flux.py", line 56, in encode_token_weights
l_out, l_pooled = self.clip_l.encode_token_weights(token_weight_pairs_l)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/sd1_clip.py", line 252, in encode
return self(tokens)
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/sd1_clip.py", line 224, in forward
outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/clip_model.py", line 137, in forward
x = self.text_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/clip_model.py", line 113, in forward
x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/clip_model.py", line 70, in forward
x = l(x, mask, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/clip_model.py", line 51, in forward
x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/clip_model.py", line 17, in forward
q = self.q_proj(x)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/ops.py", line 68, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/custom_nodes/ComfyUI-GGUF/ops.py", line 178, in forward_comfy_cast_weights
out = super().forward_comfy_cast_weights(input, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
weight, bias = cast_bias_weight(self, input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/ops.py", line 47, in cast_bias_weight
weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/mydata/ComfyUI/comfy/model_management.py", line 862, in cast_to
if device is None or weight.device == device:
^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'
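
For anyone skimming the trace: the crash is in comfy.model_management.cast_to, which dereferences weight.device before checking whether the layer actually has a weight tensor. Here the layer's weight was evidently never assigned, so s.weight is None and the device comparison raises exactly this AttributeError (see the "clip missing: [...]" lines further down in this thread). A minimal sketch of the failure mode, paraphrased rather than ComfyUI's exact code:

import torch

def cast_to(weight, dtype=None, device=None):
    # Simplified stand-in for comfy/model_management.py:cast_to.
    # The guard reads weight.device before verifying weight exists, so a
    # layer whose weight was never assigned (weight is None) crashes here.
    if device is None or weight.device == device:
        return weight
    return weight.to(device=device, dtype=dtype)

try:
    cast_to(None, device=torch.device("cpu"))
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'device'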

@spyderroque

I'm hitting the same error.

ComfyUI Error Report

Error Details

  • Node ID: 34
  • Node Type: CLIPTextEncode
  • Exception Type: AttributeError
  • Exception Message: 'NoneType' object has no attribute 'device'

Stack Trace

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^

System Information

  • ComfyUI Version: 0.3.10
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 11162222828
    • Torch VRAM Total: 469762048
    • Torch VRAM Free: 59900140

Logs

2025-01-17T15:41:11.069496 - Prompt executed in 99.14 seconds
2025-01-17T15:41:12.792506 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:41:12.794506 - 
0it [00:00, ?it/s]2025-01-17T15:41:12.794506 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:42:56.377758 - 
39it [01:43,  2.66s/it]2025-01-17T15:42:56.407755 - 
39it [01:43,  2.66s/it]2025-01-17T15:42:56.407755 - 
2025-01-17T15:42:56.408753 - Requested to load AutoencodingEngine
2025-01-17T15:42:57.741493 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:42:58.321495 - Prompt executed in 107.09 seconds
2025-01-17T15:43:00.053869 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:43:00.055869 - 
0it [00:00, ?it/s]2025-01-17T15:43:00.055869 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:44:43.617627 - 
39it [01:43,  2.66s/it]2025-01-17T15:44:43.647621 - 
39it [01:43,  2.66s/it]2025-01-17T15:44:43.647621 - 
2025-01-17T15:44:43.649623 - Requested to load AutoencodingEngine
2025-01-17T15:44:44.986054 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:44:45.576056 - Prompt executed in 107.09 seconds
2025-01-17T15:44:47.304240 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:44:47.306240 - 
0it [00:00, ?it/s]2025-01-17T15:44:47.306240 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:46:30.875519 - 
39it [01:43,  2.66s/it]2025-01-17T15:46:30.905519 - 
39it [01:43,  2.66s/it]2025-01-17T15:46:30.905519 - 
2025-01-17T15:46:30.906521 - Requested to load AutoencodingEngine
2025-01-17T15:46:32.244364 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:46:32.817366 - Prompt executed in 107.08 seconds
2025-01-17T15:46:34.544830 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:46:34.546830 - 
0it [00:00, ?it/s]2025-01-17T15:46:34.546830 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:48:18.114368 - 
39it [01:43,  2.66s/it]2025-01-17T15:48:18.144366 - 
39it [01:43,  2.66s/it]2025-01-17T15:48:18.144366 - 
2025-01-17T15:48:18.145368 - Requested to load AutoencodingEngine
2025-01-17T15:48:19.495946 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:48:20.091947 - Prompt executed in 107.11 seconds
2025-01-17T15:48:21.816085 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:48:21.818086 - 
0it [00:00, ?it/s]2025-01-17T15:48:21.818086 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:50:05.395243 - 
39it [01:43,  2.66s/it]2025-01-17T15:50:05.426243 - 
39it [01:43,  2.66s/it]2025-01-17T15:50:05.426243 - 
2025-01-17T15:50:05.427243 - Requested to load AutoencodingEngine
2025-01-17T15:50:06.760785 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:50:07.331785 - Prompt executed in 107.08 seconds
2025-01-17T15:50:09.065400 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:50:09.068401 - 
0it [00:00, ?it/s]2025-01-17T15:50:09.068401 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:52:16.548343 - 
48it [02:07,  2.66s/it]2025-01-17T15:52:16.578337 - 
48it [02:07,  2.66s/it]2025-01-17T15:52:16.578337 - 
2025-01-17T15:52:16.579339 - Requested to load AutoencodingEngine
2025-01-17T15:52:17.929000 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:52:18.516001 - Prompt executed in 131.02 seconds
2025-01-17T15:52:20.251007 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:52:20.253006 - 
0it [00:00, ?it/s]2025-01-17T15:52:20.253006 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:54:27.744755 - 
48it [02:07,  2.66s/it]2025-01-17T15:54:27.774755 - 
48it [02:07,  2.66s/it]2025-01-17T15:54:27.774755 - 
2025-01-17T15:54:27.775755 - Requested to load AutoencodingEngine
2025-01-17T15:54:29.101722 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:54:29.696724 - Prompt executed in 131.02 seconds
2025-01-17T15:54:31.421204 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:54:31.423203 - 
0it [00:00, ?it/s]2025-01-17T15:54:31.423203 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:56:38.886771 - 
48it [02:07,  2.66s/it]2025-01-17T15:56:38.916772 - 
48it [02:07,  2.66s/it]2025-01-17T15:56:38.916772 - 
2025-01-17T15:56:38.917773 - Requested to load AutoencodingEngine
2025-01-17T15:56:40.256625 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:56:40.872626 - Prompt executed in 131.01 seconds
2025-01-17T15:56:42.605830 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T15:56:42.607829 - 
0it [00:00, ?it/s]2025-01-17T15:56:42.607829 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T15:58:50.067739 - 
48it [02:07,  2.66s/it]2025-01-17T15:58:50.097739 - 
48it [02:07,  2.66s/it]2025-01-17T15:58:50.097739 - 
2025-01-17T15:58:50.098740 - Requested to load AutoencodingEngine
2025-01-17T15:58:51.426189 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T15:58:52.030190 - Prompt executed in 130.99 seconds
2025-01-17T16:17:21.055620 - got prompt
2025-01-17T16:17:21.067620 - got prompt
2025-01-17T16:17:21.072621 - Requested to load FluxClipModel_
2025-01-17T16:17:21.083623 - got prompt
2025-01-17T16:17:21.094619 - got prompt
2025-01-17T16:17:23.429619 - loaded completely 9.5367431640625e+25 5313.69970703125 True
2025-01-17T16:17:27.837448 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T16:17:27.839448 - 
0it [00:00, ?it/s]2025-01-17T16:17:27.839448 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T16:19:13.664001 - 
39it [01:45,  2.68s/it]2025-01-17T16:19:13.695001 - 
39it [01:45,  2.71s/it]2025-01-17T16:19:13.695001 - 
2025-01-17T16:19:13.696001 - Requested to load AutoencodingEngine
2025-01-17T16:19:15.068219 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:19:15.659840 - Prompt executed in 114.60 seconds
2025-01-17T16:19:17.445046 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T16:19:17.447047 - 
0it [00:00, ?it/s]2025-01-17T16:19:17.447047 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T16:21:41.778308 - 
54it [02:24,  2.68s/it]2025-01-17T16:21:41.808303 - 
54it [02:24,  2.67s/it]2025-01-17T16:21:41.808303 - 
2025-01-17T16:21:41.809306 - Requested to load AutoencodingEngine
2025-01-17T16:21:43.184454 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:21:43.817455 - Prompt executed in 147.97 seconds
2025-01-17T16:21:45.633070 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T16:21:45.636070 - 
0it [00:00, ?it/s]2025-01-17T16:21:45.636070 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T16:24:10.988995 - 
54it [02:25,  2.69s/it]2025-01-17T16:24:11.019995 - 
54it [02:25,  2.69s/it]2025-01-17T16:24:11.019995 - 
2025-01-17T16:24:11.020995 - Requested to load AutoencodingEngine
2025-01-17T16:24:12.383314 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:24:13.051314 - Prompt executed in 149.07 seconds
2025-01-17T16:24:14.822652 - loaded partially 9614.495805450439 9614.41015625 0
2025-01-17T16:24:14.824652 - 
0it [00:00, ?it/s]2025-01-17T16:24:14.824652 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T16:24:47.385801 - 
12it [00:32,  2.71s/it]2025-01-17T16:24:47.415801 - 
12it [00:32,  2.72s/it]2025-01-17T16:24:47.415801 - 
2025-01-17T16:24:47.415801 - Processing interrupted
2025-01-17T16:24:47.416801 - Prompt executed in 34.19 seconds
2025-01-17T16:29:47.427066 - got prompt
2025-01-17T16:29:47.429066 - Failed to validate prompt for output 9:
2025-01-17T16:29:47.429066 - * DualCLIPLoaderGGUF 26:
2025-01-17T16:29:47.429066 -   - Value not in list: clip_name2: 't5-v1_1-xxl-encoder-Q8_0.gguf' not in ['clip_l.safetensors', 'long_clip\\ViT-L-14-TEXT-detail-improved-hiT-GmP-HF.safetensors', 'long_clip\\ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors', 't5\\google_t5-v1_1-xxl_encoderonly-fp8_e4m3fn.safetensors', 't5\\t5-v1_1-xxl-encoder-Q8_0.gguf', 't5\\t5xxl_fp8_e4m3fn.safetensors']
2025-01-17T16:29:47.429066 - * VAELoader 27:
2025-01-17T16:29:47.429066 -   - Value not in list: vae_name: 'ae.safetensors' not in ['FLUX1\\ae.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1']
2025-01-17T16:29:47.429066 - Output will be ignored
2025-01-17T16:29:47.493066 - Prompt executed in 0.06 seconds
2025-01-17T16:29:55.520912 - got prompt
2025-01-17T16:29:55.777911 - Using pytorch attention in VAE
2025-01-17T16:29:55.778912 - Using pytorch attention in VAE
2025-01-17T16:29:55.845910 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-01-17T16:29:58.549910 - ggml_sd_loader:
2025-01-17T16:29:58.549910 -  8                             169
2025-01-17T16:29:58.550911 -  0                              50
2025-01-17T16:29:58.679912 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:29:58.688911 - clip missing: ['text_projection.weight']
2025-01-17T16:29:58.698910 - Requested to load FluxClipModel_
2025-01-17T16:30:02.865634 - loaded completely 9.5367431640625e+25 5313.69970703125 True
2025-01-17T16:30:05.577567 - Requested to load CLIPVisionModelProjection
2025-01-17T16:30:06.627213 - 0 models unloaded.
2025-01-17T16:30:06.656212 - loaded partially 64.0 63.70623779296875 0
2025-01-17T16:30:09.640736 - Token indices sequence length is longer than the specified maximum sequence length for this model (186 > 77). Running this sequence through the model will result in indexing errors
2025-01-17T16:30:10.224736 - 0 models unloaded.
2025-01-17T16:30:20.969926 - loaded partially 9614.495881744384 9614.41015625 0
2025-01-17T16:31:42.533448 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.73s/it]2025-01-17T16:31:42.533448 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.72s/it]2025-01-17T16:31:42.533448 - 
2025-01-17T16:31:42.534448 - Requested to load AutoencodingEngine
2025-01-17T16:31:43.924200 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:31:44.393201 - Prompt executed in 108.87 seconds
2025-01-17T16:32:08.306367 - got prompt
2025-01-17T16:32:08.324365 - Requested to load FluxClipModel_
2025-01-17T16:32:10.610364 - loaded completely 9.5367431640625e+25 5313.69970703125 True
2025-01-17T16:32:15.763469 - loaded partially 9614.495881744384 9614.41015625 0
2025-01-17T16:33:37.566879 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.70s/it]2025-01-17T16:33:37.566879 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.73s/it]2025-01-17T16:33:37.566879 - 
2025-01-17T16:33:37.567882 - Requested to load AutoencodingEngine
2025-01-17T16:33:38.973277 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:33:39.426278 - Prompt executed in 91.12 seconds
2025-01-17T16:33:45.444881 - got prompt
2025-01-17T16:33:48.189881 - ggml_sd_loader:
2025-01-17T16:33:48.189881 -  8                             169
2025-01-17T16:33:48.189881 -  0                              50
2025-01-17T16:33:48.604884 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:33:48.620882 - clip missing: ['text_projection.weight']
2025-01-17T16:33:48.620882 - Processing interrupted
2025-01-17T16:33:48.621882 - Prompt executed in 3.17 seconds
2025-01-17T16:33:57.623785 - got prompt
2025-01-17T16:33:57.631785 - Requested to load FluxClipModel_
2025-01-17T16:33:59.870784 - loaded completely 9.5367431640625e+25 5313.69970703125 True
2025-01-17T16:34:01.730437 - 0 models unloaded.
2025-01-17T16:34:08.341509 - loaded partially 9614.495919891357 9614.41015625 0
2025-01-17T16:34:08.343510 - 
0it [00:00, ?it/s]2025-01-17T16:34:08.343510 - C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py:591: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  x, info = dpm_solver.dpm_solver_adaptive(x, dpm_solver.t(torch.tensor(sigma_max)), dpm_solver.t(torch.tensor(sigma_min)), order, rtol, atol, h_init, pcoeff, icoeff, dcoeff, accept_safety, eta, s_noise, noise_sampler)
2025-01-17T16:36:09.964219 - 
45it [02:01,  2.69s/it]2025-01-17T16:36:09.995219 - 
45it [02:01,  2.70s/it]2025-01-17T16:36:09.995219 - 
2025-01-17T16:36:09.996221 - Requested to load AutoencodingEngine
2025-01-17T16:36:11.362219 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:36:12.019220 - Prompt executed in 134.39 seconds
2025-01-17T16:38:14.199591 - got prompt
2025-01-17T16:38:16.865590 - ggml_sd_loader:
2025-01-17T16:38:16.865590 -  8                             169
2025-01-17T16:38:16.865590 -  0                              50
2025-01-17T16:38:16.995590 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:38:17.007591 - clip missing: ['text_projection.weight']
2025-01-17T16:38:17.018590 - Requested to load FluxClipModel_
2025-01-17T16:38:19.454590 - loaded completely 9.5367431640625e+25 5313.69970703125 True
2025-01-17T16:38:20.626822 - Requested to load CLIPVisionModelProjection
2025-01-17T16:38:21.470439 - loaded partially 64.0 63.70623779296875 0
2025-01-17T16:38:21.879440 - Token indices sequence length is longer than the specified maximum sequence length for this model (185 > 77). Running this sequence through the model will result in indexing errors
2025-01-17T16:38:29.364075 - loaded partially 9614.495881744384 9614.41015625 0
2025-01-17T16:39:51.253352 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.74s/it]2025-01-17T16:39:51.253352 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [01:21<00:00,  2.73s/it]2025-01-17T16:39:51.253352 - 
2025-01-17T16:39:51.255352 - Requested to load AutoencodingEngine
2025-01-17T16:39:54.170351 - loaded completely 9.5367431640625e+25 159.87335777282715 True
2025-01-17T16:39:54.646351 - Prompt executed in 100.44 seconds
2025-01-17T16:44:31.466523 - got prompt
2025-01-17T16:44:31.531523 - xFormers not available
2025-01-17T16:44:31.533522 - xFormers not available
2025-01-17T16:44:31.536542 - model_path is C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2025-01-17T16:44:31.539539 - using MLP layer as FFN
2025-01-17T16:44:56.303189 - ggml_sd_loader:
2025-01-17T16:44:56.304189 -  8                             169
2025-01-17T16:44:56.304189 -  0                              50
2025-01-17T16:44:56.511031 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:44:56.556031 - clip missing: ['text_model.embeddings.token_embedding.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.1.self_attn.k_proj.weight', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 
'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.final_layer_norm.weight', 'text_model.final_layer_norm.bias', 'text_projection.weight']
2025-01-17T16:44:56.571031 - Requested to load SDXLClipModel
2025-01-17T16:44:56.875031 - loaded completely 9.5367431640625e+25 358.099609375 True
2025-01-17T16:44:56.880031 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:44:56.884035 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:44:56.887032 - Prompt executed in 25.42 seconds
2025-01-17T16:45:08.480468 - got prompt
2025-01-17T16:45:08.489468 - Requested to load SDXLClipModel
2025-01-17T16:45:08.635468 - loaded completely 9.5367431640625e+25 358.099609375 True
2025-01-17T16:45:08.638468 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:45:08.639468 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:45:08.641468 - Prompt executed in 0.16 seconds
2025-01-17T16:45:13.939778 - got prompt
2025-01-17T16:45:14.574776 - Using pytorch attention in VAE
2025-01-17T16:45:14.575778 - Using pytorch attention in VAE
2025-01-17T16:45:14.643779 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-01-17T16:45:14.689777 - model_path is C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2025-01-17T16:45:14.693777 - using MLP layer as FFN
2025-01-17T16:45:22.342261 - ggml_sd_loader:
2025-01-17T16:45:22.342261 -  8                             169
2025-01-17T16:45:22.342261 -  0                              50
2025-01-17T16:45:22.539277 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:45:22.570274 - clip missing: ['text_model.embeddings.token_embedding.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.1.self_attn.k_proj.weight', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 
'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.final_layer_norm.weight', 'text_model.final_layer_norm.bias', 'text_projection.weight']
2025-01-17T16:45:22.582273 - Requested to load SDXLClipModel
2025-01-17T16:45:22.766275 - loaded completely 9.5367431640625e+25 358.099609375 True
2025-01-17T16:45:22.770273 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:45:22.772273 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:45:22.774273 - Prompt executed in 8.83 seconds
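What the traceback actually shows: `cast_bias_weight()` passes `s.weight` into `comfy.model_management.cast_to()`, and that weight is `None`. That lines up with the long `clip missing:` list above — every `text_model.*` tensor of the SDXL CLIP-G encoder failed to load, which usually means the file handed to the GGUF CLIP loader does not contain those weights at all. A minimal sketch of the failure mode (plain PyTorch, not ComfyUI code; the `cast_to` below only mirrors the check at `comfy/model_management.py` line 862):

```python
import torch

def cast_to(weight, dtype=None, device=None):
    # mirrors the branch that raises in comfy/model_management.py line 862
    if device is None or weight.device == device:
        return weight.to(dtype=dtype)
    return weight.to(device=device, dtype=dtype)

layer = torch.nn.Linear(4, 4)
layer.weight = None  # what an unloaded ("clip missing") tensor leaves behind

try:
    cast_to(layer.weight, dtype=torch.float16, device=torch.device("cpu"))
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'device'
```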
2025-01-17T16:45:31.703009 - got prompt
2025-01-17T16:45:31.712007 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:45:31.714008 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:45:31.716008 - Prompt executed in 0.01 seconds
2025-01-17T16:45:45.825737 - got prompt
2025-01-17T16:45:46.107736 - Using pytorch attention in VAE
2025-01-17T16:45:46.108736 - Using pytorch attention in VAE
2025-01-17T16:45:46.175735 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-01-17T16:45:46.222737 - model_path is C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2025-01-17T16:45:46.225736 - using MLP layer as FFN
2025-01-17T16:45:53.917679 - ggml_sd_loader:
2025-01-17T16:45:53.917679 -  8                             169
2025-01-17T16:45:53.917679 -  0                              50
2025-01-17T16:45:54.114688 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-17T16:45:54.146689 - clip missing: [same list of text_model.* / text_projection weights as above]
2025-01-17T16:45:54.159688 - Requested to load SDXLClipModel
2025-01-17T16:45:54.339688 - loaded completely 9.5367431640625e+25 358.099609375 True
2025-01-17T16:45:54.344688 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:45:54.346688 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:45:54.349688 - Prompt executed in 8.52 seconds
2025-01-17T16:46:10.580580 - got prompt
2025-01-17T16:46:10.590582 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:46:10.591580 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:46:10.594583 - Prompt executed in 0.01 seconds
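Side note on the instant (0.01 s) re-failures: those runs print no new load messages, which suggests the half-loaded text encoder is being reused from cache rather than reloaded — re-queueing the prompt cannot fix it. A hypothetical sanity check (illustrative helper, not ComfyUI API) for finding which submodules still carry `weight=None` after loading, i.e. the tensors reported under `clip missing:`:

```python
import torch

def find_unloaded_weights(module: torch.nn.Module):
    # collect submodules whose 'weight' parameter slot exists but was never filled
    missing = []
    for name, sub in module.named_modules():
        if "weight" in sub._parameters and sub._parameters["weight"] is None:
            missing.append(name)
    return missing

# usage sketch: run this on the text encoder the loader produced;
# a non-empty result means encoding will crash in cast_bias_weight()
# print(find_unloaded_weights(clip_model))
```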
2025-01-17T16:46:29.979160 - FETCH DATA from: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
2025-01-17T16:46:30.204981 - []
2025-01-17T16:46:30.204981 - []
2025-01-17T16:46:32.300097 - got prompt
2025-01-17T16:46:32.310096 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:46:32.311097 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:46:32.314096 - Prompt executed in 0.01 seconds
2025-01-17T16:46:40.673136 - got prompt
2025-01-17T16:46:40.682135 - !!! Exception during processing !!! 'NoneType' object has no attribute 'device'
2025-01-17T16:46:40.684135 - Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 148, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 210, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 252, in encode
    return self(tokens)
           ^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 224, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 178, in forward_comfy_cast_weights
    out = super().forward_comfy_cast_weights(input, *args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 47, in cast_bias_weight
    weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 862, in cast_to
    if device is None or weight.device == device:
                         ^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'

2025-01-17T16:46:40.686136 - Prompt executed in 0.01 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":53,"last_link_id":82,"nodes":[{"id":35,"type":"CLIPTextEncode","pos":[-282.43072509765625,-1236.619140625],"size":[422.84503173828125,164.31304931640625],"flags":{"collapsed":true},"order":14,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":54},{"name":"text","type":"STRING","link":34,"widget":{"name":"text"}}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[50],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"],"color":"#232","bgcolor":"#353"},{"id":31,"type":"Text Concatenate","pos":[-711.2109375,-1453.687255859375],"size":[315,178],"flags":{},"order":11,"mode":0,"inputs":[{"name":"text_a","type":"STRING","link":30,"widget":{"name":"text_a"},"shape":7},{"name":"text_b","type":"STRING","link":31,"widget":{"name":"text_b"},"shape":7},{"name":"text_c","type":"STRING","link":null,"widget":{"name":"text_c"},"shape":7},{"name":"text_d","type":"STRING","link":null,"widget":{"name":"text_d"},"shape":7}],"outputs":[{"name":"STRING","type":"STRING","links":[34]}],"properties":{"Node name for S&R":"Text Concatenate"},"widgets_values":[", ","true","","","",""]},{"id":33,"type":"EmptyLatentImage","pos":[261.4167785644531,-1071.1583251953125],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[38],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1]},{"id":30,"type":"Prompt Multiple Styles Selector","pos":[-1188.3936767578125,-1088.9046630859375],"size":[390.5999755859375,150],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"positive_string","type":"STRING","links":[31],"slot_index":0},{"name":"negative_string","type":"STRING","links":null}],"properties":{"Node name for S&R":"Prompt Multiple Styles Selector"},"widgets_values":["No Style","No Style","No Style","No Style"]},{"id":39,"type":"FluxGuidance","pos":[447.15997314453125,-1300.3538818359375],"size":[317.4000244140625,58],"flags":{},"order":16,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":52}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[46],"slot_index":0}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[3.5]},{"id":24,"type":"ControlNetLoader","pos":[-673.463134765625,-1627.462890625],"size":[315,58],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"CONTROL_NET","type":"CONTROL_NET","links":[25],"slot_index":0}],"properties":{"Node name for S&R":"ControlNetLoader"},"widgets_values":["FLUX.1\\InstantX-FLUX1-Dev-Union\\diffusion_pytorch_model.safetensors"]},{"id":38,"type":"SaveImage","pos":[1483.8365478515625,-1199.3660888671875],"size":[210,270],"flags":{},"order":19,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":41}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":46,"type":"PreviewImage","pos":[164.40362548828125,-618.3736572265625],"size":[290.0572509765625,307.89300537109375],"flags":{},"order":13,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":79}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":20,"type":"SetUnionControlNetType","pos":[-298.3676452636719,-1626.533447265625],"size":[315,58],"flags":{},"order":8,"mode":0,"inputs":[{"name":"control_net","type":"CONTROL_NET","link":25}],"outputs":[{"name":"CONTROL_NET","type":"CONTROL_NET","links":[23],"slot_index":0}],"properties":{"Node 
name for S&R":"SetUnionControlNetType"},"widgets_values":["canny/lineart/anime_lineart/mlsd"]},{"id":40,"type":"LoraLoader","pos":[-824.0996704101562,-876.3613891601562],"size":[315,126],"flags":{"collapsed":false},"order":9,"mode":4,"inputs":[{"name":"model","type":"MODEL","link":72},{"name":"clip","type":"CLIP","link":73}],"outputs":[{"name":"MODEL","type":"MODEL","links":[57],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[54,55],"slot_index":1}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["realism_lora_comfy_converted.safetensors",1,1]},{"id":50,"type":"UnetLoaderGGUF","pos":[-1411.489990234375,-872.56103515625],"size":[315,58],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[72],"slot_index":0}],"properties":{"Node name for S&R":"UnetLoaderGGUF"},"widgets_values":["flux1-dev-Q8_0.gguf"]},{"id":51,"type":"DualCLIPLoaderGGUF","pos":[-1493.71728515625,-755.093505859375],"size":[315,106],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[73],"slot_index":0}],"properties":{"Node name for S&R":"DualCLIPLoaderGGUF"},"widgets_values":["t5\\t5-v1_1-xxl-encoder-Q8_0.gguf","clip_l.safetensors","sdxl"]},{"id":49,"type":"VAELoader","pos":[-1429.1104736328125,-537.7783203125],"size":[315,58],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[75,76],"slot_index":0}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["FLUX1\\ae.safetensors"]},{"id":37,"type":"VAEDecode","pos":[1230.182373046875,-951.74853515625],"size":[210,46],"flags":{},"order":18,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":39},{"name":"vae","type":"VAE","link":76}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[41],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":23,"type":"ControlNetApplyAdvanced","pos":[76.75977325439453,-1401.201904296875],"size":[315,186],"flags":{},"order":15,"mode":0,"inputs":[{"name":"positive","type":"CONDITIONING","link":50},{"name":"negative","type":"CONDITIONING","link":51},{"name":"control_net","type":"CONTROL_NET","link":23},{"name":"image","type":"IMAGE","link":82},{"name":"vae","type":"VAE","link":75,"shape":7}],"outputs":[{"name":"positive","type":"CONDITIONING","links":[52],"slot_index":0},{"name":"negative","type":"CONDITIONING","links":[53],"slot_index":1}],"properties":{"Node name for S&R":"ControlNetApplyAdvanced"},"widgets_values":[0.5,0,1]},{"id":1,"type":"LoadImage","pos":[-775.8829956054688,-619.6614379882812],"size":[315,314],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[81],"slot_index":0},{"name":"MASK","type":"MASK","links":null,"slot_index":1}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["First_00093_.png","image"]},{"id":53,"type":"AIO_Preprocessor","pos":[-280.1860656738281,-547.3314208984375],"size":[315,82],"flags":{},"order":10,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":81}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[79,82],"slot_index":0}],"properties":{"Node name for S&R":"AIO_Preprocessor"},"widgets_values":["DepthAnythingV2Preprocessor",512]},{"id":29,"type":"easy prompt","pos":[-1299.70947265625,-1426.977294921875],"size":[497.5123596191406,269.93310546875],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"prompt","type":"STRING","links":[30],"slot_index":0}],"properties":{"Node name for S&R":"easy prompt"},"widgets_values":["In the vast, star-studded 
expanse of space, a spaceship moves steadily toward a featureless cylinder made of dark metal. The cylinder is 50 kilometer in length and has a diameter of 20 kilometer. The spaceship's trajectory is set, with its nose pointed directly at the center of the massive cylinder. The surface of the cylinder is metallic and reflective, giving it a stark, otherworldly appearance under the dim light of distant stars. There are no visible entrances or openings from the outside, adding to the intrigue of the unknown interior. The spaceship's lights cast a soft glow on the cylinder's surface, highlighting its immense size and intricate details. The scene is filled with a sense of awe and anticipation, as the crew inside the spaceship braces for their historic encounter. ck-70scf","none","none"]},{"id":36,"type":"KSampler","pos":[807.0029296875,-1162.7083740234375],"size":[315,262],"flags":{},"order":17,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":57},{"name":"positive","type":"CONDITIONING","link":46},{"name":"negative","type":"CONDITIONING","link":53},{"name":"latent_image","type":"LATENT","link":38}],"outputs":[{"name":"LATENT","type":"LATENT","links":[39],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[125656837553492,"fixed",20,1,"dpm_adaptive","normal",0.9500000000000001]},{"id":34,"type":"CLIPTextEncode","pos":[-282.2213134765625,-1177.0869140625],"size":[400,200],"flags":{"collapsed":true},"order":12,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":55}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[51],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["adf"],"color":"#322","bgcolor":"#533"}],"links":[[23,20,0,23,2,"CONTROL_NET"],[25,24,0,20,0,"CONTROL_NET"],[30,29,0,31,0,"STRING"],[31,30,0,31,1,"STRING"],[34,31,0,35,1,"STRING"],[38,33,0,36,3,"LATENT"],[39,36,0,37,0,"LATENT"],[41,37,0,38,0,"IMAGE"],[46,39,0,36,1,"CONDITIONING"],[50,35,0,23,0,"CONDITIONING"],[51,34,0,23,1,"CONDITIONING"],[52,23,0,39,0,"CONDITIONING"],[53,23,1,36,2,"CONDITIONING"],[54,40,1,35,0,"CLIP"],[55,40,1,34,0,"CLIP"],[57,40,0,36,0,"MODEL"],[72,50,0,40,0,"MODEL"],[73,51,0,40,1,"CLIP"],[75,49,0,23,4,"VAE"],[76,49,0,37,1,"VAE"],[79,53,0,46,0,"IMAGE"],[81,1,0,53,0,"IMAGE"],[82,53,0,23,3,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7627768444385493,"offset":[1624.7513004476923,1634.5525904381602]},"node_versions":{"comfy-core":"0.3.10","pr-was-node-suite-comfyui-47064894":"1.0.2","ComfyUI-GGUF":"5875c52f59baca3a9372d68c43a3775e21846fe0","comfyui_controlnet_aux":"1.0.5","comfyui-easy-use":"1.2.6"},"ue_links":[]},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)
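
A note on what the traceback shows: cast_bias_weight hands s.weight to comfy.model_management.cast_to, and the first thing cast_to does is read weight.device (model_management.py line 862 in the stack above), so any layer whose weights were never loaded surfaces as exactly this AttributeError. The stack also runs through ComfyUI-GGUF\ops.py, and the GGUF layers appear to leave weight as None until a matching tensor is found in the loaded files, so a text-encoder architecture that does not match the supplied checkpoints leaves whole branches unpopulated. Below is a minimal sketch of that failure mode with a hypothetical guard; only the weight.device comparison is taken from the traceback, the rest is simplified stand-in code, not the actual ComfyUI implementation:

import torch

def cast_to(weight, dtype=None, device=None, non_blocking=False, copy=False):
    # Hypothetical guard (not in ComfyUI): a layer whose weights were never
    # loaded would otherwise crash on the attribute access below.
    if weight is None:
        raise RuntimeError(
            "layer has no weights loaded; check that the text encoder 'type' "
            "(e.g. flux vs sdxl) matches the checkpoint files"
        )
    # This is the comparison shown in the traceback (model_management.py:862).
    if device is None or weight.device == device:
        return weight if (dtype is None or weight.dtype == dtype) else weight.to(dtype)
    # Simplified stand-in for the real cast/copy logic.
    return weight.to(device=device, dtype=dtype, non_blocking=non_blocking, copy=copy)

With a guard like this, the failure would name the mismatch directly instead of surfacing as 'NoneType' object has no attribute 'device'.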

@spyderroque

OK, I found the problem in my case. The "type" field in the DualCLIPLoader had a wrong entry: instead of "flux" it was, for some reason, set to "sdxl".
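
This matches the workflow attached above: node 51 (DualCLIPLoaderGGUF) loads FLUX text encoders (t5-v1_1-xxl-encoder-Q8_0.gguf plus clip_l.safetensors) but its last widget value is "sdxl", so ComfyUI builds an SDXL-style dual CLIP whose clip_g branch never receives any weights. The corrected node fragment, with only that final value changed from "sdxl" to "flux" and everything else exactly as in the attached workflow:

{"id":51,"type":"DualCLIPLoaderGGUF","widgets_values":["t5\\t5-v1_1-xxl-encoder-Q8_0.gguf","clip_l.safetensors","flux"]}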
