stablediffusion bar (Baidu Tieba) · Followers: 33,001 · Posts: 117,072
  • 52 replies, page 1 of 1

Building the [strongest thread] for all kinds of problems encountered while using ComfyUI


For any confusion or problems you run into while using ComfyUI, post detailed clues and context here so the issue can be analyzed, identified, and solved.


IP location: Beijing · via Android client · Floor 1 · 2025-04-16 01:56
To install Nunchaku I updated Python to 3.11 and PyTorch to 2.8.0. Now not only does Nunchaku not work, a lot of my older plugins are broken too, such as XLabs (x-flux-comfyui). The error is below; can you take a look?


IP location: Shaanxi · Floor 2 · 2025-04-16 13:31
      Error message occurred while importing the 'x-flux-comfyui' module.
      Traceback (most recent call last):
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1778, in _get_module
      return importlib.import_module("." + module_name, self.__name__)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "E:\ComfyUI-aki-v1.4\python\Lib\importlib\__init__.py", line 126, in import_module
      return _bootstrap._gcd_import(name[level:], package, level)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\clip\modeling_clip.py", line 45, in <module>
      from ...modeling_flash_attention_utils import _flash_attention_forward
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\modeling_flash_attention_utils.py", line 27, in <module>
      from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\flash_attn-2.7.4.post1-py3.11-win-amd64.egg\flash_attn\__init__.py", line 3, in <module>
      from flash_attn.flash_attn_interface import (
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\flash_attn-2.7.4.post1-py3.11-win-amd64.egg\flash_attn\flash_attn_interface.py", line 15, in <module>
      import flash_attn_2_cuda as flash_attn_gpu
      ImportError: DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
      The above exception was the direct cause of the following exception:
      Traceback (most recent call last):
      File "E:\ComfyUI-aki-v1.4\nodes.py", line 2153, in load_custom_node
      module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\__init__.py", line 1, in <module>
      from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
      File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\nodes.py", line 17, in <module>
      from .xflux.src.flux.util import (configs, load_ae, load_clip,
      File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\xflux\src\flux\util.py", line 16, in <module>
      from .modules.conditioner import HFEmbedder
      File "E:\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\xflux\src\flux\modules\conditioner.py", line 2, in <module>
      from transformers import (CLIPTextModel, CLIPTokenizer, T5EncoderModel,
      File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1767, in __getattr__
      value = getattr(module, name)
      ^^^^^^^^^^^^^^^^^^^^^
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1766, in __getattr__
      module = self._get_module(self._class_to_module[name])
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\utils\import_utils.py", line 1780, in _get_module
      raise RuntimeError(
      RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
      DLL load failed while importing flash_attn_2_cuda: The specified procedure could not be found.
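A "DLL load failed … The specified procedure could not be found" error on `flash_attn_2_cuda` usually means the flash-attn wheel was compiled against a different PyTorch/CUDA ABI than the one now installed (here, after the jump to Python 3.11 / PyTorch 2.8.0). A minimal stdlib sketch to check which of the involved packages are even discoverable, without re-triggering the failing DLL import:

```python
import importlib.util

def can_find(mod: str) -> bool:
    """Check whether a module is discoverable without actually importing it
    (a plain import would re-trigger the DLL load failure)."""
    return importlib.util.find_spec(mod) is not None

for mod in ("torch", "flash_attn", "flash_attn_2_cuda"):
    print(mod, "found" if can_find(mod) else "missing")
```

If `flash_attn` is found but its import still fails, the usual fix is reinstalling a flash-attn build that matches the installed torch and CUDA versions, or downgrading torch to match the wheel.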


IP location: Shaanxi · Floor 3 · 2025-04-16 13:31
I had nunchaku installed successfully and it was generating images fine. After I ran some other workflows, the nunchaku text-to-image workflow's nodes turned red and stopped working. I've uninstalled it several times and now the node won't install at all. What's going on? OP, please take a look.


IP location: Liaoning · Floor 4 · 2025-04-17 08:56
Bro, I downgraded to PyTorch 2.6 like you said, and got this:


IP location: Shaanxi · Floor 5 · 2025-04-17 10:04
File "E:\ComfyUI-aki-v1.4\custom_nodes\comfyui_slk_joy_caption_two\joy_caption_two_node.py", line 407, in generate
  generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask,
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
  return func(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\generation\utils.py", line 2215, in generate
  result = self._sample(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\generation\utils.py", line 3206, in _sample
  outputs = self(**model_inputs, return_dict=True)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward
  outputs = self.model(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward
  layer_outputs = decoder_layer(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward
  hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward
  query_states = self.q_proj(hidden_states)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward
  result = self.base_layer(x, *args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1740, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _call_impl
  return forward_call(*args, **kwargs)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward
  return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit
  return MatMul4Bit.apply(A, B, out, bias, quant_state)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\torch\autograd\function.py", line 575, in apply
  return super().apply(*args, **kwargs)  # type: ignore[misc]
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward
  output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\functional.py", line 1352, in dequantize_4bit
  absmax = dequantize_blockwise(quant_state.absmax, quant_state.state2)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\functional.py", line 1043, in dequantize_blockwise
  lib.cdequantize_blockwise_fp32(*args)
File "E:\ComfyUI-aki-v1.4\python\Lib\site-packages\bitsandbytes\cextension.py", line 46, in __getattr__
  return getattr(self._lib, item)
File "E:\ComfyUI-aki-v1.4\python\Lib\ctypes\__init__.py", line 389, in __getattr__
  func = self.__getitem__(name)
File "E:\ComfyUI-aki-v1.4\python\Lib\ctypes\__init__.py", line 394, in __getitem__
  func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'cdequantize_blockwise_fp32' not found
2025-04-17T10:03:29.386108 - Prompt executed in 0.46 seconds

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.

## Additional Context
(Please add any additional context or steps to reproduce the error here)
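The `AttributeError: function 'cdequantize_blockwise_fp32' not found` means the native bitsandbytes library that got loaded does not export the symbol the Python side expects, typically a bitsandbytes build that no longer matches the torch/CUDA stack after the up/downgrade. What bitsandbytes does when it fails is essentially a ctypes symbol lookup; a hedged sketch (the helper name is ours, not a bitsandbytes API) of probing a shared library for a symbol:

```python
import ctypes
import ctypes.util

def has_symbol(lib_path: str, symbol: str) -> bool:
    """Return True if the shared library at lib_path exports `symbol`.
    Mirrors the getattr() lookup that raises inside bitsandbytes."""
    try:
        lib = ctypes.CDLL(lib_path)
        getattr(lib, symbol)  # raises AttributeError if the export is absent
        return True
    except (OSError, AttributeError):
        return False
```

Pointing this at the libbitsandbytes binary under site-packages shows whether the loaded library is the mismatched one; reinstalling a bitsandbytes release built for the installed CUDA/torch combination is the usual fix.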


IP location: Shaanxi · Floor 6 · 2025-04-17 10:04
# ComfyUI Error Report

## Error Details
- **Node ID:** 7
- **Node Type:** CLIPTextEncode
- **Exception Type:** RuntimeError
- **Exception Message:** ERROR: clip input is invalid: None. If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.

## Stack Trace
```
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 345, in execute
  output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 220, in get_output_data
  return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 192, in _map_node_over_list
  process_inputs(input_dict, i)
File "E:\1\ComfyUI-aki-v1.4\execution.py", line 181, in process_inputs
  results.append(getattr(obj, func)(**inputs))
File "E:\1\ComfyUI-aki-v1.4\nodes.py", line 67, in encode
  raise RuntimeError("ERROR: clip input is invalid: None\n\nIf the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.")
```

## System Information
- **ComfyUI Version:** 0.3.29
- **Arguments:** E:\1\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu121

## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 17170825216
- **VRAM Free:** 15821963264
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0

## Logs
```
2025-04-19T00:17:17.802883 - [START] Security scan
2025-04-19T00:17:21.072363 - [DONE] Security scan
2025-04-19T00:17:21.217797 - ## ComfyUI-Manager: installing dependencies done.
2025-04-19T00:17:21.217797 - ** ComfyUI startup time: 2025-04-19 00:17:21.217797
2025-04-19T00:17:21.217797 - ** Platform: Windows
2025-04-19T00:17:21.217797 - ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-04-19T00:17:21.217797 - ** Python executable: E:\1\ComfyUI-aki-v1.4\python\python.exe
2025-04-19T00:17:21.217797 - ** ComfyUI Path: E:\1\ComfyUI-aki-v1.4
2025-04-19T00:17:21.217797 - ** Log path: E:\1\ComfyUI-aki-v1.4\comfyui.log
2025-04-19T00:17:21.220791 - Prestartup times for custom nodes:
  0.0 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
  0.0 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
  3.4 seconds: E:\1\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
2025-04-19T00:17:22.494286 - Warning, you are using an old pytorch version and some ckpt/pt files might be loaded unsafely. Upgrading to 2.4 or above is recommended.
2025-04-19T00:17:22.656335 - Total VRAM 16375 MB, total RAM 65362 MB
2025-04-19T00:17:22.657330 - pytorch version: 2.3.1+cu121
2025-04-19T00:17:23.718809 - xformers version: 0.0.27
2025-04-19T00:17:23.718809 - Set vram state to: NORMAL_VRAM
2025-04-19T00:17:23.718809 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
2025-04-19T00:17:25.396817 - Using xformers attention
2025-04-19T00:17:26.093722 - Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-04-19T00:17:26.093722 - ComfyUI version: 0.3.29
2025-04-19T00:17:26.177442 - ComfyUI frontend version: 1.16.8
2025-04-19T00:17:26.179432 - [Prompt Server] web root: E:\1\ComfyUI-aki-v1.4\python\lib\site-packages\comfyui_frontend_package\static
2025-04-19T00:17:26.725086 - [AnimateDiffEvo] - ERROR - No motion models found. Please download one and place in:
```


IP location: Anhui · Floor 7 · 2025-04-19 00:23
                Traceback (most recent call last):
                File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 345, in execute
                output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 220, in get_output_data
                return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 192, in _map_node_over_list
                process_inputs(input_dict, i)
                File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 181, in process_inputs
                results.append(getattr(obj, func)(**inputs))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "H:\confyUI\ComfyUI-aki-v1.6\ComfyUI\custom_nodes\comfyui-mimicmotionwrapper\nodes.py", line 400, in process
                snapshot_download(repo_id="hr16/yolox-onnx",
                File "H:\confyUI\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
                return fn(*args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^
                File "H:\confyUI\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 235, in snapshot_download
                raise LocalEntryNotFoundError(
                huggingface_hub.errors.LocalEntryNotFoundError: An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.
                Prompt executed in 21.05 seconds
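The `LocalEntryNotFoundError` above means the node tried to fetch `hr16/yolox-onnx` from the Hugging Face Hub and neither the network nor the local cache had it. One workaround is prefetching the snapshot while a working connection (or mirror) is available; a sketch under the assumption that the community mirror hf-mirror.com is reachable from your network (`HF_ENDPOINT` is huggingface_hub's endpoint override; drop that line if you can reach huggingface.co directly):

```python
import os

# Assumption: hf-mirror.com is reachable; remove if you have direct access.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

def prefetch(repo_id: str = "hr16/yolox-onnx", target: str = "models/prefetch"):
    """Download a Hub repo ahead of time so the node later finds it offline."""
    from huggingface_hub import snapshot_download  # imported lazily
    return snapshot_download(repo_id=repo_id, local_dir=target)
```

Run `prefetch()` once from the same Python environment ComfyUI uses, then point the node at the downloaded folder (or let it find the populated cache).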


IP location: Shanxi · Floor 8 · 2025-04-22 23:04
After installing the nunchaku plugin and the matching wheel dependency, I get a warning that an outdated version of some package was installed and is incompatible with Impact Pack. Does this have much impact? How should I fix it?


IP location: Jiangxi · Floor 9 · 2025-04-23 02:01
It throws this as soon as I run it:


IP location: Beijing · Floor 10 · 2025-04-23 17:00


IP location: Beijing · Floor 11 · 2025-04-23 17:00
"An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on." I get this error when using a ControlNet model. How can I fix it?


IP location: Jiangsu · via Android client · Floor 12 · 2025-05-03 06:23
1. Prompt length.
Token indices sequence length is longer than the specified maximum sequence length for this model (328 > 77). Running this sequence through the model will result in indexing errors.
In my tests the part of the prompt beyond 77 tokens still shows up in the generated image, so can this warning simply be ignored, or is it better to switch the text encoder to the A1111-style mode? And does Flux even support the A1111 mode?
2. ControlNet model download and loading.
Failed to find S:\StableDiffusion\ComfyUI-aki-v1.6\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\lllyasviel/Annotators\body_pose_model.pth.
Downloading from huggingface.co
cacher folder is C:\Users\AppData\Local\Temp, you can change it by custom_tmp_path in config.yaml
S:\StableDiffusion\ComfyUI-aki-v1.6\python\Lib\site-packages\huggingface_hub\file_download.py:832: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as `local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
Even after translating this I didn't really understand it; I only get that a model is missing, so it tries to download it online. I moved from WebUI to ComfyUI and reuse WebUI's model paths, but the ControlNet preprocessor models apparently can't be read from the WebUI folders, so every time I use ControlNet it tries to download them. I don't use a VPN, so the download always fails; yet after failing it still manages to produce the skeleton image, and I have no idea where it gets the model from. Now every ControlNet run waits a long time for the download to fail; generation afterwards works fine. How do I solve this?


IP location: Zhejiang · Floor 13 · 2025-05-03 12:32