DualCLIPLoader

The DualCLIPLoader node enables advanced workflows by loading two CLIP models simultaneously in ComfyUI. This is ideal for model blending, comparative analysis, or feature fusion in next-gen AI art projects and research pipelines, where leveraging the strengths of different CLIP models can unlock new creative and technical possibilities.

Overview

DualCLIPLoader loads, initializes, and exposes two CLIP (Contrastive Language–Image Pretraining) models. It’s most commonly used for tasks that benefit from comparing or integrating multiple text-image embeddings, such as style combination, domain transfer, or multi-feature-guided image synthesis. Its outputs can be routed to subsequent samplers, decoders, prompt-combiners, or CLIP-interactive nodes.

Visual Example

Figure 1 - DualCLIPLoader ComfyUI node

Official Documentation Link

https://comfyui-wiki.com/en/comfyui-nodes/advanced/loaders/dual-clip-loader

Inputs

| Parameter  | Data Type | Input Method                           | Default            |
|------------|-----------|----------------------------------------|--------------------|
| clip_name1 | String    | Dropdown or text input (model name)    | clip_l.safetensors |
| clip_name2 | String    | Dropdown or text input (model name)    | -                  |
| type       | Option    | Dropdown (sdxl, sd3, flux)             | sdxl               |
| device     | String    | Dropdown or text input (cpu/cuda/mps)  | cuda               |

Outputs

| Output Name | Data Type | Description                                              |
|-------------|-----------|----------------------------------------------------------|
| CLIP        | Object    | Reference to both loaded CLIP models for downstream use  |

Usage Instructions

1. Drag DualCLIPLoader from the node browser into your graph/workspace.
2. Select CLIP model weights for `clip_name1` and `clip_name2` (ensure the files are available in the `models/clip/` directory).
3. Choose the model `type` best suited for your workflow (e.g. "sdxl" for Stable Diffusion XL).
4. Set the `device` to match your hardware (e.g. "cuda" for NVIDIA GPUs).
5. Connect the CLIP output to samplers, prompt-processing nodes, or other modules requiring CLIP context (a minimal wiring sketch follows these steps).
6. Run the workflow. The node will load both CLIP models and make them available for the generation process.
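
To make step 5 concrete, here is a minimal sketch of that connection in ComfyUI's API-format graph, written as a Python dict. The node ids are arbitrary placeholders, and `flux_clip_l.safetensors` is simply the example filename used later on this page; `["1", 0]` means "output slot 0 of node 1", i.e. the loader's CLIP output.

# Minimal API-format graph: DualCLIPLoader feeding a CLIPTextEncode node.
workflow = {
    "1": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "flux_clip_l.safetensors",
            "type": "sdxl",
        },
    },
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a watercolor painting of a lighthouse at dawn",
            "clip": ["1", 0],  # route the loader's CLIP output into the encoder
        },
    },
}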

Advanced Usage

  • Mix features from different CLIP architectures (e.g. SDXL + FLUX) for diverse prompt embedding strategies.
  • Use for A/B testing or ensemble pipelines: run multiple heads, then compare sampler outputs using both models (see the sketch after this list).
  • Chain with other advanced loader or prompt-fusion nodes for multi-modal pipelines.
  • Optimize with quantized (`.gguf`) CLIP models (see the GGUF DualCLIP Loader alternatives below).
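
One way to prototype the A/B idea outside the graph UI is to encode the same prompt with two separately loaded CLIP objects and compare the pooled embeddings. The sketch below assumes a ComfyUI Python environment; `clip_a` and `clip_b` are hypothetical handles to two already-loaded CLIP objects, and `tokenize`/`encode_from_tokens` are the methods ComfyUI's CLIP wrapper exposes.

import torch

def pooled_embedding(clip, text):
    # Tokenize and encode a prompt with a ComfyUI CLIP object.
    tokens = clip.tokenize(text)
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
    return pooled

prompt = "a watercolor painting of a lighthouse at dawn"
emb_a = pooled_embedding(clip_a, prompt)  # clip_a/clip_b: hypothetical, loaded elsewhere
emb_b = pooled_embedding(clip_b, prompt)

# Cosine similarity gives a rough sense of how differently the two text
# encoders read the same prompt (only meaningful if the dimensions match).
sim = torch.nn.functional.cosine_similarity(emb_a.flatten(), emb_b.flatten(), dim=0)
print(f"cosine similarity: {sim.item():.4f}")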

Example JSON for API or Workflow Export

{
  "id": "dual_clip_loader_1",
  "type": "DualCLIPLoader",
  "inputs": {
    "clip_name1": "clip_l.safetensors",
    "clip_name2": "flux_clip_l.safetensors",
    "type": "sdxl",
    "device": "cuda"
  }
}
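
If you drive ComfyUI programmatically, a graph in this shape can be queued over the HTTP API. Below is a minimal sketch, assuming a local ComfyUI server on the default port 8188 and an API-format export (note that API-format exports use "class_type" rather than "type" for the node class); everything beyond the loader node is elided here.

import json
import urllib.request

# API-format graph containing just the DualCLIPLoader node; a real export
# would also include text-encode, sampler, and decode nodes.
prompt_graph = {
    "1": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "flux_clip_l.safetensors",
            "type": "sdxl",
        },
    },
}

# Queue the graph on a locally running ComfyUI server (default port 8188).
payload = json.dumps({"prompt": prompt_graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes a prompt_id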

Tips

  • Match CLIP model selection to pipeline/model type (SDXL with SDXL models, etc.) for best compatibility.
  • For memory-limited hardware, consider using quantized CLIP models in .gguf format.
  • Place all desired CLIP model weights in the models/clip/ folder for easy selection.
  • Monitor VRAM usage when using two large CLIP models in parallel (see the snippet after this list).
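
A quick way to watch VRAM from a Python session on the same machine (assuming an NVIDIA GPU and the PyTorch build ComfyUI already depends on):

import torch

if torch.cuda.is_available():
    # Bytes currently held by tensors vs. the peak since process start.
    allocated = torch.cuda.memory_allocated() / 1024**2
    peak = torch.cuda.max_memory_allocated() / 1024**2
    print(f"VRAM allocated: {allocated:.0f} MiB (peak: {peak:.0f} MiB)")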

How It Works (Technical)

DualCLIPLoader locates and loads two distinct CLIP model files based on the input configuration. Each model is allocated on the specified device using the appropriate backend (PyTorch, etc.). Output is typically a tuple or combined handle of the active CLIP models, allowing downstream nodes to use either, both, or an ensemble for prompt embedding or text-to-image alignment tasks.
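
For orientation, the core of such a loader can be sketched with ComfyUI's Python helpers. This is a simplified, version-dependent sketch rather than the exact built-in implementation; `folder_paths.get_full_path` and `comfy.sd.load_clip` are the standard helpers ComfyUI exposes for this purpose.

import comfy.sd
import folder_paths

def load_dual_clip(clip_name1, clip_name2, type):
    # Resolve the filenames against the models/clip/ directory.
    clip_path1 = folder_paths.get_full_path("clip", clip_name1)
    clip_path2 = folder_paths.get_full_path("clip", clip_name2)

    # The "type" option selects which text-encoder architecture to build.
    clip_type = {
        "sdxl": comfy.sd.CLIPType.STABLE_DIFFUSION,
        "sd3": comfy.sd.CLIPType.SD3,
        "flux": comfy.sd.CLIPType.FLUX,
    }[type]

    # Both checkpoints are loaded into one combined CLIP object, which is
    # what downstream nodes receive on the node's CLIP output.
    clip = comfy.sd.load_clip(
        ckpt_paths=[clip_path1, clip_path2],
        embedding_directory=folder_paths.get_folder_paths("embeddings"),
        clip_type=clip_type,
    )
    return (clip,)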

GitHub Alternatives

  • ComfyUI-GGUF (GGUF DualCLIPLoader) – Supports loading two quantized CLIP models in GGUF format for reduced VRAM and improved speed.
  • ComfyUI-Inspire-Pack – Includes "Shared Text Encoder Loader" nodes: compatible with DualCLIPLoader and TripleCLIPLoader for flexible multi-CLIP model loading.
  • ComfyUI-SmartModelLoaders-MXD – Unified model loader node pack for standard and GGUF models, with combined CLIP, text, and UNet loaders.

FAQ

1. Can I use different types of CLIP models (e.g., SDXL and FLUX) together?
Yes, provided their architectures are compatible with the workflow and downstream nodes.

2. What happens if one model fails to load?
The node will typically throw an error; ensure both model files are valid and accessible.

3. Do I need "DualCLIPLoaderGGUF" for .gguf quantized models?
Yes, use the GGUF-specific node or compatible alternative for quantized models.

Common Mistakes and Troubleshooting

  • Model files missing or not placed in the `models/clip/` directory.
  • Mismatched device settings (e.g. "cuda" without a CUDA-enabled GPU); the check below catches this quickly.
  • Mixing incompatible CLIP architectures; always check for workflow compatibility.
  • High VRAM/RAM usage with large models; try quantized alternatives.
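
A small sanity check before launching a workflow, assuming a default ComfyUI layout where text-encoder weights live under models/clip/ (the base path below is a hypothetical install location; adjust it to yours):

import os

import torch

clip_dir = os.path.expanduser("~/ComfyUI/models/clip")  # hypothetical path

# 1. Are the model files where the node will look for them?
for name in ["clip_l.safetensors", "flux_clip_l.safetensors"]:
    path = os.path.join(clip_dir, name)
    print(f"{name}: {'found' if os.path.isfile(path) else 'MISSING'}")

# 2. Does the requested device actually exist?
print("cuda available:", torch.cuda.is_available())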

Conclusion

DualCLIPLoader is a core utility for advanced prompt engineering, model comparison, and multi-modal machine learning. Its strength lies in streamlining workflows that need the combined or comparative power of two CLIP models for robust and creative generative AI.
