Wan 2.2 Image Animate and Character Replace ComfyUI Workflow for Videcool

The Wan 2.2 Image Animate and Character Replace workflow in Videcool provides a powerful and flexible way to generate animated videos and perform character replacement directly from text prompts. Designed for speed, clarity, and creative control, this workflow is served by ComfyUI and uses Kijai's ComfyUI-ready build of the Wan 2.2 AI video generation model.

What can these ComfyUI workflows do?

In short: Image animation and character replacement in video.

This workflow converts static images into animated videos and performs character replacement using diffusion technology. It interprets your input image and motion prompts, and outputs detailed, coherent animated sequences with high fidelity. The base AI model it uses is optimized for various resolutions and can produce videos in flexible aspect ratios with smooth frame interpolation.

Example usage in Videcool

Figure 1 - Wan 2.2 Image Animate ComfyUI workflow in Videcool

Figure 2 - Wan 2.2 Image Character Replace ComfyUI workflow in Videcool

Download the ComfyUI workflows

Download the ComfyUI workflow files: Wan2.2Animate-image-api.json | wan2.2Animate-character-replace-api.json

Image of the ComfyUI workflows

These figures provide a visual overview of the workflow layouts inside ComfyUI. Each node is placed in logical order to establish a clean and efficient generation pipeline. The structure makes it easy to understand how the text encoders, model loader, sampler, and VAE interact. Users can modify or expand parts of the workflows to create custom variations.

Figure 3 - Wan 2.2 Image Animate workflow

Figure 4 - Wan 2.2 Image Character replace workflow

Installation steps

Step 1: Download Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors and save it as /ComfyUI/models/diffusion_models/wan2.2_animate_combined.safetensors
Step 2: Download umt5-xxl-enc-bf16.safetensors into /ComfyUI/models/text_encoders/umt5-xxl-enc-bf16.safetensors
Step 3: Download Wan2_1_VAE_bf16.safetensors into /ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors
Step 4: Download clip_vision_h.safetensors into /ComfyUI/models/clip_vision/clip_vision_h.safetensors
Step 5: Download lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors into /ComfyUI/models/loras/
Step 6: Download the Wan2.2Animate-image-api.json workflow file into your home directory
Step 7: Restart ComfyUI
Step 8: Open the ComfyUI graphical user interface (ComfyUI GUI)
Step 9: Load the Wan2.2Animate-image-api.json in the ComfyUI GUI
Step 10: Open Videcool in your browser, select image animate, and choose Wan2.2 to generate animated videos
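The download steps above can also be scripted. The sketch below is a hypothetical helper (not part of the workflow itself): it maps the Hugging Face URLs from the Prerequisites section to their target paths under the ComfyUI models directory, and fetches them with Python's standard urllib when dry_run is disabled.

```python
import os
from urllib.request import urlretrieve

# Model files from the installation steps, mapped to their target
# locations (relative to ComfyUI/models). URLs are the ones listed
# in the Prerequisites section below.
MODELS = {
    "https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/Wan22Animate/Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors":
        "diffusion_models/wan2.2_animate_combined.safetensors",
    "https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors":
        "text_encoders/umt5-xxl-enc-bf16.safetensors",
    "https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors":
        "vae/Wan2_1_VAE_bf16.safetensors",
    "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors":
        "clip_vision/clip_vision_h.safetensors",
}

def download_models(comfyui_dir, dry_run=True):
    """Build the (url, destination) plan; fetch the files unless dry_run."""
    plan = []
    for url, rel_path in MODELS.items():
        dest = os.path.join(comfyui_dir, "models", rel_path)
        plan.append((url, dest))
        if not dry_run:
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            urlretrieve(url, dest)
    return plan
```

Note that the diffusion model is saved under a different name (wan2.2_animate_combined.safetensors) than the file it is downloaded from, matching Step 1 above.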

Installation video

The workflow requires only an input image and a few basic parameter adjustments to begin generating animated videos. After loading the JSON file, users can set the guidance scale, sampling steps, resolution, and animation parameters. Once the workflow is executed, the sampler processes the latent representation and produces a final decoded video sequence. The result can be saved and reused across other Videcool tools. Check out the following video to see the model in action:

Prerequisites

To run the workflow correctly, download the following model files and place them into your ComfyUI directory. These files ensure the model can interpret text and images, convert inputs into latent embeddings, and decode the final video sequences. Proper installation into the following location is essential before running the workflow: {your ComfyUI directory}/models.

ComfyUI\models\diffusion_models\wan2.2_animate_combined.safetensors
https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/Wan22Animate/Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors

ComfyUI\models\text_encoders\umt5-xxl-enc-bf16.safetensors
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors

ComfyUI\models\vae\Wan2_1_VAE_bf16.safetensors
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors

ComfyUI\models\clip_vision\clip_vision_h.safetensors
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors
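Before launching the workflow, it can be useful to verify that all four files landed in the right place. The snippet below is a small hypothetical check (not shipped with the workflow) that reports which required files are still missing from a ComfyUI installation.

```python
from pathlib import Path

# Paths relative to ComfyUI/models, matching the list above.
REQUIRED = [
    "diffusion_models/wan2.2_animate_combined.safetensors",
    "text_encoders/umt5-xxl-enc-bf16.safetensors",
    "vae/Wan2_1_VAE_bf16.safetensors",
    "clip_vision/clip_vision_h.safetensors",
]

def missing_models(comfyui_dir):
    """Return the required model files not yet present under comfyui_dir/models."""
    models = Path(comfyui_dir) / "models"
    return [p for p in REQUIRED if not (models / p).is_file()]
```

An empty return value means all prerequisites are installed and the workflow should load without missing-model errors.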

How to use these workflows in Videcool

Videcool integrates seamlessly with ComfyUI, allowing users to load workflows directly and generate animated videos without external complexity. After importing the workflow file, simply enter your input image and animation parameters, then click generate. The system handles all backend interactions with ComfyUI. This makes video creation intuitive and accessible, even for users who would rather not learn how ComfyUI works in detail. The following video shows how this model can be used in Videcool:
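Under the hood, this kind of backend interaction amounts to editing the API-format workflow JSON and posting it to ComfyUI's HTTP API. The sketch below illustrates the idea under two assumptions noted in the comments: that the sampler parameters live on KSampler nodes (as in the node list later in this article), and that ComfyUI is reachable at its default port 8188 via its POST /prompt endpoint.

```python
import json
import uuid
from urllib.request import Request, urlopen

def set_sampler_params(workflow, steps=None, cfg=None):
    """Update the steps / cfg inputs of every KSampler node in an
    API-format workflow (a dict mapping node ids to node definitions)."""
    for node in workflow.values():
        if isinstance(node, dict) and node.get("class_type") == "KSampler":
            if steps is not None:
                node["inputs"]["steps"] = steps
            if cfg is not None:
                node["inputs"]["cfg"] = cfg
    return workflow

def build_prompt_payload(workflow, client_id=None):
    """Wrap a workflow dict in the JSON body expected by POST /prompt."""
    return {"prompt": workflow, "client_id": client_id or uuid.uuid4().hex}

def queue_workflow(workflow, host="127.0.0.1", port=8188):
    """Send the workflow to a running ComfyUI instance and return its reply."""
    req = Request(
        f"http://{host}:{port}/prompt",
        data=json.dumps(build_prompt_payload(workflow)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

Videcool performs the equivalent of these steps automatically, so end users only see the generate button.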

Image Animate

Character Replace

ComfyUI nodes used

The Image Animate and Character Replace workflows together use the following nodes. Each node performs a specific role, such as loading models, encoding text and images, sampling, and finally decoding the output. Together they create a reliable and modular pipeline that can be easily extended or customized.

  • Load CLIP
  • Load VAE
  • Load CLIP Vision
  • CLIP Vision Encode
  • Load Image
  • LoraLoaderModelOnly
  • Load Diffusion Model
  • CLIP Text Encode
  • TrimVideoLatent
  • VAE Decode
  • ModelSamplingSD3
  • WanAnimateToVideo
  • KSampler
  • DWPose Estimator
  • Pixel Perfect Resolution
  • PrimitiveInt
  • Upscale Image
  • ImageFromBatch
  • Video Combine
  • Load Video (Upload)
  • CLIPLoader
  • VAELoader
  • CLIPVisionLoader
  • CLIPVisionEncode
  • LoadImage
  • UNETLoader
  • CLIPTextEncode
  • GetVideoComponents
  • VAEDecode
  • Sam2Segmentation
  • DownloadAndLoadSAM2Model
  • GrowMask
  • BlockifyMask
  • DrawMaskOnImage
  • LoadVideo
  • PixelPerfectResolution
  • ImageScale
  • Points Editor

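Since both workflow files use the API format (a JSON object mapping node ids to node definitions), a node inventory like the list above can be generated directly from the file. The helper below is a hypothetical utility for doing so; it counts how many times each node class appears.

```python
from collections import Counter

def node_usage(workflow):
    """Count node classes in an API-format ComfyUI workflow
    (a dict of node_id -> {"class_type": ..., "inputs": ...})."""
    return Counter(
        node["class_type"]
        for node in workflow.values()
        if isinstance(node, dict) and "class_type" in node
    )

# Typical usage against a downloaded workflow file:
# import json
# with open("Wan2.2Animate-image-api.json") as f:
#     print(node_usage(json.load(f)))
```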
Base AI model

This workflow is built on Kijai's ComfyUI-ready build of Wan 2.2, a modern and highly capable diffusion-based video generation engine. Wan 2.2 provides clarity, coherence, and creative flexibility, making it suitable for both artistic and commercial use cases. The model benefits from advanced training techniques and offers consistent results across a variety of motion and animation styles. More details, model weights, and documentation can be found at the following links:

Hugging Face repository:

https://huggingface.co/Kijai/WanVideo_comfy

Developer: Kijai

https://huggingface.co/Kijai

ComfyUI Custom Nodes Required

comfyui_controlnet_aux
ComfyUI-VideoHelperSuite
ComfyUI-KJNodes
ComfyUI-segment-anything-2

Video resolution

AI video generation models perform best when they generate videos at the native resolution used during training. The recommended resolutions for this model are:

Native video size: 480x480 px or 768x768 px
The model supports other resolutions as well; the best results come from dimensions that are multiples of 32 px.
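For non-square inputs, a common approach is to scale the image so its shorter side lands near one of the native sizes and then round both dimensions to the nearest multiple of 32. The helper below is a hypothetical sketch of that rule, not part of the workflow itself.

```python
def snap_resolution(width, height, multiple=32, target=480):
    """Scale (width, height) so the shorter side is close to `target`,
    then round both sides to the nearest multiple of `multiple`."""
    scale = target / min(width, height)

    def snap(value):
        # Round to the nearest multiple, never dropping below one multiple.
        return max(multiple, round(value * scale / multiple) * multiple)

    return snap(width), snap(height)
```

For example, a 1920x1080 source snapped to the 480 px native size yields 864x480, which preserves the aspect ratio closely while keeping both sides divisible by 32.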

Conclusion

The Wan 2.2 Image Animate and Character Replace workflow is a robust, powerful, and user-friendly solution for generating AI-driven animated videos in Videcool. With its combination of high-quality models, a modular ComfyUI pipeline, and seamless platform integration, it enables beginners and professionals alike to produce creative and commercial-grade video content with ease. By understanding the workflow components and advantages, users can unlock the full potential of AI-assisted video generation in Videcool.

More information