Apply Fooocus Inpaint
The Apply Fooocus Inpaint node applies the Fooocus inpainting technique to masked areas of images in ComfyUI workflows. It fills, restores, or extends masked regions with seamless, detail-preserving results, making high-quality inpainting accessible and efficient for AI creatives.
Overview
Apply Fooocus Inpaint takes a loaded Fooocus inpaint model, an image, and a mask (defining the inpaint area), and produces a new image where masked regions have been naturally filled, extended, or corrected. This node is central to workflows that require object removal, background restoration, or outpainting using the Fooocus model. It integrates with upstream nodes like Load Fooocus Inpaint and downstream compositors, upscalers, or save nodes, providing a robust and flexible inpainting solution.
Visual Example
Official Documentation Link
https://www.runcomfy.com/comfyui-nodes/comfyui-inpaint-nodes/INPAINT_ApplyFooocusInpaint
Inputs
| Parameter | Data Type | Input Method | Default |
|---|---|---|---|
| fooocus_inpaint_model | Object (ModelPatcher) | Connection from Load Fooocus Inpaint node | — |
| image | IMAGE | Node input (e.g., Load Image, VAE Decode) | — |
| mask | MASK | Mask node (binary or grayscale, matching image size) | — |
Outputs
| Output Name | Data Type | Description |
|---|---|---|
| inpainted_image | IMAGE | Image with masked regions filled or extended using the Fooocus model |
Usage Instructions
First, use the Load Fooocus Inpaint node to initialize the Fooocus inpainting model from your models directory. Prepare an input image and a binary/grayscale mask that defines the area to be inpainted. Connect the model, image, and mask to the corresponding inputs of Apply Fooocus Inpaint. When executed, the node fills the masked region with content generated by Fooocus, resulting in a seamless, photo-realistic restoration or expansion. Chain further to upscalers or save nodes as needed.
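As an illustration of the mask requirements described above, the following is a minimal Pillow sketch for preparing an image/mask pair before loading them into the workflow; the file names and threshold are placeholder assumptions, not part of the node itself.

```python
from PIL import Image

# Load the source image and a rough mask (file names are placeholders).
image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # single-channel grayscale

# The mask should match the image dimensions exactly to avoid distorted results.
if mask.size != image.size:
    mask = mask.resize(image.size, Image.NEAREST)

# Optionally force a clean binary mask (white = area to inpaint, black = keep).
mask = mask.point(lambda p: 255 if p >= 128 else 0)

image.save("input_prepared.png")
mask.save("mask_prepared.png")
```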
Advanced Usage
Advanced users can batch process multiple image/mask pairs for bulk restoration, chain the node with automated mask generators or Region Mask Expansion for more natural transitions, and use scripting or variable inputs to automate prompt-driven outpainting or content transformation. Integrate it with hybrid compositing to blend more than one inpainting result, or use it as a precursor to differential diffusion and compositing nodes for layered creative edits. Tuning mask softness and inpaint boundaries yields more natural fills in complex compositions, as sketched below.
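The sketch below shows one hedged way to dilate and feather a binary mask with Pillow before it reaches the node; the filter sizes are arbitrary assumptions to tune per image, and an in-graph mask-expansion node can achieve the same effect.

```python
from PIL import Image, ImageFilter

# Load a binary mask (white = area to inpaint); the file name is a placeholder.
mask = Image.open("mask_prepared.png").convert("L")

# Dilate the masked region slightly so the fill overlaps the original edges.
expanded = mask.filter(ImageFilter.MaxFilter(size=9))  # kernel size must be odd

# Feather the boundary so the inpainted content blends into its surroundings.
feathered = expanded.filter(ImageFilter.GaussianBlur(radius=4))

feathered.save("mask_soft.png")
```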
Example JSON for API or Workflow Export
{
  "id": "apply_fooocus_inpaint_1",
  "type": "INPAINT_ApplyFooocusInpaint",
  "inputs": {
    "fooocus_inpaint_model": "@load_fooocus_inpaint_1",
    "image": "@input_image_1",
    "mask": "@mask_1"
  }
}
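For programmatic use, a complete workflow containing this node can be exported with ComfyUI's Save (API Format) option and queued over HTTP. The sketch below assumes a default local instance on port 8188 and a placeholder file name; note that the exported API format keys nodes by id with class_type and inputs, which differs slightly from the illustrative fragment above.

```python
import json
import urllib.request

# Load a workflow exported via "Save (API Format)"; the file name is a placeholder.
with open("fooocus_inpaint_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Queue the workflow on a locally running ComfyUI instance (default port 8188).
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the response includes a prompt_id on success
```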
Tips
- Use accurate, well-aligned masks for best inpainting results—automated mask expansion can help smooth transitions.
- Restart ComfyUI after adding new Fooocus models for the node to detect them correctly.
- Combine with upscaling or blur nodes for enhanced realism on high-res inpaints.
- Test with different mask softness and edge feathering for less visible seams.
- If images become distorted, check that the model and mask dimensions match the input image size precisely.
How It Works (Technical)
This node takes the Fooocus model patch (as provided by Load Fooocus Inpaint), applies it to a loaded image and mask, and uses advanced deep learning techniques to synthesize plausible content for the masked region. The result is composited over the original image, with seamless color, structure, and texture blending, leveraging Fooocus's fine-tuned weights and dynamic CFG strategies for enhanced quality and natural transitions.
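Conceptually, that final compositing step amounts to a masked blend of the generated content over the original image. The following sketch is an illustration of the blend only, not the node's actual implementation, and assumes ComfyUI's usual IMAGE/MASK conventions (float tensors in [0, 1], shaped [B, H, W, C] and [B, H, W]).

```python
import torch

def composite_inpaint(original: torch.Tensor,
                      generated: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """Blend generated content into the original image inside the masked area.

    original, generated: [B, H, W, C] float tensors in [0, 1]
    mask:                [B, H, W] float tensor, 1 = inpaint, 0 = keep
    """
    m = mask.unsqueeze(-1)  # [B, H, W, 1] so it broadcasts over the channel axis
    return original * (1.0 - m) + generated * m

# Toy example: fill the right half of a 64x64 image with generated content.
orig = torch.zeros(1, 64, 64, 3)
gen = torch.ones(1, 64, 64, 3)
mask = torch.zeros(1, 64, 64)
mask[:, :, 32:] = 1.0
out = composite_inpaint(orig, gen, mask)
print(out.shape)  # torch.Size([1, 64, 64, 3]); left half kept, right half filled
```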
GitHub Alternatives
- comfyui-inpaint-nodes – Complete Fooocus, LaMa, and MAT inpaint integration, compositing, conditioning, mask expansion, automated prefill, and detailer nodes for inpainting workflows.
- ComfyUI-Fooocus-Inpaint-Wrapper – Lightweight wrapper around the Fooocus inpaint code, future-proof and easy to swap in when Fooocus is updated.
- comfyui-masquerade – Full-res, multi-method mask toolkit for ComfyUI, supports custom mask inpainting and occlusion fill for flexible advanced pipelines.
Videcool workflows
The Apply Fooocus Inpaint node is used in the following Videcool workflows:
FAQ
1. What models are supported by Apply Fooocus Inpaint?
It supports Fooocus inpaint models prepared via the Load Fooocus Inpaint node (typically for SDXL).
2. Does the mask need to be binary?
Binary or grayscale masks can be used, but best results often come from discrete, well-aligned binary masks.
3. Can this be used for outpainting as well as normal inpainting?
Yes, as long as you prepare a mask that defines the outpainting region—Fooocus can intelligently extend image content beyond the borders.
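As a hedged sketch of outpainting preparation outside the graph, the snippet below pads an image with a neutral border and builds a mask that covers only the new border region; the border size and file names are placeholder assumptions, and ComfyUI's own outpainting padding node can do the same inside the workflow.

```python
from PIL import Image, ImageOps

BORDER = 128  # pixels to extend on every side (arbitrary placeholder value)

# Pad the source image with neutral gray so the model has a region to extend into.
image = Image.open("input.png").convert("RGB")
padded = ImageOps.expand(image, border=BORDER, fill=(127, 127, 127))

# Build a mask that is white (inpaint) over the new border and black over the original.
mask = Image.new("L", padded.size, 255)
mask.paste(0, (BORDER, BORDER, BORDER + image.width, BORDER + image.height))

padded.save("outpaint_image.png")
mask.save("outpaint_mask.png")
```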
Common Mistakes and Troubleshooting
The most common mistakes are mask-image dimension mismatches, uninitialized models, and unsupported checkpoint or Fooocus files (ensure models are placed in the models/inpaint directory). If the node produces errors, verify that the model and patch files loaded correctly and that the base model is not a distilled or merged variant (use regular SDXL checkpoints with Fooocus Inpaint). When seams are visible, try feathering the mask edges or using a pre-processing blur node to soften transitions.
Conclusion
Apply Fooocus Inpaint is a cornerstone node for state-of-the-art, flexible, and robust image inpainting in ComfyUI, unlocking new creative and repair possibilities using the powerful Fooocus technique within a fully modular workflow.