Flux Kontext Image Edit ComfyUI workflow for Videcool
The Flux Kontext Image Edit workflow in Videcool provides a powerful, flexible way to edit images while preserving character identity and style consistency. Designed for speed, clarity, and fine-grained control, the workflow runs on ComfyUI and uses Black Forest Labs' FLUX.1 Kontext image editing model, repackaged for ComfyUI.
What can this ComfyUI workflow do?
In short: Image editing with context-preserving diffusion.
This workflow takes an input image and an editing prompt, then uses diffusion technology to modify the image while keeping the main subject, pose, or identity consistent. It interprets your prompt and reference image together, and outputs detailed, coherent edits such as outfit changes, background modifications, or style adjustments. The underlying Flux 1 Kontext model is optimized for consistent character generation and image-to-image transformations rather than pure text-to-image synthesis.
Example usage in Videcool
Download the ComfyUI workflow
Download ComfyUI Workflow file: i2i_flux_1_kontext_dev_api.json
Image of the ComfyUI workflow
This figure provides a visual overview of the workflow layout inside ComfyUI. Each node is placed in logical order to establish a clean and efficient editing pipeline, starting from loading the image and model, through text and image conditioning, scaling, sampling, and final decoding. The structure makes it easy to understand how the diffusion model, CLIP encoders, context conditioning nodes, sampler, and VAE interact. Users can modify or expand parts of the workflow to create custom editing variations or batch processing setups.
Installation steps
Step 1: Download flux1-dev-kontext_fp8_scaled.safetensors into /ComfyUI/models/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors.
Step 2: Rename flux1-dev-kontext_fp8_scaled.safetensors to flux1-kontext-dev.safetensors.
Step 3: Download clip_l.safetensors into /ComfyUI/models/text_encoders/clip_l.safetensors.
Step 4: Download t5xxl_fp8_e4m3fn_scaled.safetensors into /ComfyUI/models/text_encoders/t5xxl_fp8_e4m3fn_scaled.safetensors.
Step 5: Download ae.safetensors into /ComfyUI/models/vae/ae.safetensors.
Step 6: Download the i2i_flux_1_kontext_dev_api.json workflow file into your home directory.
Step 7: Restart ComfyUI so the new model and encoder files are detected.
Step 8: Open the ComfyUI graphical user interface (ComfyUI GUI).
Step 9: Load the i2i_flux_1_kontext_dev_api.json workflow in the ComfyUI GUI.
Step 10: In the Load Image node, select the image you want to edit, then enter an edit prompt in the CLIP Text Encode node.
Step 11: Click Run to generate an edited version of the image that preserves the main character or layout while applying the requested changes.
Step 12: Open Videcool in your browser, select the Flux Kontext image edit tool, and use the generated outputs directly inside your video or image projects.
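Once the steps above are complete, a short Python sketch can sanity-check the file placement. The ComfyUI root below is a placeholder for your actual install path, and the first filename reflects the rename in Step 2:

```python
from pathlib import Path

# Placeholder: adjust to your actual ComfyUI install location.
COMFYUI_ROOT = Path("ComfyUI")

# Relative locations expected by the workflow (from the steps above).
REQUIRED_FILES = [
    "models/diffusion_models/flux1-kontext-dev.safetensors",  # renamed in Step 2
    "models/text_encoders/clip_l.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn_scaled.safetensors",
    "models/vae/ae.safetensors",
]

def missing_files(root: Path) -> list[str]:
    """Return the required model files that are not present under root."""
    return [rel for rel in REQUIRED_FILES if not (root / rel).exists()]

if __name__ == "__main__":
    missing = missing_files(COMFYUI_ROOT)
    if missing:
        print("Missing files:")
        for rel in missing:
            print("  " + rel)
    else:
        print("All required model files found.")
```

If anything is listed as missing, revisit the corresponding download step before loading the workflow.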
The workflow requires a source image and an edit prompt plus a few basic parameter adjustments to begin editing. After loading the JSON file, users can select the input image, guidance scale, sampling steps, image scaling options, and prompt text that describes the desired changes. Once executed, the sampler processes both the image and textual context in latent space and produces a final decoded image that can be saved and reused across other Videcool tools.
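For users driving ComfyUI programmatically, the API-format workflow JSON (such as i2i_flux_1_kontext_dev_api.json) maps node ids to objects with a "class_type" and an "inputs" dict, so the parameters described above can also be patched in code. The sketch below locates nodes by class type rather than hard-coding ids; the class names match the node list later on this page:

```python
import json

def patch_workflow(workflow: dict, *, image: str, prompt: str,
                   steps: int, guidance: float) -> dict:
    """Patch common parameters in an API-format ComfyUI workflow.

    API-format JSON maps node ids to {"class_type": ..., "inputs": {...}};
    we find nodes by class_type so the patch survives id changes.
    """
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in patched.values():
        class_type = node.get("class_type")
        if class_type == "LoadImage":
            node["inputs"]["image"] = image
        elif class_type == "CLIPTextEncode":
            node["inputs"]["text"] = prompt
        elif class_type == "KSampler":
            node["inputs"]["steps"] = steps
        elif class_type == "FluxGuidance":
            node["inputs"]["guidance"] = guidance
    return patched
```

After patching, the dict can be submitted to a running ComfyUI instance by POSTing {"prompt": patched} to its /prompt HTTP endpoint.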
Prerequisites
To run the workflow correctly, download the following model files and place them into your ComfyUI directory. These files ensure the model can interpret language, understand visual context, and decode the final images. Proper installation into the following location is essential before running the workflow: {your ComfyUI directory}/models.
ComfyUI\models\diffusion_models\flux1-dev-kontext_fp8_scaled.safetensors
https://huggingface.co/Comfy-Org/flux1-kontext-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors
ComfyUI\models\text_encoders\clip_l.safetensors
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
ComfyUI\models\text_encoders\t5xxl_fp8_e4m3fn_scaled.safetensors
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn_scaled.safetensors
ComfyUI\models\vae\ae.safetensors
https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/split_files/vae/ae.safetensors
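The downloads above can also be scripted. This is a minimal sketch using only the Python standard library; run it from the parent of your ComfyUI directory or adjust the root path. The files are several gigabytes, so expect long download times:

```python
from pathlib import Path
from urllib.request import urlretrieve

# Relative destination -> download URL (from the list above).
MODEL_FILES = {
    "models/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors":
        "https://huggingface.co/Comfy-Org/flux1-kontext-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors",
    "models/text_encoders/clip_l.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn_scaled.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn_scaled.safetensors",
    "models/vae/ae.safetensors":
        "https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/split_files/vae/ae.safetensors",
}

def download_models(root: Path) -> None:
    """Download any missing model files into the ComfyUI directory tree."""
    for rel, url in MODEL_FILES.items():
        dest = root / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        if not dest.exists():
            print(f"Downloading {rel} ...")
            urlretrieve(url, dest)  # multi-gigabyte files; this takes a while

if __name__ == "__main__":
    download_models(Path("ComfyUI"))  # placeholder: your ComfyUI directory
```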
How to use this workflow in Videcool
Videcool integrates seamlessly with ComfyUI, allowing users to perform advanced Flux Kontext edits without dealing directly with the node graph. After importing the workflow file into ComfyUI and generating edited images, Videcool can load these results as assets for scenes, slides, or sequences. This makes context-preserving character and object editing intuitive and accessible, even for users unfamiliar with ComfyUI's node graph, while still benefiting from its high-end diffusion capabilities.
ComfyUI nodes used
This workflow uses the following nodes. Each node performs a specific role, such as loading models, encoding text and vision features, stitching and scaling reference images, conditioning the diffusion process, sampling, and finally decoding and saving the edited output. Together they create a reliable and modular pipeline that can be easily extended or customized for different image editing tasks.
- Load Diffusion Model
- DualCLIPLoader
- Load VAE
- Load Image
- CLIP Text Encode
- Image Stitch
- ConditioningZeroOut
- FluxKontextImageScale
- VAE Encode
- ReferenceLatent
- FluxGuidance
- KSampler
- VAE Decode
- Save Image
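As a rough illustration of how some of these nodes fit together in API-format JSON: CLIP Text Encode produces conditioning, VAE Encode turns the stitched and scaled reference image into a latent, ReferenceLatent attaches that latent to the conditioning, and FluxGuidance sets the guidance strength. The node ids and the upstream references ("10", "11", "12") in this fragment are placeholders, not the ids used in the shipped workflow file:

```python
# Illustrative API-format fragment; connections are expressed as
# [source_node_id, output_index] pairs. Ids "10"-"12" stand in for the
# CLIP loader, image loader/scaler, and VAE loader upstream of this fragment.
fragment = {
    "1": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "change the jacket to red", "clip": ["10", 0]}},
    "2": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["11", 0], "vae": ["12", 0]}},
    "3": {"class_type": "ReferenceLatent",
          "inputs": {"conditioning": ["1", 0], "latent": ["2", 0]}},
    "4": {"class_type": "FluxGuidance",
          "inputs": {"conditioning": ["3", 0], "guidance": 2.5}},
}
```

The KSampler then consumes the guided conditioning and the encoded latent, and VAE Decode turns the sampled latent back into pixels for Save Image.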
Base AI model
This workflow is built on Black Forest Labs’ FLUX.1-Kontext-dev model, a modern and highly capable diffusion-based image editing model. Rather than generating images from text alone, it takes an input image together with editing instructions and produces coherent, context-preserving edits, making it suitable for both artistic and commercial use cases. The model offers consistent results across a variety of styles. More details, model weights, and documentation can be found at the following links:
Hugging Face repository:
https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev
Official GitHub repository:
https://github.com/black-forest-labs/flux
Developer: Black Forest Labs
https://bfl.ai
API documentation:
https://docs.bfl.ai/quick_start/introduction
Image resolution
Flux Kontext image editing works best when source images are resized to resolutions that are divisible by 32, which keeps the latent grid aligned with the model’s internal architecture. For most portrait and character edits, resolutions around 1024 pixels on the long side provide a good balance between detail and performance, but other aspect ratios are also supported.
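This sizing rule can be sketched as a small helper that scales the long side toward 1024 pixels and rounds both dimensions to the nearest multiple of 32. Note this is an illustrative approximation, not necessarily the exact behavior of the FluxKontextImageScale node:

```python
def snap_to_multiple_of_32(width: int, height: int, long_side: int = 1024) -> tuple[int, int]:
    """Scale (width, height) so the long side is ~long_side pixels and
    both dimensions are rounded to the nearest multiple of 32."""
    scale = long_side / max(width, height)

    def snap(value: float) -> int:
        # Round to the nearest multiple of 32, never below 32.
        return max(32, round(value * scale / 32) * 32)

    return snap(width), snap(height)
```

For example, a 1920x1080 source maps to 1024x576, keeping the latent grid aligned with the model's internal architecture while roughly preserving the aspect ratio.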
Conclusion
The Flux Kontext Image Edit workflow is a robust, powerful, and user-friendly solution for performing context-aware image editing in Videcool. With its combination of high-quality Flux-based models, a modular ComfyUI pipeline, and seamless platform integration, it enables beginners and professionals alike to produce consistent character edits and sophisticated image variations with ease. By understanding the workflow components and advantages, users can unlock the full potential of AI-assisted context-aware image editing in Videcool.