Flux.1-dev Text to Image ComfyUI workflow for Videcool
The Flux.1-dev Text-to-Image workflow in Videcool provides a powerful and flexible way to generate high-quality images directly from text prompts. Designed for speed, clarity, and creative control, this workflow is served by ComfyUI and uses the FLUX.1-dev text-to-image model developed by Black Forest Labs.
What can this ComfyUI workflow do?
In short: Text to image conversion.
This workflow converts written text prompts into fully generated images using diffusion technology. It interprets your prompt and outputs detailed, coherent visuals with high fidelity. The underlying model is optimized for a native resolution of 1024×1024 but can also produce images in flexible aspect ratios.
Example usage in Videcool
Download the ComfyUI workflow
Download ComfyUI Workflow file: flux-image-generation.json
Image of the ComfyUI workflow
This figure provides a visual overview of the workflow layout inside ComfyUI. Each node is placed in logical order to establish a clean and efficient generation pipeline. The structure makes it easy to understand how the text encoders, model loader, sampler, and VAE interact. Users can modify or expand parts of the workflow to create custom variations.
Installation steps
Step 1: Download flux1-dev.safetensors into /ComfyUI/models/diffusion_models/flux1-dev.safetensors
Step 2: Download clip_l.safetensors into /ComfyUI/models/text_encoders/clip_l.safetensors
Step 3: Download t5xxl_fp8_e4m3fn.safetensors into /ComfyUI/models/text_encoders/t5xxl_fp8_e4m3fn.safetensors
Step 4: Download ae.safetensors into /ComfyUI/models/vae/ae.safetensors
Step 5: Download the flux-image-generation.json workflow file into your home directory
Step 6: Restart ComfyUI
Step 7: Open the ComfyUI graphical user interface (ComfyUI GUI)
Step 8: Load the flux-image-generation.json in the ComfyUI GUI
Step 9: Enter a text prompt into the "CLIP Text Encode (Positive Prompt)" node and click Run to generate an image
Step 10: Open Videcool in your browser, select text to image, and choose Flux1-Dev to generate an image
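Steps 1 through 4 can also be scripted. The sketch below, assuming a /ComfyUI install path and the download URLs from the Prerequisites section, builds the list of required files and fetches any that are missing. Note that the FLUX.1-dev repository is gated on Hugging Face, so a direct download may require an authenticated session or access token.

```python
import os
import urllib.request

# Model files from the Prerequisites section, keyed by their location
# relative to the ComfyUI models directory.
MODEL_FILES = {
    "diffusion_models/flux1-dev.safetensors":
        "https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors",
    "text_encoders/clip_l.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors",
    "text_encoders/t5xxl_fp8_e4m3fn.safetensors":
        "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors",
    "vae/ae.safetensors":
        "https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors",
}

def download_plan(comfyui_dir):
    """Return (url, destination) pairs for every required model file."""
    models_dir = os.path.join(comfyui_dir, "models")
    return [(url, os.path.join(models_dir, rel)) for rel, url in MODEL_FILES.items()]

def download_all(comfyui_dir):
    """Fetch any required model file that is not already present on disk."""
    for url, dest in download_plan(comfyui_dir):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if not os.path.exists(dest):  # skip files that were already downloaded
            urllib.request.urlretrieve(url, dest)
```

Calling download_all("/ComfyUI") (or your own install path) pulls roughly 23 GB of weights in total, so run it on a connection and disk that can handle it.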
Installation video
The workflow requires only a text prompt and a few basic parameter adjustments to begin generating images. After loading the JSON file, users can select guidance scale, sampling steps, resolution, and prompt text. Once executed, the sampler processes the latent representation and produces a final decoded image. The result can be saved and reused across other Videcool tools. Check out the following video to see the model in action:
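If you prefer to drive ComfyUI programmatically instead of through the GUI, the loaded workflow graph can be submitted over ComfyUI's HTTP API. This is a minimal sketch, assuming the workflow has been exported in API format (via "Save (API Format)" in the ComfyUI GUI) and that the server listens on the default address; adjust COMFYUI_URL if yours differs.

```python
import json
import urllib.request
import uuid

COMFYUI_URL = "http://127.0.0.1:8188"  # default ComfyUI address; adjust if needed

def build_payload(workflow, client_id=None):
    """Wrap an API-format workflow graph in the body the /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id or str(uuid.uuid4())}

def queue_workflow(workflow):
    """POST the workflow to ComfyUI and return the queued prompt_id."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```

The returned prompt_id identifies the job in ComfyUI's queue; the finished image is written by the Save Image node as usual.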
Prerequisites
To run the workflow correctly, download the following model files and place them into your ComfyUI directory. These files ensure the model can interpret language, convert prompts into latent embeddings, and decode the final images. Proper installation into the following location is essential before running the workflow: {your ComfyUI directory}/models.
ComfyUI/models/diffusion_models/flux1-dev.safetensors
https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors
ComfyUI/models/text_encoders/clip_l.safetensors
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
ComfyUI/models/text_encoders/t5xxl_fp8_e4m3fn.safetensors
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors
ComfyUI/models/vae/ae.safetensors
https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors
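Before launching the workflow, you can check that all four files landed in the right place. A minimal sketch, assuming the directory layout above:

```python
import os

# The four model files the workflow expects, relative to the ComfyUI directory.
REQUIRED = [
    "models/diffusion_models/flux1-dev.safetensors",
    "models/text_encoders/clip_l.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn.safetensors",
    "models/vae/ae.safetensors",
]

def missing_models(comfyui_dir):
    """Return the required model files that are not present under comfyui_dir."""
    return [rel for rel in REQUIRED
            if not os.path.isfile(os.path.join(comfyui_dir, rel))]
```

An empty result from missing_models means the workflow should load without "model not found" errors; anything listed still needs to be downloaded.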
How to use this workflow in Videcool
Videcool integrates seamlessly with ComfyUI, allowing users to load workflows directly and generate images without external complexity. After importing the workflow file, simply enter your prompt and click generate. The system handles all backend interactions with ComfyUI, making image creation intuitive and accessible even for users who don't want to learn how ComfyUI works. The following video shows how this model can be used in Videcool:
ComfyUI nodes used
This workflow uses the following nodes. Each node performs a specific role, such as loading models, encoding text, sampling, and finally decoding the output. Together they create a reliable and modular pipeline that can be easily extended or customized.
- EmptySD3LatentImage
- Load Diffusion Model
- DualCLIPLoader
- Load VAE
- CLIP Text Encode
- FluxGuidance
- KSampler
- VAE Decode
- Save Image
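A workflow exported in API format is a JSON object keyed by node id, where each node records its class_type and an inputs dict. As an illustrative sketch (the helper name is hypothetical, and the exact node ids depend on your export), the prompt and sampler settings of the nodes above can be patched programmatically:

```python
def set_prompt_and_steps(workflow, prompt_text, steps=20, seed=0):
    """Patch the prompt text and sampler settings in an API-format workflow.

    Patches every CLIPTextEncode node it finds; in this workflow only the
    positive prompt node matters, but adjust the logic if your export also
    contains a negative prompt node.
    """
    for node in workflow.values():
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = prompt_text
        elif node.get("class_type") == "KSampler":
            node["inputs"]["steps"] = steps
            node["inputs"]["seed"] = seed
    return workflow
```

Matching on class_type rather than node id keeps the helper working even if ComfyUI renumbers the nodes when the workflow is re-exported.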
Base AI model
This workflow is built on Black Forest Labs’ FLUX.1-dev model, a modern and highly capable diffusion-based text-to-image generator. FLUX.1-dev provides clarity, coherence, and creative flexibility, making it suitable for both artistic and commercial use cases. The model benefits from advanced training data and offers consistent results across a variety of styles. More details, model weights, and documentation can be found on the following links:
Hugging Face repository: https://huggingface.co/black-forest-labs/FLUX.1-dev
Official GitHub repository: https://github.com/black-forest-labs/flux
Developer: Black Forest Labs
API documentation: https://docs.bfl.ai/quick_start/introduction
Image resolution
AI text-to-image models perform best when they generate images at the native resolution used during training. For this model, the recommended settings are:
Native image size: 1024×1024 px
The model supports other resolutions as well; the best results come from dimensions that are multiples of 32 px.
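Since the best resolutions are multiples of 32 px, a small helper can snap arbitrary dimensions to the nearest valid size before they are entered into the workflow. This is an illustrative sketch, not part of the workflow itself:

```python
def snap_to_multiple(value, step=32):
    """Round a dimension to the nearest multiple of `step` (minimum one step)."""
    return max(step, round(value / step) * step)

def flux_resolution(width, height):
    """Snap an arbitrary resolution to the nearest Flux-friendly one."""
    return snap_to_multiple(width), snap_to_multiple(height)
```

For example, flux_resolution(1000, 750) yields (992, 736), while the native 1024×1024 passes through unchanged.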
Conclusion
The Flux.1-dev Text-to-Image workflow is a robust, powerful, and user-friendly solution for generating AI-driven visuals in Videcool. With its combination of high-quality models, a modular ComfyUI pipeline, and seamless platform integration, it enables beginners and professionals alike to produce creative and commercial-grade images with ease. By understanding the workflow components and advantages, users can unlock the full potential of AI-assisted image generation in Videcool.