Flux Fill and Flux Redux Workflow

Ever needed to add something to an image that wasn't there before? That's where Flux Fill and Flux Redux come in – they're changing the game for image editing by making inpainting (filling in parts of images) look natural and professional.

With models such as Flux Fill and Flux Redux, image editors can now achieve results that were previously out of reach.

This tutorial walks through these methods and shows how to integrate reference images into your inpainting workflow. Whether you want to remove unwanted objects, restore damaged areas, or simply improve the overall look of your images, this guide will equip you with a solid workflow. Inpainting is not just about fixing flaws; it's about breathing new life into your images. So let's dive in.

What Are Flux Fill and Flux Redux?


Flux Fill and Flux Redux are specialized AI models from Black Forest Labs' FLUX.1 family, designed for image inpainting and editing tasks in ComfyUI and similar diffusion pipelines.

Flux Fill is an AI model specifically created for filling in missing or masked parts of images. It excels at generating content that seamlessly blends with the existing image, maintaining consistency in style, lighting, and context. It's particularly useful when you need to add new elements or remove unwanted objects from photos.

Flux Redux is a complementary model that conditions generation on a reference image. It encodes the reference through a vision encoder and feeds that information into the diffusion process, so the generated content picks up the reference's subject, style, colors, and lighting and integrates more convincingly with the original image.

Together, these models create a powerful workflow for image editing that produces more natural and professional results than standard inpainting approaches. They're particularly effective when used with reference images, which help guide the AI to generate content that matches your specific needs.
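If you prefer a script to a node graph, the same Flux Fill inpainting idea can be sketched with Hugging Face's diffusers library. This is a minimal sketch rather than the ComfyUI workflow itself: it assumes a recent diffusers release that ships FluxFillPipeline, access to the gated FLUX.1-Fill-dev weights, and placeholder image/mask paths.

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Main image plus a mask where white marks the region to fill -- placeholder paths.
image = load_image("input.png")
mask = load_image("mask.png")

# Load the Fill model (the flux1-fill-dev weights) in bf16 on the GPU.
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

result = pipe(
    prompt="a ceramic plate with a fork and knife on a wooden table",
    image=image,
    mask_image=mask,
    height=image.height,              # keep the original resolution
    width=image.width,
    guidance_scale=30.0,              # Fill-dev is tuned for high guidance
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
result.save("inpainted.png")
```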

Why Use Reference Images for Flux Inpainting?

 Inpaint using Flux
πŸ’‘
When I inpaint images, using a reference image helps me fill in missing or damaged parts by drawing on a high-quality input image. This makes the results look more real and natural. I use the reference image to keep the style, texture, and lighting consistent with the original picture, which makes the results more accurate, especially with complicated scenes or objects.
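In script form, the reference-image idea maps onto Flux Redux producing conditioning embeddings from the picture, which then stand in for (or supplement) the text prompt. Here's a minimal sketch assuming diffusers' FluxPriorReduxPipeline and the base FLUX.1-dev checkpoint; the reference path is a placeholder.

```python
import torch
from diffusers import FluxPriorReduxPipeline, FluxPipeline
from diffusers.utils import load_image

# Redux encodes the reference image into prompt embeddings.
redux = FluxPriorReduxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Redux-dev", torch_dtype=torch.bfloat16
).to("cuda")

# The base Flux pipeline runs without its text encoders;
# the Redux embeddings stand in for the text prompt.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder=None,
    text_encoder_2=None,
    torch_dtype=torch.bfloat16,
).to("cuda")

reference = load_image("reference.png")   # placeholder path
redux_output = redux(reference)           # prompt_embeds + pooled_prompt_embeds

images = pipe(
    guidance_scale=2.5,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0),
    **redux_output,
).images
images[0].save("redux-variation.png")
```

The ComfyUI workflow applies the same kind of Redux conditioning to the Fill model rather than the base model, which is roughly how the reference image steers what gets painted into the masked region.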

How to Run Image Inpainting with Reference Images in ComfyUI

Installation guide

πŸ’‘
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, the minimum requirement is the Turbo 24GB machine, but we recommend the Ultra 48GB machine.

Custom Nodes

If there are red nodes in the workflow, it means the workflow is missing required custom nodes. Install them so the workflow can run.

  1. Go to ComfyUI Manager > Click Install Missing Custom Nodes
  2. Check the list of missing custom nodes that appears and click Install for each one.

Models

For this guide you'll need to download the six models listed below, plus one model that will be auto-downloaded.

πŸ’‘
- sigclip_vision_patch14_384.safetensors
- flux1-fill-dev.safetensors
- flux1-redux-dev.safetensors
- t5xxl_fp16.safetensors
- clip_l.safetensors
- gokaygokay/Florence-2-SD3-Captioner (auto-download)
- ae.safetensors

Models can be downloaded from Comfy.Org or the model manager.
  1. Go to ComfyUI Manager > Click Model Manager
  2. Search for each model listed above, click Install on the exact match, and press Refresh when you're finished.

Optional Model Path Source

Some of these models may not be available in the model manager.

In that case, you can use the model path source instead: paste the model's link address into ThinkDiffusion MyFiles using Upload URL. A scripted alternative is sketched after the table below.

| Model Name | Model Link Address | ThinkDiffusion Upload Directory |
| --- | --- | --- |
| sigclip_vision_patch14_384.safetensors | πŸ“‹ Copy Path | .../comfyui/models/clip_vision/ |
| flux1-fill-dev.safetensors | πŸ“‹ Copy Path | .../comfyui/models/diffusion_models/ |
| flux1-redux-dev.safetensors | πŸ“‹ Copy Path | .../comfyui/models/style_models/ |
| t5xxl_fp16.safetensors | πŸ“‹ Copy Path | .../comfyui/models/clip/ |
| clip_l.safetensors | πŸ“‹ Copy Path | .../comfyui/models/clip/ |
| gokaygokay/Florence-2-SD3-Captioner | auto-download | auto-download |
| ae.safetensors | πŸ“‹ Copy Path | .../comfyui/models/vae/ |
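If you'd rather fetch the files by script (for example on a local install), something like the following works with the huggingface_hub package. The repository IDs below are my best guesses at where each file is published, so verify them before use; the Black Forest Labs repos are gated and need a logged-in Hugging Face account, and the target directory assumes a standard ComfyUI layout.

```python
from huggingface_hub import hf_hub_download

# (repo_id, filename, ComfyUI models subfolder)
# Repo IDs are assumptions -- double-check them on Hugging Face before running.
MODELS = [
    ("black-forest-labs/FLUX.1-Fill-dev",  "flux1-fill-dev.safetensors",             "diffusion_models"),
    ("black-forest-labs/FLUX.1-Redux-dev", "flux1-redux-dev.safetensors",            "style_models"),
    ("comfyanonymous/flux_text_encoders",  "t5xxl_fp16.safetensors",                 "clip"),
    ("comfyanonymous/flux_text_encoders",  "clip_l.safetensors",                     "clip"),
    ("Comfy-Org/sigclip_vision_384",       "sigclip_vision_patch14_384.safetensors", "clip_vision"),
    ("black-forest-labs/FLUX.1-dev",       "ae.safetensors",                         "vae"),
]

for repo_id, filename, subfolder in MODELS:
    # The gated repos require `huggingface-cli login` (or an HF_TOKEN env var) first.
    path = hf_hub_download(
        repo_id=repo_id,
        filename=filename,
        local_dir=f"ComfyUI/models/{subfolder}",  # adjust to your install path
    )
    print(f"downloaded {filename} -> {path}")
```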

Step-by-step Workflow Guide

This workflow was pretty easy to set up and runs well from the default settings. Here are a few steps where you might want to take extra note.

1. Load Image

Load the main and reference images. Keep the default settings for Florence. The reference image serves as the subject for inpainting and should be high quality, high resolution, and not blurred. (A scripted way to prepare an inpaint mask is sketched after these steps.)
2. Set Models

Set the models exactly as shown in the image.
3. Check the Prompt

Check whether the prompt generated by Florence describes the reference image as well as the main image. The generated prompt can't be edited directly, but you can add extra prompt text in the next node.
4. Check the Generation Settings

Keep the settings as shown in the image.
5. Check the Generated Image

There are two image preview nodes: the left one compares the reference image with the inpainted result, and the right one previews the cropped image.
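If you prepare masks outside ComfyUI's built-in mask editor, the convention is simple: the mask is a grayscale image the same size as the main image, with white over the region to be replaced. A minimal Pillow sketch (the file names and box coordinates are placeholders):

```python
from PIL import Image, ImageDraw

main = Image.open("input.png")

# Start from an all-black mask (nothing inpainted), then paint the
# area to replace in white -- here just a placeholder rectangle.
mask = Image.new("L", main.size, 0)
draw = ImageDraw.Draw(mask)
draw.rectangle((400, 300, 800, 650), fill=255)

mask.save("mask.png")
```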

Examples

Below are the examples I generated, along with the various input reference images. They show a significant improvement in the main image: the newly placed object adapts to the angles and depth of the scene.


Table Area Mockup

3 Reference Images: Pizza, Plate with Fork and Knife, Cake

Landscaping

3 Reference Images: Garden Bench, Garden Path, Garden Fountain

Interior Designing

4 Reference Images: Hanging TV and cabinet, Ceiling Lamp, Center Table, Last Supper Painting

Product Editing

1 Reference Image: Women's Floral Bag

Apparel Editing

2 Reference Images: Printed Black T-shirt, Tattered Khaki Jeans

If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

If you enjoy ComfyUI and want to test out HyperSD in ComfyUI and Blender in real time, feel free to check out Real-Time Creativity: Leveraging Hyper SD and Blender with ComfyUI. And have fun out there with your images!