Prompt: This man in the kitchen.

Flux Kontext lets you edit images precisely using text descriptions. Tell it what to change, and it modifies only those specific parts while keeping the rest intact. This is a game-changer for control, and it delivers incredible quality.

What is Flux Kontext?

Source: Flux Kontext

Flux Kontext is an AI image editing model by Black Forest Labs that excels at targeted modifications. Instead of generating entirely new images, it edits existing ones based on your text instructions.

Core capabilities:

  • Local editing: Change specific parts without affecting the whole image
  • Character consistency: Keep people looking the same across multiple edits
  • Style transfer: Apply artistic styles to existing images
  • Multi-round editing: Make several edits in sequence
  • Object manipulation: Add, remove, or modify objects

Available versions: Pro, Max, and Dev. This guide covers Flux1-kontext-dev, which is freely available under a non-commercial license.

💡
This guide covers Flux1-kontext-dev exclusively: its model weights are openly available under a non-commercial license, breaking the trend of powerful image editing models being locked behind proprietary APIs.

Multi-Round Editing (Iterative)

Source: Flux Kontext
💡
Flux1 Dev Kontext lets you perform multi-round image editing from a single input image, applying a series of targeted edits while keeping the original style and details consistent. Each edit builds on the last, so you can refine or transform specific parts of your image step by step without losing quality or introducing inconsistencies. This fast, interactive workflow gives artists and creators precise, context-aware control over the editing process and makes it easy to experiment toward complex results.

How Multi-Round Editing Works

Flux Kontext's strength is iterative editing. You can:

  1. Start with one image
  2. Make a targeted edit
  3. Use the result for the next edit
  4. Continue refining step by step

Each edit builds on the previous one while maintaining visual consistency and quality.
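
If you'd like to script this loop outside ComfyUI, here's a minimal sketch of the same idea using Hugging Face diffusers. It assumes a recent diffusers release that ships FluxKontextPipeline, that you've accepted the model license on Hugging Face, and roughly 24 GB of VRAM; the prompts, filenames, and guidance value are illustrative.

```python
# Minimal sketch of multi-round editing (assumes a diffusers release that
# includes FluxKontextPipeline and a CUDA GPU with enough VRAM).
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")  # the single starting image

# Each round feeds the previous result back in as the new input image.
edits = [
    "Turn the SUV vehicle into green color",          # example prompts only
    "Convert to afternoon scene with soft golden sunset light",
]
for i, prompt in enumerate(edits, start=1):
    image = pipe(image=image, prompt=prompt, guidance_scale=2.5).images[0]
    image.save(f"edit_round_{i}.png")  # keep every round to spot artifacts early
```

Saving every intermediate round makes it easy to roll back the moment quality starts to slip, which matters given the limitation covered next.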

Limitations of Flux Kontext

Illustration of a FLUX.1 Kontext failure case: After six iterative edits, the generation is visually degraded and contains visible artifacts. Source: Limitations of Flux Kontext
💡
In my experience with FLUX.1 Dev Kontext, the current implementation has a few limitations. Extended multi-turn editing sessions sometimes introduce visual artifacts that reduce overall image quality. The model doesn't always follow instructions completely, occasionally missing specific details from prompts. Its limited world knowledge can make truly contextually accurate content hard to generate, and the distillation process can introduce its own artifacts that affect the fidelity of the final output.
  • Quality degradation: After 6+ iterative edits, images may show artifacts
  • Instruction following: Sometimes misses specific prompt details
  • Limited world knowledge: May struggle with contextually accurate content
  • Distillation artifacts: Processing can introduce visual issues

Get ready to unlock a smarter, sharper, and more creative approach to image generation. Your next masterpiece is just around the corner!

How to Use Flux Kontext in ComfyUI

Installation guide

  1. Download the workflow file
  2. Open ComfyUI (local or ThinkDiffusion)
  3. Drag the workflow file into the ComfyUI window
  4. If you see red nodes, install missing components:
    • ComfyUI Manager > Install Missing Custom Nodes
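
If you'd rather queue runs from a script than drag files around, ComfyUI also exposes a small HTTP API. Below is a minimal sketch assuming a local server on the default port 8188 and a workflow exported in API format (not the regular drag-and-drop JSON); workflow_api.json is an illustrative filename.

```python
# Sketch: queue a workflow on a running ComfyUI server (default port 8188).
# Assumes the workflow was exported in API format.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # returns a prompt_id you can poll for status
```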

Verified to work on ThinkDiffusion Build: June 27, 2025

ComfyUI v0.3.42 using flux1-kontext-dev.safetensors

Note: We specify the build date because ComfyUI and custom node versions updated after this date may change the behavior or outputs of the workflow.

Minimum Machine Size: Ultra

Use the specified machine size or higher to ensure it meets the VRAM and performance requirements of the workflow.

💡
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, the minimum requirement is the Turbo 24GB machine, but we recommend the Ultra 48GB machine.
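
If you're running locally instead and aren't sure your GPU is up to it, a quick check like this sketch can tell you before you queue anything (the 24 GB floor mirrors the Turbo-tier guidance above):

```python
# Quick sketch: check local GPU VRAM before committing to the full workflow.
# flux1-kontext-dev in fp16/bf16 is heavy; 24 GB is a reasonable floor.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
else:
    print("No CUDA device found")
```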

Custom Nodes

Red nodes in the workflow mean that required custom nodes are missing. Install them for the workflow to work.

  1. Go to the ComfyUI Manager > Click Install Missing Custom Nodes
  2. Check the list for any custom nodes that need to be installed and click Install.

Required Models

For this guide, you'll need to download these four models.

1. flux1-kontext-dev.safetensors
2. clip_l.safetensors
3. t5xxl_fp16.safetensors
4. ae.safetensors
  1. Go to ComfyUI Manager > Click Model Manager
  2. Search for each model listed above; when you find the exact model you're looking for, click Install, then press Refresh when finished.

If Model Manager doesn't have them: Use the direct download links (included with the workflow) and upload them through ThinkDiffusion MyFiles > Upload URL. Refer to our docs for more guidance on this.

You can also use the model's source path instead, by pasting its link address into ThinkDiffusion MyFiles using Upload URL.

Model Name → ThinkDiffusion Upload Directory

  • flux1-kontext-dev.safetensors → .../comfyui/models/diffusion_models/
  • clip_l.safetensors → .../comfyui/models/clip/
  • t5xxl_fp16.safetensors → .../comfyui/models/text_encoders/
  • ae.safetensors → .../comfyui/models/vae/
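
If you're running locally, a sketch like the one below fetches the files into matching folders with the huggingface_hub library. The repo IDs are assumptions about where these files are commonly hosted (the Kontext weights require accepting the license on Hugging Face), so verify them against the download links bundled with the workflow.

```python
# Sketch: download the four models into a local ComfyUI tree using
# huggingface_hub. Repo IDs below are assumptions; verify them against the
# download links that ship with the workflow before running.
from huggingface_hub import hf_hub_download

MODELS = [
    # (repo_id, filename, ComfyUI subfolder)
    ("black-forest-labs/FLUX.1-Kontext-dev", "flux1-kontext-dev.safetensors", "diffusion_models"),
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors", "clip"),
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp16.safetensors", "text_encoders"),
    ("black-forest-labs/FLUX.1-Kontext-dev", "ae.safetensors", "vae"),
]

for repo_id, filename, subfolder in MODELS:
    path = hf_hub_download(
        repo_id=repo_id,
        filename=filename,
        local_dir=f"ComfyUI/models/{subfolder}",  # adjust to your install path
    )
    print("saved", path)
```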

Step-by-step Workflow Guide

This workflow is easy to set up and runs well with the default settings. Here are a few steps worth extra attention.

1. Load an Image

Load a clear image, free from artifacts and blur. If you want to combine two images, enable the second Load Image node.

2. Set the Models

Set the models as shown in the image.

3. Write a Prompt

Write a simple prompt that names the subject and describes what you want in the output. See the examples below for guidance.

4. Check the Sampling

Use the settings as shown in the image. Do not raise the CFG, as higher values may cause artifacts.

5. Check the Output

Examples

Character Consistency

Prompt: Use this woman, create an image broadcasting news in television
Prompt: Use this woman, create an image running on a race track

Add / Edit Text

Prompt: Replace the "LAS VEGAS NEVADA" to "SEBASTIAN THE AI EXPERT"
Prompt: Replace the "Wheeler" to "Lonely"

Remove Objects

Prompt: Remove the Beard and moustache, revealing his cleaner face.
Prompt: Remove the bicycles, revealing their legs naturally.

Style Reference

Prompt: Using this style, create an image of New York city at night
Prompt: Using this crochet art style, create an image of a human family in the living room

Switch View

Prompt: Rotate the cat 180 degrees to view directly from behind the cat, showing its back and tail
Prompt: Rotate the camera 90 degrees to view directly from side of the car, showing its side while maintaining its color and shape

Multiple Input Images

Prompt: 1 sweet couple walking at the park and their eyes facing the camera. The woman is holding a bouquet of flowers and holds the man's arm.
Prompt: 2 pair of shoes displayed at the shoe store.

Change Light

Prompt: Convert to afternoon scene with soft golden sunset light and gentle dusk mist, maintaining the same composition and architectural details
Prompt: Convert to noon scene with sunlight above and gentle noon heat, maintaining the same composition and architectural details

Image Editing

Prompt: Turn the SUV vehicle into green color
Prompt: Turn his coffee into a barbeque

Restyle

Prompt: Transform to watercolor art style
Prompt: Transform this into minimalist lineart style, its an image about the landscape where mountains, sky and trees are visible

Tips for Better Results

  1. Start simple: Begin with basic edits before complex changes
  2. Be specific: Clear descriptions work better than vague ones
  3. Check quality: Monitor for artifacts after each edit
  4. Limit iterations: Avoid more than 5-6 sequential edits
  5. Use good source images: High-quality inputs produce better outputs

Troubleshooting

  • Red nodes: Install missing custom nodes via ComfyUI Manager
  • Model errors: Verify all 4 models are downloaded correctly (see the check after this list)
  • Poor results: Simplify prompts and retry
  • Artifacts: Reduce the CFG setting or start with a fresh image
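
For the model-errors case, this quick sketch confirms that all four files are where a standard local ComfyUI install expects them; adjust BASE for your own install or ThinkDiffusion's directory layout.

```python
# Sketch: confirm the four required model files exist in a standard local
# ComfyUI layout. Adjust BASE if your install lives elsewhere.
from pathlib import Path

BASE = Path("ComfyUI/models")  # assumption: default local layout
EXPECTED = {
    "diffusion_models/flux1-kontext-dev.safetensors",
    "clip/clip_l.safetensors",
    "text_encoders/t5xxl_fp16.safetensors",
    "vae/ae.safetensors",
}

for rel in sorted(EXPECTED):
    path = BASE / rel
    status = "OK" if path.is_file() else "MISSING"
    print(f"{status:7} {path}")
```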




If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.