[Image: a crowded city generated with FLUX]

In early August 2024, a new player burst onto the ever-changing generative art scene: FLUX AI by Black Forest Labs. Official benchmarks show the open-source text-to-image suite FLUX outperforming many of the top models in the field. It has already become a useful tool for designers, artists, and content creators in today's fast-changing creative world, so let's dive in!


What is Flux?

[Image: a sunrise generated with FLUX]

Flux is a powerful model that creates detailed, realistic images from text prompts and other inputs. It is very flexible and works well with ComfyUI and Forge, two versatile interfaces for creating AI art. This makes it accessible not only to experts but also to beginners without a technical background.

[Image: a deer in a forest landscape generated with FLUX]

The image quality that FLUX produces is hard to match. It understands and follows even the most complicated textual descriptions, and it handles photorealistic renders, abstract art, typography, and many other artistic styles and content types remarkably well.

Text rendering and typography in Flux are outstanding.

Webinar: Intro to FLUX with Sebastian Kamph

Download the workflows for the webinar here:


Three Flavors of Flux AI

[Image: comparison of the major FLUX models]
Image Credit: Reddit - MangledAI

Flux AI comes in three flavors, each built for different needs:

  1. Flux Schnell: the fastest. A distilled, low-step version comparable to SDXL Lightning, optimized purely for speed. It focuses on reducing generation time, trading away some flexibility and advanced features, and a little image quality, to get there.
  2. Flux Dev: flexible and high quality. Compared to Schnell, Dev follows prompts more precisely, which makes it great for experimentation and prototyping. It prioritizes flexibility and ease of use over speed.
  3. Flux Pro: the premium edition, designed for enterprise and expert use. It aims to balance speed with advanced features and quality, offering competitive generation times (slower than Schnell but faster than Dev) while maintaining the highest-quality output.
💡
Flux Pro can only be accessed via the Black Forest Labs API and its authorized partners Replicate, fal.ai, and Mystic.

Model Quantization

Using the official flux1-dev.safetensors and flux1-schnell.safetensors models requires a machine with a lot of VRAM and RAM. Model quantization is a technique that significantly reduces storage requirements, optimizes memory usage, speeds up computation, and lowers power consumption.

Below are some of the quantized variants and how they compare (a rough memory estimate follows the list):

[Image: comparison of quantized FLUX models]
  • BNB-NF4-V2 - a 4-bit NormalFloat (NF4) quantization via bitsandbytes, released by @lllyasviel. Designed for efficiency, it delivers very fast generation with little loss in accuracy, letting you run complicated jobs in much less time. It works well on GPUs with less than 12GB of VRAM.
  • FP8 - an 8-bit floating-point quantization released by @Comfy-Org and @Kijai. It is much smaller than the original models and can run on 8GB of VRAM without a noticeable drop in the quality of text and fine details.
  • F16, Q2, Q3, Q4, Q5, Q6, Q8 - GGUF quantizations developed by @city96. With more than 12GB of VRAM, the Q8 version produces output with quality similar to FP16 at roughly twice the speed. The Q4 version can run on 8GB of VRAM and gives slightly better generation quality than NF4.
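
For a rough sense of what these quantization levels mean in practice, here is a small back-of-the-envelope sketch in Python that estimates weight sizes for a roughly 12-billion-parameter transformer like FLUX.1. The bit widths and parameter count are approximations, and real file sizes differ because some layers stay at higher precision and GGUF stores extra scale data.

```python
# Rough back-of-the-envelope weight sizes for a ~12B-parameter model
# (the FLUX.1 transformer) at different quantization levels. Real files
# differ somewhat because embeddings/norms often stay in 16-bit and
# GGUF adds per-block scales, so treat these as ballpark figures.
PARAMS = 12e9  # approximate parameter count of the FLUX.1 transformer

bits_per_weight = {
    "FP16 (original)": 16,
    "FP8": 8,
    "GGUF Q8": 8.5,   # ~8 bits + quantization scales
    "GGUF Q4": 4.5,   # ~4 bits + quantization scales
    "BNB NF4": 4.5,   # 4-bit NormalFloat + scales
}

for name, bits in bits_per_weight.items():
    gb = PARAMS * bits / 8 / 1e9
    print(f"{name:>16}: ~{gb:.1f} GB of weights")
```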

Clip Model

For FLUX workflows to work, you need a text encoder (clip) model such as t5xxl_fp8_e4m3fn.safetensors or t5xxl_fp16.safetensors. The Clip L model familiar from SDXL is also used alongside it for FLUX.

Google developed T5-XXL to handle a wide range of natural language processing tasks, such as translation, summarization, question answering, and text generation. CLIP, developed by OpenAI, learns visual concepts from natural-language descriptions, which makes it well suited to linking prompts and images.
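
To illustrate how the two text encoders divide the work, here is a minimal sketch using the Hugging Face transformers library. The model IDs and maximum lengths are illustrative assumptions (and T5-XXL is a very large download); in ComfyUI the DualCLIPLoader node takes care of loading both encoders for you.

```python
# Minimal sketch (assumes the `transformers` library and illustrative model IDs):
# FLUX-style pipelines pair a CLIP-L encoder (short, pooled prompt embedding)
# with a T5-XXL encoder (long, token-level prompt embeddings).
from transformers import CLIPTokenizer, CLIPTextModel, T5Tokenizer, T5EncoderModel

prompt = "a serene Japanese garden with cherry blossoms and a koi pond"

# CLIP-L: limited to 77 tokens, provides a single pooled vector per prompt.
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
clip_enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
clip_ids = clip_tok(prompt, padding="max_length", max_length=77,
                    truncation=True, return_tensors="pt")
pooled = clip_enc(**clip_ids).pooler_output           # shape: [1, 768]

# T5-XXL: handles much longer prompts, provides per-token embeddings.
t5_tok = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
t5_enc = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl")  # very large download
t5_ids = t5_tok(prompt, padding="max_length", max_length=256,
                truncation=True, return_tensors="pt")
sequence = t5_enc(**t5_ids).last_hidden_state          # shape: [1, 256, 4096]

print(pooled.shape, sequence.shape)
```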


How to Use FLUX

Platform Recommendation

It's highly recommended to use ComfyUI or Forge for FLUX text-to-image generation. In this guide we're going to use ComfyUI because it offers the most versatile interface.

One-Time Setup

Download the workflow above and drag & drop it into your ComfyUI window, whether you are running locally or on ThinkDiffusion. If you're using ThinkDiffusion, you need at minimum the Turbo 24gb machine, though we recommend the Ultra 48gb machine.

Custom Nodes

If there are red nodes in the workflow, it is missing some required custom nodes. Install them for the workflow to work.

  1. Go to ComfyUI Manager  > Click Install Missing Custom Nodes
[Screenshot: Install Missing Custom Nodes in ComfyUI Manager]
  2. Check the list of custom nodes that need to be installed and click Install.
[Screenshot: list of missing custom nodes]

Models

Download the recommended models (see list below) using ComfyUI Manager's Install Models menu. Refresh or restart the machine after the files have downloaded.

  1. Go to ComfyUI Manager  > Click Install Models
[Screenshot: Install Models in ComfyUI Manager]
  2. When you find the exact model you're looking for, click Install, and press Refresh when you are finished.
[Screenshot: searching for missing models]

Model Path Source

Use the model path source below if you prefer to install models by copying a model's link address and pasting it into ThinkDiffusion My Files using the URL upload option.

Model names:

  • flux1-dev.safetensors
  • flux1-schnell.safetensors
  • flux1-dev-fp8.safetensors
  • flux1-schnell-fp8.safetensors
  • flux1-dev-bnb-nf4-v2.safetensors
  • flux1-dev-F16.gguf
  • flux1-dev-Q2.gguf
  • flux1-dev-Q3.gguf
  • flux1-dev-Q4.gguf
  • flux1-dev-Q5.gguf
  • flux1-dev-Q6.gguf
  • flux1-dev-Q8.gguf
  • flux1-schnell-F16.gguf
  • flux1-schnell-Q2.gguf
  • flux1-schnell-Q3.gguf
  • flux1-schnell-Q4.gguf
  • flux1-schnell-Q5.gguf
  • flux1-schnell-Q6.gguf
  • flux1-schnell-Q8.gguf
  • ae.safetensors
  • t5xxl_fp8_e4m3fn.safetensors
  • t5xxl_fp16.safetensors
  • clip_l.safetensors

Guide Table for Upload

Recommended Models | Node's Value Name | Node | ThinkDiffusion Upload File Directory
flux1-dev.safetensors | unet_name | Load Diffusion Model | …/comfyui/models/unet/
flux1-schnell.safetensors | unet_name | Load Diffusion Model | …/comfyui/models/unet/
flux1-dev-fp8.safetensors | unet_name | Load Diffusion Model | …/comfyui/models/unet/
flux1-schnell-fp8.safetensors | unet_name | Load Diffusion Model | …/comfyui/models/unet/
flux1-dev-bnb-nf4-v2.safetensors | ckpt_name | CheckpointLoaderNF4 | TD Preloaded model
flux1-dev-F16.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q2.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q3.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q4.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q5.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q6.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-dev-Q8.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-F16.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q2.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q3.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q4.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q5.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q6.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
flux1-schnell-Q8.gguf | unet_name | Unet Loader (GGUF) | …/comfyui/models/unet/
ae.safetensors | vae_name | Load VAE | …/comfyui/models/vae/
t5xxl_fp8_e4m3fn.safetensors | clip_name | DualCLIPLoader | …/comfyui/models/clip/
t5xxl_fp16.safetensors | clip_name | DualCLIPLoader | …/comfyui/models/clip/
clip_l.safetensors | clip_name | DualCLIPLoader | …/comfyui/models/clip/
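
If you are running ComfyUI locally rather than uploading through ThinkDiffusion My Files, a short script can fetch files into the directories from the table above. This is only a sketch: the repo IDs and filenames are assumptions based on common Hugging Face hosting of these models, and some repos (such as FLUX.1-dev) are gated and require accepting the license and logging in with a Hugging Face token.

```python
# Sketch (assumes the `huggingface_hub` package; repo IDs and exact
# filenames are illustrative and may require accepting a license or
# logging in with a Hugging Face token, e.g. for FLUX.1-dev):
from huggingface_hub import hf_hub_download

COMFY = "ComfyUI/models"  # adjust to your local ComfyUI install path

downloads = [
    # (repo_id,                           filename,                        target subfolder)
    ("black-forest-labs/FLUX.1-dev",      "flux1-dev.safetensors",         "unet"),
    ("black-forest-labs/FLUX.1-dev",      "ae.safetensors",                "vae"),
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp8_e4m3fn.safetensors",  "clip"),
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors",            "clip"),
]

for repo_id, filename, subfolder in downloads:
    path = hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=f"{COMFY}/{subfolder}")
    print("saved to", path)
```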

Reminder

💡
Refresh or restart the machine after uploading the files to ThinkDiffusion's My Files.
💡
If you have downloaded the ae.sft model, just rename it to ae.safetensors. They are the same file; different sources simply use different names.
💡
You can upload models by copying the link address from the download button/icon on Civitai or Hugging Face and pasting it into the Upload section of ThinkDiffusion My Files.

Procedures

Now that the hard work is out of the way, let's get creative. Follow the steps from top to bottom; once everything is set up, the workflow runs with a single click.

💡
Select only one workflow when running a prompt, using the Group Bypasser switch node found in the center of the workflow.
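
If you prefer to queue prompts from a script instead of clicking Queue Prompt in the web UI, ComfyUI also exposes a small HTTP API. The sketch below assumes a workflow exported with Save (API Format) to a hypothetical workflow_api.json, the default local server address, and an example node id; all three will differ in your setup (and on ThinkDiffusion the server address is your machine's URL).

```python
# Minimal sketch: queue a FLUX workflow through ComfyUI's HTTP API.
# Assumes you exported the workflow with "Save (API Format)" to the
# hypothetical file workflow_api.json and that ComfyUI is listening
# on the default local address; adjust both for your setup.
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Tweak a node before queueing, e.g. the positive prompt text.
# "6" is a hypothetical node id; check the ids in your own export.
workflow["6"]["inputs"]["text"] = "a dense jungle with a hidden waterfall"

req = urllib.request.Request(
    f"{SERVER}/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # returns the queued prompt_id
```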

Guide for Original Models Workflow

  1. Select a Flux model. The Load Diffusion Model node specifies the name of the U-Net model to load; this name is used to locate the model within a predefined directory structure, so different U-Net models can be loaded dynamically. [Screenshot: Load Diffusion Model node] (Required change: YES)
  2. Check and select your desired dimensions and batch size. Set the height and width, which determine the dimensions of the generated output; batch size is the number of images per batch. [Screenshot: image resize and batch size nodes] (Required change: YES)
  3. Check the 4 nodes and set your preferred clip model. ModelSamplingFlux adjusts the model's sampling behavior based on resolution and other parameters, and DualCLIPLoader loads two CLIP models at once so their features can be combined. The euler sampler with the normal scheduler is highly recommended. [Screenshot: the 4 nodes] (Required change: YES)
  4. Write a text prompt for Flux, and check the seed and the default FluxGuidance value. CLIP Text Encode turns the text prompt into an embedding that guides the diffusion model toward specific images. FluxGuidance plays a role similar to CFG (classifier-free guidance) in Stable Diffusion, but its default value is lower than typical CFG settings (a sketch of these nodes in API format follows this list). [Screenshot: text encode and FluxGuidance nodes] (Required change: YES)
  5. Run the Queue Prompt.
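
For reference, here is an illustrative sketch of roughly what the core nodes of this workflow look like in ComfyUI's API (JSON) format, written as a Python dict. The node ids, wiring, and values are examples only, so check your own exported workflow rather than copying these verbatim.

```python
# Illustrative fragment of the core FLUX nodes in ComfyUI API format
# (node ids, links and defaults are examples only; your exported
# workflow_api.json will differ).
flux_core_nodes = {
    "1": {"class_type": "UNETLoader",          # "Load Diffusion Model"
          "inputs": {"unet_name": "flux1-dev.safetensors",
                     "weight_dtype": "default"}},
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "t5xxl_fp16.safetensors",
                     "clip_name2": "clip_l.safetensors",
                     "type": "flux"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a serene Japanese garden in springtime",
                     "clip": ["2", 0]}},
    "4": {"class_type": "FluxGuidance",        # FLUX's stand-in for CFG
          "inputs": {"conditioning": ["3", 0],
                     "guidance": 3.5}},        # default is lower than typical CFG
    "5": {"class_type": "ModelSamplingFlux",
          "inputs": {"model": ["1", 0],
                     "max_shift": 1.15, "base_shift": 0.5,
                     "width": 1024, "height": 1024}},
}
```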

Guide for FP8 and NF4 Workflow

  1. Use the Fast Bypasser to select a single Flux model. The bypasser decides which U-Net loader is active; the selected name is used to locate the model within a predefined directory structure. Enable only one model at a time. [Screenshot: Fast Bypasser] (Required change: YES)
  2. Check and select your desired dimensions and batch size. Set the height and width, which determine the dimensions of the generated output; batch size is the number of images per batch. [Screenshot: image resize and batch size nodes] (Required change: YES)
  3. Check the 4 nodes and set your preferred clip model. ModelSamplingFlux adjusts the model's sampling behavior based on resolution and other parameters, and DualCLIPLoader loads two CLIP models at once. The euler sampler with the normal scheduler is highly recommended. [Screenshot: the 4 nodes] (Required change: YES)
  4. Write a text prompt for Flux, and check the seed and the default FluxGuidance value. CLIP Text Encode turns the text prompt into an embedding that guides the diffusion model, and FluxGuidance plays a role similar to CFG (classifier-free guidance) in Stable Diffusion, with a lower default value. [Screenshot: text encode and FluxGuidance nodes] (Required change: YES)
  5. Run the Queue Prompt.

Guide for GGUF Workflow

  1. Select a Flux model with the Unet Loader (GGUF) node and select 2 clip models in the DualCLIPLoader. The loader specifies the name of the U-Net model to load, and DualCLIPLoader loads two CLIP models at once. The euler sampler with the normal scheduler is highly recommended. [Screenshot: DualCLIPLoader and Unet Loader (GGUF)] (Required change: YES)
  2. Write a text prompt for Flux, and check the seed and the default guidance value. The CLIPTextEncodeFlux node encodes the text prompt into an embedding that guides the diffusion model, and its guidance value plays a role similar to CFG (classifier-free guidance) in Stable Diffusion, with a lower default. [Screenshot: CLIPTextEncodeFlux and guidance] (Required change: YES)
  3. Check the KSampler and Resize nodes. The workflow uses a modded KSampler (Efficient) that can preview/output images and run scripts; this is where you set the Steps and CFG (see the fragment after the tips below). [Screenshot: KSampler (Efficient) with resize] (Required change: YES)
💡
If you have less than 32GB of system RAM, use the t5xxl_fp8_e4m3fn text encoder instead of the t5xxl_fp16 version.
💡
FP8 degrades quality slightly, so if you have the resources, the full 16-bit version is recommended.
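
To connect the GGUF workflow to the settings used in the prompt examples below (20 steps, CFG 1, euler sampler, normal scheduler), here is a small illustrative fragment. It shows a plain KSampler for simplicity, whereas the workflow above uses the modded KSampler (Efficient); the UnetLoaderGGUF class name comes from the ComfyUI-GGUF custom node pack, and node ids and link indices are examples only.

```python
# Illustrative API-format fragment for the GGUF variant: swap the loader
# for UnetLoaderGGUF (from the ComfyUI-GGUF custom nodes) and sample with
# the settings used in the examples below (20 steps, CFG 1, euler/normal).
# Node ids and link indices are examples only.
gguf_nodes = {
    "1": {"class_type": "UnetLoaderGGUF",
          "inputs": {"unet_name": "flux1-schnell-Q8.gguf"}},
    "10": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0],
                      "positive": ["4", 0],      # conditioning with guidance applied
                      "negative": ["7", 0],      # FLUX largely ignores negatives at CFG 1
                      "latent_image": ["8", 0],  # EmptyLatentImage output
                      "seed": 71612952798766,
                      "steps": 20, "cfg": 1.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 1.0}},
}
```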

Flux AI Prompt Examples

You can check the examples below together with their prompt settings.

The Dense Jungle

Prompt - A dense jungle filled with exotic wildlife and towering trees. Parrots chatter from the canopy, and a narrow trail winds through thick foliage to a hidden waterfall cascading into a crystal-clear pool.

Model - flux1-schnell-Q8.gguf, Steps 20, CFG 1, Clip 1 - t5xxl-fp8, Clip 2 - Clip L, Seed - 71612952798766, Sampler - Euler, Scheduler - Normal, Denoise 1

The Newsroom Meeting

Prompt - A bustling newsroom, with reporters hurrying to meet deadlines and phones ringing incessantly. Papers are scattered across desks, and large screens display breaking news from around the world.

Model - flux1-dev-Q8.gguf, Steps 20, CFG 1, Clip 1 - t5xxl-fp8, Clip 2 - Clip L, Seed - 1084972251415857, Sampler - Euler, Scheduler - Normal, Denoise 1

The Japanese Garden

Prompt - A serene Japanese garden in springtime, complete with cherry blossoms in full bloom. A stone path winds through manicured greenery, leading to a tranquil koi pond and a traditional tea house.

Model - flux1-dev-bnb-nf4.safetensors, Steps 20, CFG 1, Clip 1 - t5xxl-fp8, Clip 2 - Clip L, Seed - 1066588834590345, Sampler - Euler, Scheduler - Normal, Denoise 1

The Crowded Carnival

Prompt - A crowded carnival at dusk, with the sounds of laughter and the smell of popcorn in the air. Brightly colored rides and game booths line the midway, and the Ferris wheel lights up against the darkening sky.

Model - flux1-dev-bnb-nf4.safetensors, Steps 30, CFG 1, Clip 1 - t5xxl-fp16, Clip 2 - Clip L, Seed - 635804676789378, Sampler - Euler, Scheduler - Normal, Denoise 1


Resources

Intro to Flux - Google Drive

It contains samples of generated Flux art.


If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

If you enjoy ComfyUI and you want to test out creating awesome animations, then feel free to check out this AnimateDiff tutorial here.