[Image: a character sheet of a man wearing a brown jacket]

Ever find yourself struggling to keep your characters looking the same across different projects? You're not alone! Consistent characters are crucial for a cohesive story.

This guide will show you how to maintain character consistency using a ComfyUI workflow, so you can save time and focus more on the creative aspects.

Let's explore how these resources can elevate your designs to the next level, bringing your ideas to life with consistency and flair. We'd like to credit Mickmumpitz for his outstanding work in the development of this workflow.

How to Use

One-Time Setup

💡
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, you'll need at least the Turbo 24GB machine, though we recommend the Ultra 48GB machine.
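If you're running ComfyUI locally instead, it's worth a quick sanity check that your GPU has comparable VRAM before loading this FLUX-based workflow. Here's a minimal sketch, assuming PyTorch is already installed in your environment:

```python
# Minimal VRAM check for a local install (assumes PyTorch is installed).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {total_gb:.1f} GB")
    if total_gb < 24:
        print("Warning: less than 24 GB of VRAM; the workflow may run out of memory.")
else:
    print("No CUDA GPU detected.")
```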

Custom Nodes

If there are red nodes in the workflow, it means the required custom nodes are missing. Install them so the workflow can run.

  1. Go to ComfyUI Manager > Click Install Missing Custom Nodes
[Image: install missing custom nodes]
  2. Check the list for any custom nodes that need to be installed and click Install.
[Image: check the list of missing custom nodes]
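If you're working from a local install, you can also install a missing node pack by hand instead of through the Manager. A minimal sketch, assuming git is on your PATH; the repository shown is only an example (ComfyUI-Impact-Pack supplies the FaceDetailer node this workflow uses):

```python
# Clone a missing custom node pack into ComfyUI's custom_nodes folder, then restart ComfyUI.
import subprocess
from pathlib import Path

COMFYUI_DIR = Path("ComfyUI")  # adjust to your install location
repo_url = "https://github.com/ltdrdata/ComfyUI-Impact-Pack"  # example; clone whatever the Manager lists as missing

target = COMFYUI_DIR / "custom_nodes" / repo_url.rsplit("/", 1)[-1]
if not target.exists():
    subprocess.run(["git", "clone", repo_url, str(target)], check=True)
print("Done. Restart ComfyUI so the new nodes are loaded.")
```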

Models

Download the recommended models (see the list below) using ComfyUI Manager > Install Models. Refresh or restart the machine after the files have downloaded.

  1. Go to ComfyUI Manager > Click Install Models
[Image: install models]
  2. When you find the model you're looking for, click Install, and press Refresh when you're finished.
[Image: check the list of models]

Model Path Source

Use the model path source if you prefer to install the models by copying each model's link address and pasting it into ThinkDiffusion MyFiles using the upload-by-URL method.

| Model Name | Model Link Address |
|---|---|
| flux1-dev-fp8.safetensors | 📋 Copy Path |
| t5xxl_fp8_e4m3fn.safetensors | 📋 Copy Path |
| clip_l.safetensors | 📋 Copy Path |
| ae.sft or ae.safetensors | 📋 Copy Path |
| FLUX.1-dev-ControlNet-Union-Pro or diffusion_pytorch_model.safetensors | 📋 Copy Path |
| 4x-ClearRealityV1.pth | 📋 Copy Path |
| face_yolov8m.pt | 📋 Copy Path |


Guide Table for Upload

| Recommended Models | Node’s Value Name | Node | ThinkDiffusion Upload File Directory |
|---|---|---|---|
| flux1-dev-fp8.safetensors | unet_name | Load Diffusion Model | …comfyui/models/unet/ |
| t5xxl_fp8_e4m3fn.safetensors | clip_name1 | DualClipLoader | …comfyui/models/clip/ |
| clip_l.safetensors | clip_name2 | DualClipLoader | …comfyui/models/clip/ |
| ae.sft | vae_name | Load VAE | …comfyui/models/vae/ |
| diffusion_pytorch_model.safetensors | control_net_name | Load Controlnet Model | …comfyui/models/controlnet/ |
| 4x-ClearRealityV1.pth | model_name | Load Upscale Model | …comfyui/models/upscale/ |
| face_yolov8m.pt | model_name | UltralyticsDetectorProvider | …comfyui/models/ultralytics/bbox/ |
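If you're running ComfyUI locally rather than uploading through MyFiles, you can fetch the same files straight into the matching model folders. The sketch below uses placeholder URLs (substitute the link addresses from the Model Path Source table above), and note that a stock local ComfyUI keeps upscalers under models/upscale_models rather than models/upscale:

```python
# Download the workflow's models into a local ComfyUI models folder.
# All URLs below are placeholders -- replace them with the real link addresses.
import urllib.request
from pathlib import Path

MODELS_DIR = Path("ComfyUI/models")  # adjust to your install

downloads = {
    "unet/flux1-dev-fp8.safetensors": "https://example.com/flux1-dev-fp8.safetensors",
    "clip/t5xxl_fp8_e4m3fn.safetensors": "https://example.com/t5xxl_fp8_e4m3fn.safetensors",
    "clip/clip_l.safetensors": "https://example.com/clip_l.safetensors",
    "vae/ae.sft": "https://example.com/ae.sft",
    "controlnet/diffusion_pytorch_model.safetensors": "https://example.com/diffusion_pytorch_model.safetensors",
    "upscale_models/4x-ClearRealityV1.pth": "https://example.com/4x-ClearRealityV1.pth",
    "ultralytics/bbox/face_yolov8m.pt": "https://example.com/face_yolov8m.pt",
}

for rel_path, url in downloads.items():
    dest = MODELS_DIR / rel_path
    dest.parent.mkdir(parents=True, exist_ok=True)  # create the target folder if needed
    if not dest.exists():
        print(f"Downloading {dest.name} ...")
        urllib.request.urlretrieve(url, dest)
```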

Procedures

| Steps | Default Node |
|---|---|
| Load the pose sheet image (find the pose sheet image above next to the workflow) | [Image: load an image] |
| Write a Prompt | [Image: write a prompt] |
| Set the Models | [Image: set the models] |
| Check the Generation Settings | [Image: check the generation settings] |
| Check the Default Settings for Image Enlargement and Face Repair | [Image: check the upscale image settings] |
| Check the Generated Image and its Poses | [Image: check the generated output and its poses] |
💡
When using the workflow, stick to the settings shown in the images under Procedures.
💡
RandomNoise can use a random seed, but the seeds in the Upscale and Face Detailer groups need to be set to Fixed. Otherwise, consistency may be affected.
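If you drive the workflow through ComfyUI's API rather than the UI, you can apply the same rule by patching the exported JSON. This is only an illustration, assuming a workflow saved in API format; the class and input names are assumptions, so check your own export:

```python
# Keep the first-pass RandomNoise seed random while pinning every other seed
# (e.g. FaceDetailer, upscale sampler) so re-runs stay consistent.
import json
import random

FIXED_SEED = 123456789  # any value works; just keep it the same between runs

with open("character_sheet_workflow_api.json") as f:  # hypothetical filename
    workflow = json.load(f)

for node in workflow.values():
    inputs = node.get("inputs", {})
    if node.get("class_type") == "RandomNoise":
        inputs["noise_seed"] = random.randint(0, 2**32 - 1)  # fresh seed for the base generation
    elif "seed" in inputs:
        inputs["seed"] = FIXED_SEED  # pin the detailer/upscale seeds

with open("character_sheet_workflow_api_fixed.json", "w") as f:
    json.dump(workflow, f, indent=2)
```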

Examples

Prompt Settings

Image Source - Use the provided pose sheet file
Prompt - Write a prompt that describes the character you want to create
Base model - flux1-dev-fp8.safetensors
Clip model - t5xxl_fp8_e4m3fn.safetensors, clip_l.safetensors
VAE model - ae.safetensors or ae.sft
Controlnet Model - FLUX.1-dev-ControlNet-Union-Pro or diffusion_pytorch_model.safetensors
Upscale Model - 4x-ClearRealityV1.pth
Face Model - face_yolov8m.pt

💡
We're using the same settings throughout the examples provided. The only difference is the prompt, which needs to be changed according to the character you want to create.
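If you want to batch several characters, the prompt swap can also be scripted against ComfyUI's HTTP API instead of editing the node in the UI. A minimal sketch, assuming a local instance on the default port and an API-format export; the filename, node id, and prompt text are placeholders:

```python
# Re-run the same workflow with only the prompt changed, queued via ComfyUI's /prompt endpoint.
import json
import urllib.request

with open("character_sheet_workflow_api.json") as f:  # hypothetical filename
    workflow = json.load(f)

# "6" is a hypothetical node id -- look up the id of your positive CLIP Text Encode node.
workflow["6"]["inputs"]["text"] = "a character sheet of a woman wearing a red coat"

request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(request).read().decode())
```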

If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

If you enjoy ComfyUI and you want to test out creating awesome animations, then feel free to check out this AnimateDiff tutorial here. Happy creating!