
Prompt: A view of the forest in an upward camera view.

💡
Credits to the awesome Benji for this workflow.

Original Link - https://www.youtube.com/watch?v=b69Qs0wvaFE&t=311s

Uni3C is a model from Alibaba, integrated into ComfyUI, that converts static images into dynamic videos by transferring camera movement from a reference video. This tutorial covers the complete setup and usage.

What is Uni3C?

Source: The Uni3C project

Uni3C is a unified 3D-enhanced framework integrated into ComfyUI that enables precise, simultaneous control over both camera motion and human animation within video generation workflows. By leveraging a lightweight plug-and-play control module, Uni3C extracts and transfers motion—such as camera movements and character actions—from reference videos directly onto new scenes or images, eliminating the need for complex manual rigging or joint annotation.

This technology is especially valuable for digital artists, animators, filmmakers, virtual avatar creators, educators, and anyone in content creation seeking to bring static visuals to life with realistic, controllable movement—all with the creative freedom and modular workflow of ComfyUI.


Prompt: A woman and a man sitting, and it follows the camera angle.

Whether you’re just starting out or looking to elevate your animation game, let’s explore together how a few simple steps can transform your scenes from static to spectacular!

Download Workflow

Installation guide

  1. Download the workflow file
  2. Open ComfyUI (local or ThinkDiffusion)
  3. Drag the workflow file into the ComfyUI window
  4. If you see red nodes, install missing components:
  • ComfyUI Manager > Install Missing Custom Nodes

Verified to work on ThinkDiffusion Build: June 27, 2025

ComfyUI v0.3.44, using Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors and
Wan21_Uni3C_controlnet_fp16.safetensors.

Note: We specify the build date because ComfyUI and custom node versions updated after this date may change the behavior or outputs of the workflow.

Minimum Machine Size: Ultra

Use the specified machine size or higher to ensure it meets the VRAM and performance requirements of the workflow.

💡
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, the minimum requirement is the Turbo 24 GB machine, but we recommend the Ultra 48 GB machine.
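If you're running locally and aren't sure your GPU clears that bar, a quick VRAM check can help you decide between a local run and a larger cloud machine. A minimal sketch assuming PyTorch is installed; the 40 GB cutoff is only a rough guide based on the Ultra recommendation, not an official requirement:

```python
import torch

# Rough VRAM check before committing to a local run.
# The 40 GB threshold is an assumption based on the Ultra-class
# recommendation above, not an official figure for this workflow.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < 40:
        print("Consider an Ultra-class (48 GB) machine for this workflow.")
else:
    print("No CUDA GPU detected; use ThinkDiffusion or another cloud GPU.")
```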

Custom Nodes

If there are red nodes in the workflow, it means some required custom nodes are missing. Install them so that the workflow can run.

  1. Go to ComfyUI Manager > Click Install Missing Custom Nodes.
  2. Check the list below for any custom nodes that need to be installed and click Install (a command-line alternative is sketched below).
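If ComfyUI Manager isn't available (for example on a headless install), node packs can also be cloned straight into the custom_nodes folder. A minimal sketch; the ComfyUI path and the repository URL are placeholders, so substitute your install location and whatever the Manager reports as missing for this workflow:

```python
import subprocess
from pathlib import Path

# Clone missing custom node packs straight into ComfyUI's custom_nodes folder.
# Both the ComfyUI path and the repo URL below are placeholders; replace them
# with your install location and the repos ComfyUI Manager reports as missing.
COMFYUI_DIR = Path.home() / "ComfyUI"
MISSING_REPOS = [
    "https://github.com/example/required-node-pack",  # hypothetical example
]

for repo in MISSING_REPOS:
    target = COMFYUI_DIR / "custom_nodes" / repo.rstrip("/").split("/")[-1]
    if not target.exists():
        subprocess.run(["git", "clone", repo, str(target)], check=True)
# Restart ComfyUI afterwards so the new nodes are registered.
```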

Required Models

For this guide you'll need to download these 6 recommended models.

1. Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors
2. umt5-xxl-fp16.safetensors
3. Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
4. clip_vision_h.safetensors
5. Wan2_1_VAE_bf16.safetensors
6. Wan21_Uni3C_controlnet_fp16.safetensors
  1. Go to ComfyUI Manager > Click Model Manager.
  2. Search for the models above; when you find the exact model you're looking for, click Install, and make sure to press Refresh when you're finished.

If Model Manager doesn't have them: use the direct download links (included with the workflow) and upload them through ThinkDiffusion MyFiles > Upload URL. Refer to our docs for more guidance on this.

Alternatively, paste each model's link address into ThinkDiffusion MyFiles using Upload URL; a scripted alternative is sketched after the table below.

Model Name → ThinkDiffusion Upload Directory

Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors → .../comfyui/models/diffusion_models/
umt5-xxl-fp16.safetensors → .../comfyui/models/text_encoders/
Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors → .../comfyui/models/loras/
clip_vision_h.safetensors → .../comfyui/models/clip/
Wan2_1_VAE_bf16.safetensors → .../comfyui/models/vae/
Wan21_Uni3C_controlnet_fp16.safetensors → .../comfyui/models/diffusion_models/

(Copy each model's link address with the 📋 Copy Path button on the original post, or use the direct links included with the workflow.)
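If you'd rather script the downloads than use MyFiles, the mapping above can drive a small download helper. A minimal sketch assuming the requests library; the base path is a placeholder and the URLs are intentionally left blank for you to fill in from the links shipped with the workflow:

```python
import requests
from pathlib import Path

# Map each model file to its target subfolder inside comfyui/models/.
# URLs are intentionally left as placeholders; use the direct links
# provided with the workflow download.
COMFYUI_MODELS = Path("/path/to/comfyui/models")  # adjust to your install
MODELS = {
    "diffusion_models/Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors": "<url>",
    "text_encoders/umt5-xxl-fp16.safetensors": "<url>",
    "loras/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors": "<url>",
    "clip/clip_vision_h.safetensors": "<url>",
    "vae/Wan2_1_VAE_bf16.safetensors": "<url>",
    "diffusion_models/Wan21_Uni3C_controlnet_fp16.safetensors": "<url>",
}

for rel_path, url in MODELS.items():
    dest = COMFYUI_MODELS / rel_path
    dest.parent.mkdir(parents=True, exist_ok=True)
    if dest.exists() or url == "<url>":
        continue  # skip files already present or without a real URL
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```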

Step-by-step Workflow Guide

This workflow was pretty easy to set up and runs well with the default settings. Here are a few steps where you might want to pay extra attention.

1. Set the Height, Width and Frames

Set the frames up to 125. You can set the resolution to 720 or 1080. (These values can also be set programmatically; see the sketch after this list.)
2. Load an Input Video for Control Reference

Load any type of video; it must contain a scene with camera movement.
3. Load an Input Image

Upload an image that serves as the base image for the generation. It works with no subject or even with multiple subjects.
4. Set the Models

Set the models as shown in the image.
5. Write the Prompt and Check Sampling

Write a simple prompt that supports the intended camera or depth movement, and check the sampling settings as shown in the image.
6. Check the Output
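Once the workflow runs from the UI, the same settings (resolution, frame count) can also be driven through ComfyUI's HTTP API by exporting the workflow in API format. A minimal sketch; the filename, node ID, and input names are placeholders that depend on your own export:

```python
import json
import requests

# Queue the exported API-format workflow on a local ComfyUI instance.
# "uni3c_workflow_api.json", the node id "12", and the input names below
# are placeholders; inspect your own API export for the real values.
COMFYUI_URL = "http://127.0.0.1:8188"

with open("uni3c_workflow_api.json") as f:
    workflow = json.load(f)

# Example of overriding resolution / frame count on a sizing node.
workflow["12"]["inputs"]["width"] = 720
workflow["12"]["inputs"]["height"] = 720
workflow["12"]["inputs"]["length"] = 125

resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow})
resp.raise_for_status()
print("Queued prompt:", resp.json()["prompt_id"])
```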

💡
The initial generation isn’t always perfect, so I often need to review the outputs carefully and select the best one. Sometimes, if the results don’t meet my expectations, I find it necessary to rerun the prompt to achieve the desired outcome. This iterative approach helps ensure that I consistently produce high-quality and visually appealing results, even if it takes a few attempts to get everything just right.
💡
In my experience, this method may struggle to capture very small details in the input image. I’ve found it essential to start with a clear, high-quality image that’s taken from a close, direct viewpoint. Using such input not only helps preserve important features but also leads to more accurate and visually appealing results in the final output.
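One way to catch a weak input before spending time on a long render is a quick resolution check on the source image. A minimal sketch using Pillow; the 1024-pixel threshold is just an illustrative assumption, not a hard requirement of the workflow:

```python
from PIL import Image

# Quick sanity check on the input image before queueing a long video render.
# The 1024 px threshold is an illustrative assumption, not a hard requirement.
MIN_SIDE = 1024

img = Image.open("input_image.png")
w, h = img.size
print(f"Input image: {w}x{h}")
if min(w, h) < MIN_SIDE:
    print("Image is on the small side; fine detail may be lost in the output.")
```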

Examples


Prompt: A cat on the grass, and it follows the camera view.


Prompt: A view of the living room, and it follows the camera view.


Prompt: A view of a woman reading at the table.


Troubleshooting

Red Nodes: Install missing custom nodes through ComfyUI Manager
Out of Memory: Use smaller expansion factors or switch to Ultra machine
Poor Quality: Check input image resolution and adjust kontext strength
Visible Seams: Lower strength and ensure good prompt description

If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.