This guide covers ATI (Any Trajectory Instruction) - ByteDance's tool for controlling motion in AI-generated videos. You'll learn what it does, how to set it up in ComfyUI, and how to use it to create videos with precise movement control.


ATI takes an image and user-drawn motion paths, then generates video following those exact trajectories. Instead of describing movement in text prompts, you draw where things should go.

How it works at a high level:

  • Load an image
  • Draw motion paths on it using the Spline Editor
  • ATI generates video following your paths
  • Control objects, people, or camera movement

This replaces older methods that needed separate tools for different motion types. Let's dive into the details.

What is ATI?


Source: Bytedance ATI

ATI, or Any Trajectory Instruction, is a new framework for controllable video generation that lets users guide how objects, people, or even the camera move in a video by simply drawing or specifying motion paths (trajectories) on an image. 

Source: Bytedance ATI
💡
To generate videos, ATI starts by taking an image along with user-defined trajectories. These point-wise motion paths are integrated into the latent conditioning of the model, allowing for highly flexible control over the resulting animation. The DiT model then denoises the latent representation, and the final video is decoded from this processed data. This approach enables ATI to produce videos with adaptable motion, tailored to the user’s input trajectories.

Unlike older methods that needed separate tools for different kinds of motion, ATI combines everything into one system: you can control local deformations, full object movements, or camera angles—all at once—by setting keypoints and their paths. These user-defined trajectories are injected into powerful AI video models, ensuring the generated video follows your instructions smoothly and realistically.
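As a rough mental model, the flow described above can be traced in a few lines of toy code. The function names below are hypothetical stand-ins for the real ATI/Wan components, not an actual API; the sketch only illustrates how the image, trajectories, and prompt move through the pipeline.

```python
# Toy trace of the ATI generation flow described above. These functions are
# stand-ins that only illustrate the data flow; they are not the real model.

def encode_image_to_latent(image):
    # The input image becomes the initial latent that the model will animate.
    return {"source": image, "noise": 1.0}

def build_trajectory_conditioning(trajectories, prompt):
    # User-drawn, point-wise motion paths are folded into the conditioning
    # alongside the text prompt.
    return {"trajectories": trajectories, "prompt": prompt}

def dit_denoise_step(latent, conditioning):
    # Each DiT step removes part of the noise, guided by the conditioning.
    latent["noise"] *= 0.8
    return latent

def decode_latent_to_video(latent, num_frames=81):
    # The cleaned-up latent is decoded into the final frames (81 at 16 fps for ATI).
    return [f"frame_{i}" for i in range(num_frames)]

def generate_video(image, trajectories, prompt, num_steps=30):
    latent = encode_image_to_latent(image)
    conditioning = build_trajectory_conditioning(trajectories, prompt)
    for _ in range(num_steps):
        latent = dit_denoise_step(latent, conditioning)
    return decode_latent_to_video(latent)

frames = generate_video("input.png", trajectories=[[(80, 360), (480, 360)]],
                        prompt="a boat drifting to the right")
```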


ATI makes it easy for anyone to create videos with precise, custom motion, opening up new creative possibilities and giving users much more control over how their videos look and move.

Buckle up and get ready to explore a new era where your creative journey is as exciting as the destination!

How to Use ATI for Controlled AI Video Generation

Installation guide

  1. Download the workflow file
  2. Open ComfyUI (local or ThinkDiffusion)
  3. Drag the workflow file into the ComfyUI window
  4. If you see red nodes, install missing components:
    • ComfyUI Manager > Install Missing Custom Nodes

Verified to work on ThinkDiffusion Build: June 6, 2025

ComfyUI v0.3.40 using the Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors model

Note: We specify the build date because ComfyUI and custom node versions updated after this date may change the behavior or outputs of the workflow.

💡
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, the minimum requirement is the Turbo 24GB machine, but we recommend the Ultra 48GB machine.

Custom Nodes

If there are red nodes in the workflow, it means some required custom nodes are missing. Install them so the workflow can run.

  1. Go to the ComfyUI Manager > Click Install Missing Custom Nodes.
  2. Review the list of missing custom nodes that appears and click Install for each one.

Required Models

For this guide you'll need to download these 5 recommended models.

1. Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors
2. umt5-xxl-enc-bf16.safetensors
3. Wan2_1_VAE_fp32.safetensors
4. clip_vision_h.safetensors
5. detailz-wan.safetensors
  1. Go to ComfyUI Manager > Click Model Manager.
  2. Search for each model above, click Install on the exact match, and press Refresh when you're finished.

If Model Manager doesn't have them: use the direct download links (included with the workflow) and upload the files through ThinkDiffusion MyFiles > Upload URL. Refer to our docs for more guidance on this.

You can also paste each model's link address directly into ThinkDiffusion MyFiles using Upload URL; the list below shows where each file should go.

Model Name → ThinkDiffusion Upload Directory

  • Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors → .../comfyui/models/diffusion_models/
  • umt5-xxl-enc-bf16.safetensors → .../comfyui/models/text_encoders/
  • Wan2_1_VAE_fp32.safetensors → .../comfyui/models/vae/
  • clip_vision_h.safetensors → .../comfyui/models/clip_vision/
  • detailz-wan.safetensors → .../comfyui/models/lora/
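If you're running ComfyUI locally and prefer to script the downloads, here is a minimal sketch. The URLs are placeholders (substitute the direct download links included with the workflow), and the ComfyUI root path is an assumption about your install; note that local installs typically use models/loras rather than the models/lora path shown above.

```python
import os
import urllib.request

# Adjust to your local ComfyUI install; this path is an assumption.
COMFYUI_ROOT = os.path.expanduser("~/ComfyUI")

# Placeholder URLs -- replace with the direct download links included with the workflow.
MODELS = {
    "diffusion_models/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors": "https://example.com/REPLACE_ME",
    "text_encoders/umt5-xxl-enc-bf16.safetensors": "https://example.com/REPLACE_ME",
    "vae/Wan2_1_VAE_fp32.safetensors": "https://example.com/REPLACE_ME",
    "clip_vision/clip_vision_h.safetensors": "https://example.com/REPLACE_ME",
    "loras/detailz-wan.safetensors": "https://example.com/REPLACE_ME",  # some installs name this folder "lora"
}

for rel_path, url in MODELS.items():
    dest = os.path.join(COMFYUI_ROOT, "models", rel_path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if os.path.exists(dest):
        print(f"already present: {dest}")
        continue
    print(f"downloading {url} -> {dest}")
    urllib.request.urlretrieve(url, dest)
```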

Step-by-step Workflow Guide

This workflow is easy to set up and runs well with the default settings. Here are a few steps where you'll want to pay extra attention.


There are two stages to keep in mind.
Stage 1 - Pre-setup: run the prompt once before you enable the Stage 2 group nodes.
Stage 2 - Once the pre-setup has run, enable the Stage 2 group nodes, adjust the necessary settings, and run the prompt again.

Important: Run both stages every time you use a new image.

Whenever I load a new image, I repeat both Stage 1 and Stage 2. This ensures each image goes through the necessary setup steps, keeping results consistent before moving on to the rest of the workflow.
  1. Pre-Setup Load Image (Stage 1)
  Load a high-quality image, then run the prompt once so the resized image is applied to the Spline Editor. Set your desired trajectory path. Note that the Stage 2 nodes must stay disabled for this step.

  2. Set the Models (Stage 2)
  Enable the Stage 2 group nodes and select the required models from the Required Models section.

  3. Write the Prompt (Stage 2)
  Write a prompt that describes the subject, its movement, and the setting or background (for example, "a sailboat drifting slowly to the right across a calm lake at sunset").

  4. Check the Sampling (Stage 2)
  Check the sampling settings and keep them as configured in the workflow, though you can experiment with the number of steps. The ATI model only supports 81 frames at 16 fps (about 5 seconds of video).

  5. Check the Output (Stage 2)
  Review the generated output. If you're not satisfied with the result, adjust the trajectory path (its length, its curve, or make it a fixed point) and rerun the prompt.
I always make sure to use a high-quality, high-resolution image when working with the spline editor. When the image is resized within the editor, starting with a higher resolution ensures that the final output maintains excellent visual quality.
If I want to keep a subject or a specific part of the background stationary while applying a trajectory, I simply include a fixed trajectory path in the spline editor. This is represented by a single dot, indicating no movement. This technique gives me precise control over which elements stay in place and which ones move, allowing for more intentional and polished results in my projects.
If the generated result doesn’t align with my expectations or doesn’t follow the intended trajectory, I revisit the trajectory settings to make adjustments. I can modify the length and curve of the path, or even add a fixed segment to better control the movement. By fine-tuning these parameters, I’m able to guide the output more accurately and achieve the desired direction and quality in my project.

Controlling Motion

Making things move: Draw curved lines in the Spline Editor to show motion paths.

Keeping things still: Add a single dot (fixed trajectory) where you want no movement.

If results don't match your paths: Adjust the trajectory length, curve, or add more fixed points. Then regenerate.

Image quality tip: Start with high-resolution images. The Spline Editor resizes them, so higher input = better output.
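Conceptually, a trajectory is just an ordered list of (x, y) points, one per output frame, and a "fixed" trajectory is the degenerate case where every frame points at the same coordinate. The sketch below only illustrates that idea; it is not the exact data format the Spline Editor node produces.

```python
# Conceptual illustration of ATI-style trajectories (not the Spline Editor's exact format).

NUM_FRAMES = 81  # ATI generates 81 frames at 16 fps

def linear_path(start, end, num_frames=NUM_FRAMES):
    """A straight motion path from start to end, one (x, y) per frame."""
    (x0, y0), (x1, y1) = start, end
    return [
        (x0 + (x1 - x0) * t / (num_frames - 1),
         y0 + (y1 - y0) * t / (num_frames - 1))
        for t in range(num_frames)
    ]

def fixed_point(point, num_frames=NUM_FRAMES):
    """A 'single dot' trajectory: the point stays put for the whole clip."""
    return [point] * num_frames

# Example: move a subject from the left edge toward the center,
# while keeping a background landmark anchored in place.
subject_path = linear_path((80, 360), (480, 360))
background_anchor = fixed_point((600, 200))
```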

Examples

[Five short example clips, 3–5 seconds each, generated with this workflow]

If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.