
Ever feel like creating videos takes forever? You're not alone! That's where LTX and ComfyUI come in, making video creation smoother and faster. LTX speeds up the video-making process so you can focus on what really matters — telling your story and connecting with your audience. In this guide, we'll focus on image-to-video (img2vid) generation and explore the features that make LTX a game-changer. Let's jump in!

How to run LTX in ComfyUI

Installation guide

Custom Nodes

If there are red nodes in the workflow, it means the workflow is missing some required custom nodes. Install those custom nodes before running the workflow.

  1. Go to ComfyUI Manager > Click Install Missing Custom Nodes
  2. Check the list for any custom nodes that need to be installed and click Install.

Is your custom nodes list empty?

  1. If your list of missing custom nodes is empty, you might already have LTX installed, but as an older version. Go to Custom Nodes Manager.
  2. Search for LTX in the list and update ComfyUI-LTXVideo. Click Restart, then click Refresh to reload the node definitions in your workspace. Open the workflow again and the issue should be resolved.

Models

For this guide you'll need 3 models: 1 is auto-downloaded and 2 must be downloaded manually.

💡
Download the necessary models (ltx-video-2b-v0.9.1.safetensors & t5xxl_fp8_e4m3fn.safetensors) using the ComfyUI Model Manager.
  1. Go to ComfyUI Manager > Click Model Manager
  2. When you find the model you're looking for, click Install, and press Refresh once the download finishes.

Model Path Source

Use the model path source if you prefer to install the models by copying the model's link address and pasting it into ThinkDiffusion MyFiles using Upload URL.

| Model Name | Model Link Address | ThinkDiffusion Upload Directory |
| --- | --- | --- |
| Florence 2 model | Auto-downloaded by node | Auto-uploaded by node |
| ltx-video-2b-v0.9.1.safetensors | 📋 Copy Path | ...comfyui/models/checkpoints/ |
| t5xxl_fp8_e4m3fn.safetensors | 📋 Copy Path | ...comfyui/models/clip/ |
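If you run ComfyUI locally instead of on ThinkDiffusion, the manual step above can be sketched as a small script. This is not part of the guide — the install path is an assumption (adjust it to your setup), and the placeholder link addresses must be replaced with the real ones copied from the table above:

```python
from pathlib import Path

# Assumed local install path -- adjust to your own ComfyUI folder.
models = Path.home() / "comfyui" / "models"

# Make sure the two target folders from the table exist.
for sub in ("checkpoints", "clip"):
    (models / sub).mkdir(parents=True, exist_ok=True)

# Paste the real link addresses in place of the placeholders,
# then uncomment to download:
# import urllib.request
# urllib.request.urlretrieve("<link address>",
#     models / "checkpoints" / "ltx-video-2b-v0.9.1.safetensors")
# urllib.request.urlretrieve("<link address>",
#     models / "clip" / "t5xxl_fp8_e4m3fn.safetensors")
```

After the files land in those folders, press Refresh in ComfyUI so the loader nodes can see them.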

Step-by-step Workflow Guide

Steps Recommended Nodes
1. Load an Image ltx-v2-load-image.png
2. Set the Size

Set the size to the recommended 768x512 resolution. The model works best at resolutions under 720.
ThinkDiffusion-StableDiffusion-ComfyUI-LTX-new-resize-image.png
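The resize node handles this for you, but the sizing logic can be sketched in plain Python. The multiple-of-32 rounding here is a common constraint for video diffusion models — an assumption on my part, not something the guide states:

```python
def fit_resolution(width, height, max_w=768, max_h=512, multiple=32):
    """Scale (width, height) to fit inside max_w x max_h, keeping the
    aspect ratio and rounding each side down to a multiple of 32
    (assumed constraint, not from the LTX docs)."""
    scale = min(max_w / width, max_h / height)
    new_w = max(int(width * scale) // multiple * multiple, multiple)
    new_h = max(int(height * scale) // multiple * multiple, multiple)
    return new_w, new_h

print(fit_resolution(1920, 1080))  # (768, 416)
print(fit_resolution(768, 512))   # (768, 512)
```

A 16:9 source ends up slightly shorter than 512 because the aspect ratio is preserved; cropping to 3:2 first would give you the full 768x512.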
3. Set the Models ltx-v2-set-the-models.png
4. Check the Caption and Prompt

You don't need to write anything in the 3rd text box because it is filled in automatically by Florence.

Florence's image captions don't describe the motion happening in the scene, so add a motion description yourself in the 1st or 2nd text box.
ltx-v2-set-the-caption-and-prompt.png
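Conceptually, the final prompt is your hand-written motion text combined with Florence's scene caption. A toy sketch of that combination (the function name is illustrative, not an actual workflow node):

```python
def build_prompt(motion, florence_caption):
    # Join the hand-written motion description with the automatic
    # scene caption, skipping whichever part is empty.
    parts = [p.strip() for p in (motion, florence_caption) if p and p.strip()]
    return " ".join(parts)

print(build_prompt(
    "The camera slowly pans right while the man strums the guitar.",
    "A man sitting on a stool playing an acoustic guitar.",
))
```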
5. Set the Generation Settings

Follow the settings as seen on the image.
ltx-v2-set-the-generation-settings.png
6. Check the Generated Video ltx-v2-check-generated-video.png

Examples


Man playing a guitar.


Illustration of a running motorcycle.


Drone view from above.


Man relaxing at the port.


If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

If you enjoy ComfyUI and you want to test out creating awesome animations, then feel free to check out this AnimateDiff tutorial here. And have fun out there with your videos!