
Outpaint Your Videos with Wan2.1 VACE!

Ever wished you could magically expand your videos to reveal what’s just out of frame - like adding more scenery, characters, or even special effects?

This cutting-edge AI model lets you effortlessly extend the edges of your videos, filling in new, seamless content that matches your original footage. Whether you’re a filmmaker, content creator, or just someone who loves to play with video magic, Wan VACE video outpainting invites you to break free from the limits of the frame and explore endless creative possibilities.

What is VACE Video Outpainting?


Wan VACE video outpainting lets you expand your videos beyond their original edges, adding new, realistic content around the edges of each frame. Imagine you have a video that’s cropped too tightly—maybe you want to see more of the background or add extra scenery. This process keeps everything smooth and natural, so your expanded video feels just like the real thing. In short, Wan VACE video outpainting makes your videos bigger and more immersive, with no complicated editing required.
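To make the idea concrete, here is a minimal sketch of the canvas math behind outpainting: the original frame is preserved, and the model synthesizes only the new border region. The function name and margin values are illustrative, not part of the workflow.

```python
# Conceptual sketch: outpainting enlarges the canvas around each frame.
# The original pixels are kept; the border region is what the model fills in.

def outpaint_canvas(width, height, left=0, right=0, top=0, bottom=0):
    """Return the expanded canvas size and the box (x, y, w, h)
    where the original frame sits inside it."""
    new_w = width + left + right
    new_h = height + top + bottom
    original_box = (left, top, width, height)
    return (new_w, new_h), original_box

# Expanding a 512x512 clip by 128 px on each side gives a 768x768 canvas
# with the source frame anchored at (128, 128).
size, box = outpaint_canvas(512, 512, 128, 128, 128, 128)
```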


Ready to see what’s beyond the edge? Let’s dive in!

How to Run Wan2.1 VACE Video Outpainting

Installation guide

Verified to work on ThinkDiffusion Build: June 3, 2025

ComfyUI v0.3.39, using the Wan 14B T2V and 14B VACE models

Note: We specify the build date because ComfyUI and custom node versions updated after this date may change the behavior or outputs of the workflow.

💡
Download the workflow and drag & drop it into your ComfyUI window, whether locally or on ThinkDiffusion. If you're using ThinkDiffusion, the minimum requirement is the Turbo 24GB machine, but we recommend the Ultra 48GB machine.

Custom Nodes

If there are red nodes in the workflow, it means the workflow is missing required custom nodes. Install them so the workflow can run:

  1. Go to the ComfyUI Manager > Click Install Missing Custom Nodes
  2. Check the list for any custom nodes that need to be installed, and click Install.

Models

For this guide you'll need to download these 4 recommended models.

1. wan2.1_t2v_14B_fp8_e4m3fn.safetensors
2. Wan2_1-VACE_module_14B_fp8_e4m3fn.safetensors
3. Wan2_1_VAE_bf16.safetensors
4. umt5-xxl-enc-bf16.safetensors
  1. Go to ComfyUI Manager > Click Model Manager
  2. Search for each model above; when you find the exact model you're looking for, click Install, and press Refresh when you're finished.

Optional Model Path Source

Some of these models may not be available in the model manager.

You can use the model path source instead: paste the model's link address into ThinkDiffusion MyFiles using the Upload URL option.

| Model Name | ThinkDiffusion Upload Directory |
|---|---|
| wan2.1_t2v_14B_fp8_e4m3fn.safetensors | .../comfyui/models/diffusion_models/ |
| Wan2_1-VACE_module_14B_fp8_e4m3fn.safetensors | .../comfyui/models/diffusion_models/ |
| Wan2_1_VAE_bf16.safetensors | .../comfyui/models/vae/ |
| umt5-xxl-enc-bf16.safetensors | .../comfyui/models/text_encoders/ |

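If you're placing files manually, the mapping between model files and ComfyUI model subfolders can be expressed as a small helper. This is a hypothetical convenience script, not part of the workflow; the `comfyui` root path is an assumption you should adjust to your own install.

```python
import os

# Which ComfyUI models/ subfolder each file from the table above belongs in.
MODEL_DIRS = {
    "wan2.1_t2v_14B_fp8_e4m3fn.safetensors": "diffusion_models",
    "Wan2_1-VACE_module_14B_fp8_e4m3fn.safetensors": "diffusion_models",
    "Wan2_1_VAE_bf16.safetensors": "vae",
    "umt5-xxl-enc-bf16.safetensors": "text_encoders",
}

def model_target_path(filename, comfyui_root="comfyui"):
    """Return where a downloaded model file should live under ComfyUI."""
    subdir = MODEL_DIRS[filename]
    return os.path.join(comfyui_root, "models", subdir, filename)
```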
Don't use the 1.3B base model and 1.3B VACE model because of their low quality. I tried these models a few weeks ago and they adhered poorly to the prompt. I suggest using the 14B models instead.

Step-by-step Workflow Guide

This workflow was pretty easy to set up and runs well from the default settings. Here are a few steps where you might want to take extra note.

1. Set the Models

Set the models as shown in the image. The 14B base model must be paired with the 14B VACE model only.
2. Load a Video

Load a video with resolution under 1024. For a 1:1 output, set the resize settings to 512x512. For 16:9, set the resize settings to 854x480 and enable keep proportion and crop center. Keep the frame count under 100.
3. Write a Prompt

Write a simple prompt describing what is shown in the video.
4. Check the Sampling

Check the sampling settings and match them to the image.
5. Check the Generated Output

This workflow accommodates square, horizontal, and vertical video formats. For square videos, I typically set the frame size to 512x512 pixels, disable the keep proportion option, and enable cropping so the output fits the intended dimensions. For horizontal or vertical videos, I use 854x480 or 480x854 pixels respectively, with both keep proportion and center crop enabled. This preserves the correct aspect ratio and composition regardless of the original video orientation.
I have also successfully fed generated outpainted videos back into the load video node. While this works well in practice, the output resolution increases significantly compared to the original, so I recommend post-processing the videos by resizing them to your preferred dimensions to keep the final result compatible with your project requirements.
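The keep-proportion + center-crop behavior described above can be sketched as pure math: scale the source so it covers the target box, then trim the overflow equally from both sides. This is a conceptual approximation; the actual resize node's rounding may differ.

```python
# Cover-style resize: scale to fill the target, then center-crop the excess.

def keep_proportion_center_crop(src_w, src_h, dst_w, dst_h):
    """Return (scaled_w, scaled_h, crop_x, crop_y) for a
    keep-proportion resize followed by a center crop."""
    scale = max(dst_w / src_w, dst_h / src_h)  # cover, not fit
    scaled_w = round(src_w * scale)
    scaled_h = round(src_h * scale)
    crop_x = (scaled_w - dst_w) // 2
    crop_y = (scaled_h - dst_h) // 2
    return scaled_w, scaled_h, crop_x, crop_y

# A square 1080x1080 source targeted at 854x480 scales to 854x854,
# then 187 px is cropped from the top and bottom.
print(keep_proportion_center_crop(1080, 1080, 854, 480))
```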

Examples

Four 4-second example clips of outpainted videos.


If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

Stay tuned for our upcoming Wan workflow tutorial that uses VACE!