
What is Style Transfer in AI?

Style transfer in AI is a way to take the look or style of one image and apply it to another. For example, imagine you have a photo of a city's skyline and a famous painting like Van Gogh's "Starry Night." With AI style transfer, you can change the photo so it looks like Van Gogh painted it, with his special swirls and colors. This technology mixes the details of your photo with the style of the painting to create something new and beautiful.
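
Under the hood, the classic version of this idea optimizes a new image so that its deep-network features match the content photo while its feature statistics match the style image. The workflow in this guide uses diffusion models with IPAdapter and ControlNet rather than this optimization loop, but as a rough, minimal sketch of the original technique (assuming `content` and `style` are image tensors already normalized for VGG input):

```python
# Rough, classic (Gatys-style) neural style transfer, for intuition only.
# The ComfyUI workflow in this guide uses diffusion + IPAdapter/ControlNet,
# not this optimization loop. Assumes (1, 3, H, W) VGG-normalized tensors.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = (1, 6, 11, 20)  # a few ReLU layers, shallow to deep

def features(img):
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
        if i == max(LAYERS):
            break
    return feats

def gram(f):
    # Feature correlations: a simple statistic that captures "style".
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_transfer(content, style, steps=200, style_weight=1e6):
    result = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=0.02)
    content_feats = [f.detach() for f in features(content)]
    style_grams = [gram(f).detach() for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        feats = features(result)
        c_loss = F.mse_loss(feats[-1], content_feats[-1])  # keep the photo's layout
        s_loss = sum(F.mse_loss(gram(f), g)                # match the painting's statistics
                     for f, g in zip(feats, style_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return result.detach()
```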

In this guide, we will explain some basic ideas behind this method and show you how to do it yourself in ComfyUI. We'll be transforming a dancing video of a human into a dancing noodle dish.

Big fat special shout out to the original creator of this concept, the talented James Gerde! Please check out his incredible work here.

The purpose of style transfer is to generate a new image that has both the semantic content of a content image and the style of a reference style image.

Why use ComfyUI?

  • User-Friendly Workflow Sharing: Download workflows with preset settings so you can get straight to work.
  • Creative Applications: Ideal for artists, designers and marketers who want to create unique visuals and engaging content.
  • Democratized Creativity: ComfyUI uses powerful open source AI, allowing anyone to create stunning, style-rich images and videos quickly.


One-Time Setup


Step 1: Load the ComfyUI workflow into ThinkDiffusion


Download the workflow and drag & drop it into your ComfyUI window, or use the 'Load' button, whether you're running locally or on ThinkDiffusion. If you're using ThinkDiffusion, choose the ComfyUI machine; the Turbo 24GB machine is the minimum requirement, but we do recommend the Ultra 48GB machine.
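
If you'd rather script the process than click through the browser UI, ComfyUI also exposes a small HTTP API on its local server. A minimal sketch, assuming a server on the default port 8188 and a workflow exported via 'Save (API Format)' (the filename here is hypothetical):

```python
# Queue an exported workflow against a running ComfyUI server.
# Assumes the default local address (127.0.0.1:8188) and a workflow saved
# with "Save (API Format)"; the filename here is hypothetical.
import json
import urllib.request

with open("dancing_noodles_workflow_api.json") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns a prompt_id on success
```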

Step 2: Install Custom Nodes

If there are red nodes in the workflow, it means the workflow is missing required custom nodes. Install them so the workflow can run.

  1. Go to ComfyUI Manager  > Click Install Missing Custom Nodes
  2. Check the list for any custom nodes that need to be installed and click Install.
πŸ’‘
If you prefer to install via Git URL, you can search for the node pack on Google, including "GitHub" in the query, to find its repository.
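
Installing from a Git URL boils down to cloning the repository into ComfyUI's custom_nodes folder and restarting. A sketch using Python's subprocess; the ComfyUI-Manager repository below is only an example target, so substitute the node pack you actually need:

```python
# Clone a custom node pack from its Git URL into ComfyUI's custom_nodes folder.
# The repository below is only an example; substitute the pack you need,
# and restart ComfyUI afterwards so the new nodes are registered.
import subprocess
from pathlib import Path

repo = "https://github.com/ltdrdata/ComfyUI-Manager"  # example URL
dest = Path("ComfyUI/custom_nodes") / repo.rstrip("/").split("/")[-1]
subprocess.run(["git", "clone", repo, str(dest)], check=True)
```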

Step 3: Install Models

Download the recommended models (see the list below) using the ComfyUI Manager's Install Models option. Refresh or restart the machine after the files have downloaded.

  1. Go to ComfyUI Manager  > Click Install Models
  2. When you find the exact model you're looking for, click Install, and make sure to press Refresh when you are finished.

Model Path Source

An easier way to install the models is to copy each model's link address (from Civitai or Hugging Face) and paste the URL into ThinkDiffusion MyFiles using the 'Upload' option. Use the 'Guide Table' below to find the correct directory for each model. The models used in this workflow are:

  • dreamshaper_8LCM.safetensors
  • vae-ft-mse-840000-ema-pruned.safetensors
  • Ghibli_v6.safetensors
  • add_detail.safetensors
  • Ip-adapter-plus_sd15.safetensors PLUS (High Strength)
  • aid-RUN-Motion_Lora.safetensors
  • AnimateLCM_sd15_t2v.ckpt
  • control_v1p_sd15_qrcode_monster.safetensors
  • control_v11p_sd15_lineart.pth
  • BiRefNet-DIS_ep580.pth
  • BiRefNet-ep480.pth
  • Swin_base_patch4_window12_384_22kto1k.pth
  • swin_large_patch4_window12_384_22kto1k.pth
  • 4x_NMKD-Siax_200k.pth

Guide Table for Upload

| Recommended Models | Node's Value Name | Node | ThinkDiffusion Upload File Directory |
| --- | --- | --- | --- |
| dreamshaper_8LCM.safetensors | ckpt_name | Efficient Loader | …/comfyui/models/checkpoints/ |
| vae-ft-mse-840000-ema-pruned.safetensors | vae_name | Efficient Loader | …/comfyui/models/vae/ |
| Ghibli_v6.safetensors | lora_name | Efficient Loader | …/comfyui/models/loras/ |
| add_detail.safetensors | lora_name_1 | LoRA Stacker | …/comfyui/models/loras/ |
| Ip-adapter-plus_sd15.safetensors PLUS (High Strength) | preset | IPAdapter Unified Loader | …/comfyui/models/ipadapter/ |
| aid-RUN-Motion_Lora.safetensors | name | Load AnimateDiff LoRA | …/comfyui/models/animatediff_motion_lora/ |
| AnimateLCM_sd15_t2v.ckpt | model_name | Load AnimateDiff Model | …/comfyui/models/animatediff_models/ |
| control_v1p_sd15_qrcode_monster.safetensors | control_net_name | Load Advanced ControlNet Model | …/comfyui/models/controlnet/ |
| control_v11p_sd15_lineart.pth | control_net_name | Load Advanced ControlNet Model | …/comfyui/models/controlnet/ |
| BiRefNet-DIS_ep580.pth, BiRefNet-ep480.pth, Swin_base_patch4_window12_384_22kto1k.pth, swin_large_patch4_window12_384_22kto1k.pth | device > auto | BiRefNet Segmentation | …/comfyui/models/BiRefNet/ |
| 4x_NMKD-Siax_200k.pth | model_name | Load Upscale Model | …/comfyui/models/upscale_models/ |
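
If you place the files manually, a quick sanity check before launching the workflow can catch misplaced models. A small sketch, assuming the …/comfyui paths in the table resolve to a local ComfyUI/models folder (adjust ROOT for your own setup):

```python
# Sanity-check that each model sits in the folder the Guide Table expects.
# ROOT is an assumption; point it at your own ComfyUI models folder.
from pathlib import Path

ROOT = Path("ComfyUI/models")
EXPECTED = {
    "checkpoints": ["dreamshaper_8LCM.safetensors"],
    "vae": ["vae-ft-mse-840000-ema-pruned.safetensors"],
    "loras": ["Ghibli_v6.safetensors", "add_detail.safetensors"],
    "ipadapter": ["Ip-adapter-plus_sd15.safetensors"],
    "animatediff_motion_lora": ["aid-RUN-Motion_Lora.safetensors"],
    "animatediff_models": ["AnimateLCM_sd15_t2v.ckpt"],
    "controlnet": ["control_v1p_sd15_qrcode_monster.safetensors",
                   "control_v11p_sd15_lineart.pth"],
    "BiRefNet": ["BiRefNet-DIS_ep580.pth", "BiRefNet-ep480.pth",
                 "Swin_base_patch4_window12_384_22kto1k.pth",
                 "swin_large_patch4_window12_384_22kto1k.pth"],
    "upscale_models": ["4x_NMKD-Siax_200k.pth"],
}
for folder, names in EXPECTED.items():
    for name in names:
        path = ROOT / folder / name
        print(("OK     " if path.exists() else "MISSING") + f" {path}")
```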

Tips

πŸ’‘
Refresh or restart the machine after uploading the files in ThinkDiffusion My Files.
πŸ’‘
If you get an error message that the CLIP Vision model CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors is not found in the /ComfyUI/models/clip_vision folder, you can download another model, like CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors; make sure to reselect the new model in the dropdown.
πŸ’‘
If you get an error message that the IPAdapter model is not found, you are likely missing the IPAdapter model. In the ComfyUI Manager menu, click Install Models, search for ip-adapter_sd15_vit-G.safetensors, and click Install.
πŸ’‘
You can upload models by copying the link address of the download button/icon on Civitai or Hugging Face and pasting that URL into the Upload section of ThinkDiffusion My Files.
If you prefer to upload from your Google Drive, follow the instructions here: UPLOAD HELP
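
Scripted, that tip is a plain file download into the folder the Guide Table maps the model to. A sketch; the Hugging Face URL below is an example, so copy the real link address from the model page's download button before running it:

```python
# Download a model file straight into the folder the Guide Table maps it to.
# The URL is an example; copy the real link address from the model page's
# download button before running this.
import urllib.request
from pathlib import Path

url = ("https://huggingface.co/stabilityai/sd-vae-ft-mse-original/"
       "resolve/main/vae-ft-mse-840000-ema-pruned.safetensors")
dest = Path("ComfyUI/models/vae") / url.split("/")[-1]
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, dest)
print(f"saved {dest}")
```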

Step 4: Run the workflow

Now that the hard work is out of the way, let's get creative. Follow the steps from top to bottom; once everything has been set up, the workflow is a one-click process.

πŸ’‘
You can use the Low VRAM/RAM workflow if you want a fast test of the style transfer on an image. Otherwise, go for the main workflow if you have a high-spec machine or are using Ultra mode.
| Steps | Description / Impact | Default / Recommended Values | Required Change |
| --- | --- | --- | --- |
| Load a dance video or dance movement | Upload a video that shows a dance style or body movement; the workflow creates a video mask from it. | Set your desired limit for frame_load_cap. Default is 0. | YES |
| Load 1 image for the background and 1 image for the foreground | The Load Image node loads an image: upload via the file dialog or by dropping an image onto the node, then select it inside the node. | The workflow needs 2 images: one for the background and one for the subject or character. | YES |
| Check the video settings for the dance mask | Sets the dimensions of the upscaled image before it is processed for generation. | Recommended upscale method is lanczos with crop at the center; otherwise disable crop for a wide-angle view of the output. | |
| Check the Efficient Loader | A collection of ComfyUI custom nodes that streamline workflows and reduce total node count. | Keep the recommended defaults for the checkpoint, VAE, and LoRA. When writing a prompt, describe the subject's appearance and how it should look while moving. | |
| Check the IPAdapter Plus settings and adjust the weight when necessary | IP-Adapter enhances text-to-image diffusion models with image prompts, addressing the shortfall that text alone often requires complex prompts to generate the desired image. | Adjust the weight as needed. | YES |
| Check the AnimateDiff settings and adjust the strength of the animation | These nodes enhance generation by integrating improved motion models. | The only value to adjust is the motion LoRA strength. | |
| Check and adjust the ControlNet strength while testing the prompt | ControlNet nodes provide further visual guidance to the diffusion model. This area is essential because it controls the appearance of your output. | Adjust the strength values while testing. | YES |
| Check the KSampler and latent image size | This is where your images are generated: the KSampler uses the model plus positive and negative conditioning to generate a new version of the given latent. | Set the latent size to your preferred size. | |
| Check the Video Combine node, which shows your video | Merges the frame sequence into a cohesive video file (see the sketch after this table). | | |
| (OPTIONAL: for the high RAM/VRAM workflow) Check the additional video-refinement settings | Settings for upscaling and video interpolation. | See the recommended values for the upscale model, steps, cfg, denoise, etc. | |
| (OPTIONAL: for the high RAM/VRAM workflow) Check the video combine preview | Preview nodes for the upscaled and interpolated video. | | |
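
For reference, the frames-to-video merge that Video Combine performs is conceptually the same as this minimal OpenCV sketch (not the node's actual implementation; the frames folder is hypothetical):

```python
# The idea behind a frames-to-video merge, shown with OpenCV.
# Not the Video Combine node's real code; the frames folder is hypothetical.
import glob
import cv2

frames = sorted(glob.glob("output_frames/*.png"))
h, w = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter("dance.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 24, (w, h))
for f in frames:
    writer.write(cv2.imread(f))
writer.release()
```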

Tips

πŸ’‘
Use the low RAM/VRAM workflow if your machine has low specs. Otherwise, use the high RAM/VRAM workflow, which additionally generates upscaled and interpolated videos.
πŸ’‘
For new ComfyUI users, check your input models (LoRA, checkpoint, ControlNet, AnimateDiff, and upscale model), and don't forget to update ComfyUI and all its nodes.
πŸ’‘
Don’t let my reference examples stop you from exploring. In fact, any object can dance, such as water, rocks, clothes, cars, particles, etc.
πŸ’‘
Use the QR Code ControlNet to control appearance and action, and use IPAdapter to control the content and style of the conversion.
πŸ’‘
The two workflows use the same base; the high-spec version only adds nodes for refining the video output.
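
If you iterate on those ControlNet strengths a lot, you can also batch-edit them outside the UI. A sketch, assuming a workflow exported in API format; node class and input names vary between node packs, so inspect your own JSON first:

```python
# Batch-adjust ControlNet strengths in an API-format workflow export.
# Class/input names vary between node packs, so inspect your own JSON first;
# the filename and the 0.4 value (the qrcode strength from the examples
# below) are assumptions.
import json

with open("workflow_api.json") as f:
    wf = json.load(f)

for node in wf.values():
    if "ControlNet" in node.get("class_type", "") and "strength" in node.get("inputs", {}):
        node["inputs"]["strength"] = 0.4

with open("workflow_api_tuned.json", "w") as f:
    json.dump(wf, f, indent=2)
```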

Examples

You can check the examples below together with their prompt settings.

Cactus Dance Settings

Prompt: Translucent cactus, glistening with spikes, spiraling outwards in an green color dance, cactus made of scary thorns, glowing with an inner azure light, surrounded by a faint smoke
Steps - 10, cfg 1.5, lcm, sgm_uniform
qrcode - 0.4 , lineart - 0.4

Ice Cream Dance Settings

Prompt: delicious ice cream, dripping with delicious treat, spiraling outwards in an chilling dance, ice cream made of sweet flavor, glowing with an inner azure light, surrounded by frozen effect
Steps - 10, cfg 1.55, lcm, sgm_uniform
qrcode - 0.4, lineart - 0.5

Fire Dance Settings

Prompt: burning flame, blazing with fire aura, spiraling outwards in an intricate dance, made of delicate flame burst, glowing with an inner fire and smoke, surrounded by a smoldering amber, background is snow landscape
Steps - 10, cfg 1.55, lcm, sgm_uniform
qrcode - 0.4, lineart - 0.4

Resources

Download the Input and Output Files Here

Transform Videos with AI - Google Drive

It contains images and videos for you to test in the workflow.


Frequently Asked Questions

How can we define Style Transfer in AI?

Style transfer creates a new image that preserves the key elements of the original photo while mimicking the artistic appearance of a second image. This technology is commonly used in digital art and photo/video editing to generate unique and striking visuals.

How to use ComfyUI with Civitai on Mac?

We recommend using ThinkDiffusion so you don't have to install locally on your Mac, but here are some quick steps for installing on a Mac computer.
To use ComfyUI with Civitai on a Mac, first install Python, clone the ComfyUI repository, and set up a virtual environment. Install dependencies using pip, download models from Civitai, and configure ComfyUI to recognize them. Launch ComfyUI, access it via your browser, and load the Civitai models. Upload your content, apply the desired style transfer, then save and export your final images or videos.

How to create a dancing noodles video with AI?

Follow this guide and you can create your own dancing noodle video with ThinkDiffusion. To produce a video of dancing noodles, begin by loading a picture of noodles as the subject and a ramen bowl as the background within the ComfyUI interface. Then employ the AnimateDiff, IPAdapter, and ControlNet nodes to imbue the noodles with dancing movements. Finally, assemble the animations in video editing software such as Adobe Premiere, which lets you add backgrounds, synchronize music, and fine-tune your result.

If you’re having issues with installation or slow hardware, you can try any of these workflows on a more powerful GPU in your browser with ThinkDiffusion.

If you enjoy ComfyUI and you want to test out creating awesome animations, then feel free to check out this AnimateDiff tutorial here. And have fun out there with your noodles!