This repository provides the official implementation of T2V-Turbo and T2V-Turbo-v2 from the following papers.
T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback
Jiachen Li, Weixi Feng, Tsu-Jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, William Yang Wang
Paper: https://arxiv.org/abs/2405.18750
Project Page: https://t2v-turbo.github.io/
T2V-Turbo-v2: Enhancing Video Model Post-Training through Data, Reward, and Conditional Guidance Design
Jiachen Li, Qian Long, Jian Zheng, Xiaofeng Gao, Robinson Piramuthu, Wenhu Chen, William Yang Wang
Paper: https://arxiv.org/abs/2410.05677
Project Page: https://t2v-turbo-v2.github.io/
[10.14.2024] Added Replicate Demo and API for T2V-Turbo-v2.
[10.09.2024] Release the training and inference codes for T2V-Turbo-v2.
[06.24.2024] Release the training codes for T2V-Turbo (VC2).
(Demo gallery: sample videos generated from prompts such as "light wind, feathers moving, she moves her gaze", "Pikachu snowboarding", "A raccoon is playing the electronic guitar", "a dog wearing vr goggles on a boat", and "fashion portrait shoot of a girl in colorful glasses, a breeze moves her hair".)
```
pip install accelerate transformers diffusers webdataset loralib peft pytorch_lightning open_clip_torch==2.24.0 hpsv2 image-reward wandb av einops packaging omegaconf opencv-python kornia moviepy imageio
pip install flash-attn --no-build-isolation
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install csrc/fused_dense_lib csrc/layer_norm
conda install xformers -c xformers
```
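After installation, a quick sanity check can confirm that the key dependencies resolve. This is a minimal sketch (the module names below mirror the pip command above; note that some pip package names differ from their import names):

```python
import importlib.util

def check_deps(names):
    """Return a dict mapping each module name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Module names for a few of the dependencies installed above.
status = check_deps(["torch", "diffusers", "transformers", "webdataset", "xformers"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```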
| Model | Resolution | Checkpoints |
|---|---|---|
| T2V-Turbo-v2 w/ MG | 320x512 | |
| T2V-Turbo-v2 w/o MG | 320x512 | |
| T2V-Turbo (VC2) | 320x512 | |
| T2V-Turbo (MS) | 256x256 | |
We provide local demo code built with Gradio. (MacOS users need to set `device = "mps"` in app.py; Intel GPU users need to set `device = "xpu"` in app.py.) First install Gradio:

```
pip install gradio==3.48.0
```

Then download the model checkpoint of VideoCrafter2.
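The device note above can be expressed as a small helper. This is an illustrative sketch only (app.py itself simply hardcodes the `device` string); the availability flags are passed in explicitly so the selection logic is clear:

```python
def pick_device(cuda_available: bool, mps_available: bool, xpu_available: bool) -> str:
    """Choose the torch device string per the README's guidance:
    CUDA by default, "mps" on MacOS, "xpu" on Intel GPUs, else CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:  # MacOS users: device = "mps"
        return "mps"
    if xpu_available:  # Intel GPU users: device = "xpu"
        return "xpu"
    return "cpu"

print(pick_device(False, True, False))  # e.g. a MacOS machine without CUDA -> "mps"
```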
To play with our T2V-Turbo-v2:

- Download the `unet_mg.pt` of our T2V-Turbo-v2.
- Launch the gradio demo with the following command:

```
python app.py \
  --unet_dir PATH_TO_UNET_MG.pt \
  --base_model_dir PATH_TO_VideoCrafter2_MODEL_CKPT \
  --version v2 \
  --motion_gs 0.0
```
We also provide a UNet trained without augmenting the teacher ODE solver with guidance. To play with it, please follow the steps below:

- Download the `unet_no_mg.pt` of our T2V-Turbo-v2.
- Launch the gradio demo with the following command:

```
python app.py \
  --unet_dir PATH_TO_UNET_NO_MG.pt \
  --base_model_dir PATH_TO_VideoCrafter2_MODEL_CKPT \
  --version v2 \
  --motion_gs 0.0
```
To play with our T2V-Turbo (VC2), please follow the steps below:

- Download the `unet_lora.pt` of our T2V-Turbo (VC2) here.
- Launch the gradio demo with the following command:

```
python app.py \
  --unet_dir PATH_TO_UNET_LORA.pt \
  --base_model_dir PATH_TO_VideoCrafter2_MODEL_CKPT \
  --version v1
```
To play with our T2V-Turbo (MS), please follow the steps below:

- Download the `unet_lora.pt` of our T2V-Turbo (MS) here.
- Launch the gradio demo with the following command:

```
python app_ms.py --unet_dir PATH_TO_UNET_LORA.pt
```
To train T2V-Turbo-v2, run the following command:

```
bash train_t2v_turbo_v2.sh
```
To train T2V-Turbo (VC2), first prepare the data and model as below:

- Download the model checkpoint of VideoCrafter2 here.
- Prepare the WebVid-10M data and save it in the `webdataset` format.
- Download the InternVid2 S2 Model.
- Set `--pretrained_model_path`, `--train_shards_path_or_url`, and `--video_rm_ckpt_dir` accordingly in `train_t2v_turbo_vc2.sh`.

Then run the following command:

```
bash train_t2v_turbo_v1.sh
```
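Regarding the `webdataset` format mentioned above: WebDataset shards are plain tar archives in which each sample's files share a common key (e.g. `000001.mp4` plus `000001.txt` for its caption). A minimal sketch of that layout using only the standard library (the shard name, keys, and caption are illustrative, not the actual WebVid-10M preprocessing):

```python
import io
import tarfile

def write_shard(path, samples):
    """Write (key, video_bytes, caption) triples as a webdataset-style tar shard."""
    with tarfile.open(path, "w") as tar:
        for key, video_bytes, caption in samples:
            for suffix, data in ((".mp4", video_bytes), (".txt", caption.encode())):
                info = tarfile.TarInfo(name=key + suffix)
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))

# One illustrative sample: a placeholder clip and its caption.
write_shard("shard-000000.tar", [("000001", b"<mp4 bytes>", "a cat wearing sunglasses at a pool")])
with tarfile.open("shard-000000.tar") as tar:
    print(sorted(tar.getnames()))  # ['000001.mp4', '000001.txt']
```

The training script then streams such shards via the `--train_shards_path_or_url` flag.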