Code for the ICMR 2024 paper "CoDancers: Music-Driven Coherent Group Dance Generation with Choreographic Unit"
To set up the necessary environment for running the project, follow these steps:
- Create a new conda environment:

  ```bash
  conda create -n CoD_env python=3.8
  conda activate CoD_env
  ```
- Install PyTorch and dependencies:

  ```bash
  conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
  conda install --file requirements.txt
  ```
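After installation, a minimal sanity check (not part of the repository) can confirm that PyTorch and CUDA are visible from the new environment:

```python
# Minimal sanity check: confirm PyTorch is installed and CUDA is visible.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```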
Directly download our preprocessed features from here into the `./data` folder.
To test with our pretrained models, please download the weights from here (Google Drive) and place them into the `./experiments` folder.
After downloading the corresponding data and weights, please move the relevant files to their respective directories.
The file directory structure is as follows:
```
|-- configs
|-- data
|   |-- aistpp_music_librosa_3.75fps
|   |-- aistpp_music_mert_3.75fps
|   |-- aistpp_test_full_wav
|   |-- aist_features_zero_start_test
|   |-- group_kinetic_features
|   |-- kinetic_features
|   |-- People_Num
|-- dataset
|-- experiments
|   |-- cc_motion_gpt
|   |   |-- ckpt
|   |-- sep_vqvae
|   |   |-- ckpt
|-- models
|-- utils
|   |-- querybank
|   |-- features
```
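Once the files are in place, a quick check like the following (a hypothetical helper, not included in the repository) can confirm that the expected folders exist:

```python
# Hypothetical helper (not part of the repo): verify that the expected
# data and checkpoint folders exist before running the test scripts.
import os

required = [
    "data/aistpp_music_librosa_3.75fps",
    "data/aistpp_music_mert_3.75fps",
    "data/aistpp_test_full_wav",
    "experiments/cc_motion_gpt/ckpt",
    "experiments/sep_vqvae/ckpt",
]
missing = [p for p in required if not os.path.isdir(p)]
print("All folders in place." if not missing else f"Missing: {missing}")
```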
Coming soon...
To test the VQ-VAE, use the following command:
```bash
python -u main.py --config configs/sep_vqvae.yaml --eval
```
To test GPT, use the following command:
```bash
python -u main_gpt_all.py --config configs/cc_motion_gpt.yaml --eval
```
After generating the dances in the step above, run the following evaluation scripts.
To evaluate the VQ-VAE, use the following command:
```bash
python ./utils/vqvae_metrics.py
```
To evaluate the GPT, use the following command:
```bash
python ./utils/gpt_metrics.py
```
Due to the lengthy computation time, this repository does not provide demonstrations for calculating the Trajectory Intersection Frequency (TIF) metric or for performing Inverse Kinematics. You can refer to vedo and Pose to SMPL for further information.
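As a rough illustration only, a TIF-style measure can be approximated by counting frames in which any two dancers' root joints come closer than a collision radius. The sketch below follows that idea; the `radius` threshold and the per-frame counting are assumptions made for illustration, not the paper's official implementation:

```python
# Rough sketch of a TIF-style measure (assumed definition, not the
# official implementation): fraction of frames in which any pair of
# dancers' root positions are closer than a collision radius.
import numpy as np

def tif(roots: np.ndarray, radius: float = 0.2) -> float:
    """roots: (num_dancers, num_frames, 3) root-joint trajectories."""
    n, t, _ = roots.shape
    hits = 0
    for f in range(t):
        # Pairwise distances between all dancers at frame f.
        diff = roots[:, f, None, :] - roots[None, :, f, :]
        dist = np.linalg.norm(diff, axis=-1)
        # Any pair (i < j) closer than the radius counts as an intersection.
        if (dist[np.triu_indices(n, k=1)] < radius).any():
            hits += 1
    return hits / t

# Example: 3 dancers, 100 frames of random root trajectories.
print(tif(np.random.randn(3, 100, 3)))
```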
If you have any questions, don't hesitate to submit an issue or contact me.
Our code is based on Bailando, and some of the data is provided by AIOZ-GDANCE. We sincerely appreciate their contributions.
```bibtex
@inproceedings{yang2024codancers,
  title={CoDancers: Music-Driven Coherent Group Dance Generation with Choreographic Unit},
  author={Yang, Kaixing and Tang, Xulong and Diao, Ran and Liu, Hongyan and He, Jun and Fan, Zhaoxin},
  booktitle={Proceedings of the 2024 International Conference on Multimedia Retrieval},
  pages={675--683},
  year={2024}
}
```