CVPR 2024

Xinshun Wang* · Zhongbin Fang* · Xia Li · Xiangtai Li · Chen Chen · Mengyuan Liu✉
This is the official PyTorch implementation of the paper "Skeleton-in-Context: Unified Skeleton Sequence Modeling with In-Context Learning" (CVPR 2024).
- [Apr 23, 2024] Code is released.
- [Feb 27, 2024] Paper is accepted by CVPR 2024!
- [Dec 07, 2023] Paper is released and GitHub repo is created.
conda create -n skeleton_in_context python=3.7 anaconda
conda activate skeleton_in_context
pip install -r requirements.txt
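To sanity-check the installation, you can run a quick import test (a minimal check; it only assumes that PyTorch is installed via requirements.txt):

```python
# Quick sanity check: verify that PyTorch imports and report CUDA availability.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```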
There are two ways to prepare the data.

Option 1: Download the ready-to-use data here and unzip the files into data/. After you do so, the data/ directory should look like this:
data/
│
├── 3DPW_MC/
│   ├── train/
│   └── test/
│
├── AMASS/
│   ├── train/
│   └── test/
│
├── H36M/
│   ├── train/
│   └── test/
│
├── H36M_FPE/
│   ├── train/
│   └── test/
│
├── source_data/
│   └── H36M.pkl
│
└── support_data/
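To confirm that everything was unzipped into the right place, a small script like the following can check the layout (a hypothetical helper, not part of this repo; the paths are taken from the tree above):

```python
# Verify that the expected dataset directories and files exist under data/.
import os

expected = [
    "data/3DPW_MC/train", "data/3DPW_MC/test",
    "data/AMASS/train", "data/AMASS/test",
    "data/H36M/train", "data/H36M/test",
    "data/H36M_FPE/train", "data/H36M_FPE/test",
    "data/source_data/H36M.pkl",
    "data/support_data",
]
for path in expected:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:8s} {path}")
```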
Now you are ready to train and evaluate Skeleton-in-Context.
Option 2: Generate the data from the source datasets.

Human3.6M:
Download MotionBERT's Human3.6M data here, unzip it into data/source_data/, and rename it H36M.pkl. Please refer to MotionBERT for how the Human3.6M data are processed.
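To check that the file is readable, you can open it with pickle (a minimal inspection; the structure of the pickle is defined by MotionBERT's preprocessing, so the printed keys are whatever that pipeline produced):

```python
# Load the MotionBERT-processed Human3.6M pickle and peek at its top-level structure.
import pickle

with open("data/source_data/H36M.pkl", "rb") as f:
    data = pickle.load(f)

print(type(data))
if isinstance(data, dict):
    print("top-level keys:", list(data.keys()))
```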
AMASS:
Download AMASS data here. The AMASS data directory should look like this:
data/source_data/AMASS/
├── ACCAD/
├── BioMotionLab_NTroje/
├── CMU/
├── EKUT/
├── Eyes_Japan_Dataset/
├── KIT/
├── MPI_Limits/
├── TCD_handMocap/
└── TotalCapture/
3DPW:
Download 3DPW data here. The 3DPW data directory should look like this:
data/source_data/PW3D/
└── sequenceFiles/
├── test/
├── train/
└── validation/
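Before pre-processing, it is worth checking that the AMASS subsets and 3DPW sequence files are where the conversion scripts expect them (a hypothetical helper; the paths are taken from the two trees above):

```python
# Check that the AMASS subsets and the 3DPW splits exist under data/source_data/.
import os

amass_subsets = [
    "ACCAD", "BioMotionLab_NTroje", "CMU", "EKUT", "Eyes_Japan_Dataset",
    "KIT", "MPI_Limits", "TCD_handMocap", "TotalCapture",
]
for subset in amass_subsets:
    path = os.path.join("data/source_data/AMASS", subset)
    print("ok" if os.path.isdir(path) else "MISSING", path)

for split in ["train", "test", "validation"]:
    path = os.path.join("data/source_data/PW3D/sequenceFiles", split)
    print("ok" if os.path.isdir(path) else "MISSING", path)
```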
Pre-process:
Pre-process the data by running the following scripts:
python data_gen/convert_h36m_PE.py
python data_gen/convert_h36m_FPE.py
python data_gen/convert_amass_MP.py
python data_gen/convert_3dpw_MC.py
python data_gen/calculate_avg_pose.py
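If you prefer to run all five steps from one place, a simple wrapper like this works (a convenience sketch; the script names are exactly those listed above, and each is assumed to be runnable from the repo root):

```python
# Run all data-generation scripts in order, stopping on the first failure.
import subprocess
import sys

scripts = [
    "data_gen/convert_h36m_PE.py",
    "data_gen/convert_h36m_FPE.py",
    "data_gen/convert_amass_MP.py",
    "data_gen/convert_3dpw_MC.py",
    "data_gen/calculate_avg_pose.py",
]
for script in scripts:
    print(f"Running {script} ...")
    subprocess.run([sys.executable, script], check=True)  # raises on non-zero exit
```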
Now you are ready to train and evaluate Skeleton-in-Context.
To train Skeleton-in-Context, run the following command:
CUDA_VISIBLE_DEVICES=<GPU> python train.py --config configs/default.yaml --checkpoint ckpt/[YOUR_EXP_NAME]
To evaluate Skeleton-in-Context, run the following command:
CUDA_VISIBLE_DEVICES=<GPU> python train.py --config configs/default.yaml --evaluate ckpt/[YOUR_EXP_NAME]/[YOUR_CKPT]
For example:
CUDA_VISIBLE_DEVICES=<GPU> python train.py --config configs/default.yaml --evaluate ckpt/pretrained/latest_epoch.bin
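To verify that a checkpoint file loads correctly before starting a long evaluation run, you can inspect it with torch.load (a minimal check; the contents of the checkpoint dict are an assumption and depend on how train.py saves it):

```python
# Load a checkpoint on CPU and print its top-level keys.
import torch

ckpt = torch.load("ckpt/pretrained/latest_epoch.bin", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
```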
This project is released under the MIT License.
If you find our work useful in your research, please consider citing:
@article{wang2023skeleton,
  title={Skeleton-in-Context: Unified Skeleton Sequence Modeling with In-Context Learning},
  author={Wang, Xinshun and Fang, Zhongbin and Li, Xia and Li, Xiangtai and Chen, Chen and Liu, Mengyuan},
  journal={arXiv preprint arXiv:2312.03703},
  year={2023}
}
This work is inspired by Point-In-Context, and our code is built upon MotionBERT. We pay tribute to these excellent works, with special thanks to siMLPe, EqMotion, STCFormer, and GLA-GCN.