Nikos Athanasiou* · Mathis Petrovich* · Michael J. Black · Gül Varol
You can download a version of the SINC synthetic data here! There is also a demo to explore the model on Hugging Face 🤗!
This implementation includes:
- Instructions on how to prepare the datasets used in the experiments.
- The training code:
  - for the SINC method,
  - for the baselines,
  - for the ablations done in the paper.
- A standalone script to automatically compose different motions from AMASS and create synthetic data from existing motions.
Details
SINC has been implemented and tested on Ubuntu 20.04 with Python >= 3.10. Clone the repo:
git clone https://github.com/athn-nik/sinc.git
Then, install DistilBERT:
cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..
Install the requirements using virtualenv:
# pip
source scripts/install.sh
You can do something equivalent with conda as well.
‼️ ⚠️ You can directly download the data from this link and use them!
Details
Download the data from the AMASS website. Then, run this command to extract the AMASS sequences that are annotated in BABEL:
python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/default_is_/babel/babel-smplh-30fps-male --use-betas --gender male
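The extraction step above resamples AMASS sequences (recorded at various frame rates) to 30 fps. As a hedged illustration of such a step, here is a minimal sketch of selecting the source-frame indices to keep; the function name and exact rounding are assumptions, not the repo's actual code.

```python
# Hypothetical sketch of the 30 fps subsampling performed during AMASS
# processing: pick source-frame indices so a sequence recorded at
# `src_fps` is subsampled to the target frame rate.

def resample_indices(n_frames: int, src_fps: float, tgt_fps: float = 30.0) -> list[int]:
    """Return the source-frame indices kept after subsampling to tgt_fps."""
    if tgt_fps >= src_fps:
        return list(range(n_frames))  # never upsample in this sketch
    step = src_fps / tgt_fps
    return [round(i * step) for i in range(int(n_frames / step))]

# A 120 fps clip of 120 frames keeps every 4th frame -> 30 frames.
print(resample_indices(120, 120.0)[:5])  # [0, 4, 8, 12, 16]
```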
Download the data from the TEACH website after signing in. The data SINC was trained on is a processed version of BABEL; hence, we provide it directly via our website, where you will also find more relevant details. Finally, download the male SMPLH body model from the SMPLX website, specifically the AMASS version of the SMPLH model. Then, follow the instructions here to extract the SMPLH model in pickle format.
Then run this script (changing the paths inside it accordingly) to extract the different BABEL splits from AMASS:
python scripts/amass_splits_babel.py
Then create a directory named data
and put the BABEL data and the processed AMASS data in it.
You should end up with a data folder with the following structure:
data
|-- amass
| `-- your-processed-amass-data
|
|-- babel
| `-- babel-teach
| `...
| `-- babel-smplh-30fps-male
| `...
|
|-- smpl_models
| `-- smplh
| `-- SMPLH_MALE.pkl
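Before launching anything, it can help to verify the layout above is in place. This small helper is not part of the repo; it simply checks that the expected sub-paths from the tree above exist.

```python
# Minimal helper (not part of the repo) to sanity-check that the `data`
# folder matches the layout described above before launching training.
from pathlib import Path

EXPECTED = [
    "amass",
    "babel/babel-teach",
    "babel/babel-smplh-30fps-male",
    "smpl_models/smplh/SMPLH_MALE.pkl",
]

def check_data_layout(root: str) -> list[str]:
    """Return the expected sub-paths that are missing under `root`."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]

missing = check_data_layout("data")
if missing:
    print("Missing:", missing)
```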
Be careful not to push any data! You should softlink your data inside this repo:
ln -s /path/to/data
You can do the same for your experiments:
ln -s /path/to/logs experiments
Then you can use this directory for your experiments.
To start training, after activating your environment, run:
python train.py experiment=baseline logger=none
Explore configs/train.yaml
to change basic settings, such as where your output is stored or which
data to use if you want to run a small experiment on a subset of the data.
You can disable the text augmentations using single_text_desc: false
in the
model configuration file. Check train.yaml
for the main configuration;
this file will point you to the rest of the configs (e.g., model
refers to a config found in
the folder configs/model
, etc.).
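The `key=value` options in the training commands above are Hydra-style overrides of the nested YAML configuration. As a hedged illustration (not the repo's actual machinery, which relies on Hydra), here is how dotted overrides such as `data.proportion_synthetic=0.5` map onto a nested dict:

```python
# Toy sketch of Hydra-style dotted overrides; illustration only.

def apply_overrides(cfg: dict, overrides: list[str]) -> dict:
    """Apply `a.b.c=value` strings onto a nested dict, creating nodes as needed."""
    for ov in overrides:
        keys, value = ov.split("=", 1)
        node = cfg
        *parents, leaf = keys.split(".")
        for k in parents:
            node = node.setdefault(k, {})
        node[leaf] = value
    return cfg

cfg = apply_overrides({}, ["data.synthetic=true", "model.optim.lr=1e-4"])
print(cfg)  # {'data': {'synthetic': 'true'}, 'model': {'optim': {'lr': '1e-4'}}}
```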
Prior to running this code for MLD, please create and activate an environment according to their repo: complete steps 1 (Conda Environment)
and 2 (Dependencies)
from their instructions.
python train.py experiment=some_name run_id=mld-synth0.5-4gpu model=mld data.synthetic=true data.proportion_synthetic=0.5 data.dtype=seg+seq+spatial_pairs machine.batch_size=16 model.optim.lr=1e-4 logger=wandb sampler.max_len=150
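In the command above, `data.proportion_synthetic=0.5` controls how much of the training data comes from the synthetic compositions. A hedged sketch of what such a mixing ratio could mean at the batch level (names and sampling scheme are hypothetical, not the repo's data loader):

```python
# Hypothetical illustration of mixing real and synthetic samples in a
# batch according to a `proportion_synthetic` ratio.
import random

def mix_batch(real, synthetic, batch_size, proportion_synthetic, rng=random):
    """Draw a batch with roughly the requested fraction of synthetic samples."""
    n_synth = round(batch_size * proportion_synthetic)
    batch = rng.sample(synthetic, n_synth) + rng.sample(real, batch_size - n_synth)
    rng.shuffle(batch)
    return batch

rng = random.Random(0)
batch = mix_batch(list(range(100)), list(range(100, 200)), 16, 0.5, rng)
# With proportion 0.5 and batch size 16, 8 samples are synthetic.
```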
Details
Given that you have downloaded and processed the data, you can create spatial compositions from ground-truth motions of the BABEL subset of AMASS using a standalone script:
python compose_motions.py
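The core idea behind spatial composition is combining two motions by body part: some joints follow one action while the rest follow the other. The sketch below illustrates this on toy per-frame, per-joint data; the joint grouping and data layout are assumptions for illustration, not SINC's actual code.

```python
# Toy illustration of spatial composition: take the joints listed in
# `joints_from_b` from motion B and keep the rest from motion A.

def compose(motion_a, motion_b, joints_from_b):
    """motion_*: list of frames, each a list of per-joint rotations."""
    n_frames = min(len(motion_a), len(motion_b))
    composed = []
    for t in range(n_frames):
        frame = list(motion_a[t])      # start from motion A's pose
        for j in joints_from_b:        # overwrite selected joints from B
            frame[j] = motion_b[t][j]
        composed.append(frame)
    return composed

# Toy example: 2 frames, 4 joints; take joints 2 and 3 from motion B.
a = [["a0", "a1", "a2", "a3"]] * 2
b = [["b0", "b1", "b2", "b3"]] * 2
print(compose(a, b, [2, 3])[0])  # ['a0', 'a1', 'b2', 'b3']
```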
Details
After training, to sample from and evaluate a model stored in a folder /path/to/experiment, run:
python sample.py folder=/path/to/experiment/ ckpt_name=699 set=small
python eval.py folder=/path/to/experiment/ ckpt_name=699 set=small
- You can change the jointstype for the sampling script to output and save rotations and translations by setting jointstype=rots.
- By setting set=full you will obtain the results on the full BABEL validation set.
You can calculate the TEMOS score using:
python sample_eval_latent.py folder=/path/to/experiment/ ckpt_name=699 set=small
or, for a model trained using MLD:
python mld_temos.py folder=/path/to/experiment ckpt_name=399 set=small
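The scripts above compare generations in a pretrained model's latent space. As a hedged, stdlib-only illustration of a latent-space similarity (the actual metric and embeddings are defined by the scripts above, not by this snippet), here is a plain cosine similarity between two embedding vectors:

```python
# Cosine similarity between two embedding vectors; illustration only.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```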
@inproceedings{SINC:ICCV:2023,
title={{SINC}: Spatial Composition of {3D} Human Motions for Simultaneous Action Generation},
author={Athanasiou, Nikos and Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l},
booktitle = {ICCV},
year = {2023}
}
This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.
Many parts of this code are based on the official implementation of TEMOS.
This code repository was implemented by Nikos Athanasiou and Mathis Petrovich.
Give a ⭐ if you like it!
For commercial licensing (and all related questions for business applications), please contact [email protected].