- Python >= 3.8
- PyTorch >= 1.9
- TensorBoard
- matplotlib
- tqdm
- argparse
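A minimal install sketch with pip (version pins follow the list above; `argparse` ships with the Python standard library and needs no separate install):

```bash
# Install the dependencies listed above
pip install "torch>=1.9" tensorboard matplotlib tqdm
```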
We adapt the data preprocessing from GSPS.
- We follow the data preprocessing steps described in DATASETS.md of the VideoPose3D repo.
- Given the processed dataset, we further compute the multi-modal futures for each motion sequence. All required data can be downloaded from Google Drive; place all datasets in the `data` folder in the root of this repo.
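After downloading, the repo is assumed to look roughly like the sketch below (the entries inside `data` are placeholders; keep the file names from the Google Drive archive as-is):

```
<repo root>/
└── data/
    ├── <processed Human3.6M files>
    ├── <processed HumanEva-I files>
    └── <precomputed multi-modal futures>
```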
We use the following commands to train the network on Human3.6M or HumanEva-I with the skeleton representation:

```
python train_nf.py --cfg [h36m/humaneva]
python main.py --cfg [h36m/humaneva]
```
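For instance, a full Human3.6M training run would look like the sketch below (assuming the two commands are executed in the order listed):

```bash
# Human3.6M example: run both training commands in the order given above
python train_nf.py --cfg h36m
python main.py --cfg h36m
```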
To test a pretrained model, we use the following command:

```
python main.py --cfg [h36m/humaneva] --mode test --iter 500
```
To visualize results from a pretrained model, we use the following command:

```
python main.py --cfg [h36m/humaneva] --mode viz --iter 500
```
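Putting the two together, evaluating and then visualizing the Human3.6M checkpoint from iteration 500 would look like this sketch:

```bash
# Evaluate, then visualize, the Human3.6M checkpoint saved at iteration 500
python main.py --cfg h36m --mode test --iter 500
python main.py --cfg h36m --mode viz --iter 500
```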
This code is based on the implementation of GSPS.