video2bvh

video2bvh extracts human motion from video and saves it as a bvh mocap file.

[demo animation]

Introduction

video2bvh consists of three modules: pose_estimator_2d, pose_estimator_3d, and bvh_skeleton.

  • pose_estimator_2d: Since the 3D pose estimation models we use are two-stage models (image → 2D pose → 3D pose), this module estimates the 2D human pose (2D joint keypoint positions) from each image. We chose OpenPose as the 2D estimator because it detects 2D joint keypoints accurately at real-time speed.
  • pose_estimator_3d: We provide two models for estimating the 3D human pose.
    • 3d-pose-baseline: Proposed by Julieta Martinez, Rayat Hossain, Javier Romero, and James J. Little in ICCV 2017 [PAPER][CODE]. It takes the 2D pose of a single frame as input. The original implementation is based on TensorFlow; we reimplemented it in PyTorch.
    • VideoPose3D: Proposed by Dario Pavllo, Christoph Feichtenhofer, David Grangier, and Michael Auli in CVPR 2019 [PAPER][CODE]. It takes a 2D pose sequence as input. We slightly modified the original implementation.
  • bvh_skeleton: This module contains the functions that estimate skeleton information from the 3D pose, convert 3D joint positions to joint rotation angles, and write the motion data to a bvh file. A sketch of how the three modules compose is shown after this list.
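
The three modules compose into a single video → bvh pipeline. The sketch below shows the intended dataflow; the method names (estimate on both estimators, poses2bvh on the skeleton) are assumptions for illustration, so check demo.ipynb for the actual API.

```python
# Rough sketch of the video -> bvh dataflow. The estimator_2d.estimate,
# estimator_3d.estimate, and skeleton.poses2bvh calls are assumed names
# for illustration; demo.ipynb shows the real API.
import cv2

def video_to_bvh(video_path, bvh_path, estimator_2d, estimator_3d, skeleton):
    # Step 1: run the 2D estimator (e.g. OpenPose) frame by frame.
    poses_2d = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        poses_2d.append(estimator_2d.estimate(frame))  # (joints, 2) keypoints
    cap.release()

    # Step 2: lift the 2D keypoint sequence to 3D joint positions.
    poses_3d = estimator_3d.estimate(poses_2d)  # (frames, joints, 3)

    # Step 3: fit a skeleton, convert positions to joint angles, write bvh.
    skeleton.poses2bvh(poses_3d, output_file=bvh_path)
```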

Dependencies

Pre-trained models

The original models provided by 3d-pose-baseline and VideoPose3D use the Human3.6M 17-joint skeleton as their input format (see bvh_skeleton/h36m_skeleton.py), but OpenPose's detection results use a 25-joint format (see OpenPose's output.md). So we retrained these models from scratch on 2D poses estimated by OpenPose on the Human3.6M dataset.
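
For a sense of the mismatch, the two joint layouts are summarized below. The orderings are illustrative, not authoritative; bvh_skeleton/h36m_skeleton.py and OpenPose's output.md define the exact formats.

```python
# Illustrative joint layouts (summarized from the formats named above;
# consult bvh_skeleton/h36m_skeleton.py and OpenPose's output.md for the
# authoritative orderings).
H36M_17 = [
    'Hip', 'RightHip', 'RightKnee', 'RightAnkle',
    'LeftHip', 'LeftKnee', 'LeftAnkle',
    'Spine', 'Thorax', 'Neck', 'Head',
    'LeftShoulder', 'LeftElbow', 'LeftWrist',
    'RightShoulder', 'RightElbow', 'RightWrist',
]

# OpenPose BODY_25 adds eyes, ears, and foot keypoints but has no spine or
# head joint, so the two formats are not simple subsets of each other;
# hence the 3D models were retrained on OpenPose input instead of remapped.
OPENPOSE_BODY_25 = [
    'Nose', 'Neck',
    'RShoulder', 'RElbow', 'RWrist', 'LShoulder', 'LElbow', 'LWrist',
    'MidHip', 'RHip', 'RKnee', 'RAnkle', 'LHip', 'LKnee', 'LAnkle',
    'REye', 'LEye', 'REar', 'LEar',
    'LBigToe', 'LSmallToe', 'LHeel', 'RBigToe', 'RSmallToe', 'RHeel',
]
```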

The training procedure is almost the same as in the original implementations. We use subjects S1, S5, S6, S7, and S8 as the training set, and S9 and S11 as the test set. For 3d-pose-baseline the best MPJPE is 64.12 mm (Protocol #1), and for VideoPose3D the best MPJPE is 58.58 mm (Protocol #1). The pre-trained models can be downloaded from the following links.
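
For reference, MPJPE under Protocol #1 is the mean Euclidean distance between predicted and ground-truth joints after both poses are centered on the root joint. A minimal NumPy version:

```python
import numpy as np

def mpjpe(pred, gt, root=0):
    """Mean per-joint position error (Protocol #1).

    pred, gt: (frames, joints, 3) arrays of 3D joint positions in mm.
    Both poses are root-centered before comparison.
    """
    pred = pred - pred[:, root:root + 1]  # subtract root joint position
    gt = gt - gt[:, root:root + 1]
    return np.linalg.norm(pred - gt, axis=-1).mean()
```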

After you download the models folder, place it (or a link to it) under the root directory of this project.

Quick Start

Open demo.ipynb in Jupyter Notebook and follow the instructions. As demo.ipynb shows, video2bvh converts a video to a bvh file in three main steps:

1. Estimate 2D pose from video

2. Estimate 3D pose from 2D pose

3. Convert 3D pose to bvh motion capture file
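
The file produced in step 3 uses the standard BVH layout: a HIERARCHY section defining the joint tree with per-joint offsets and rotation channels, followed by a MOTION section holding one line of channel values per frame. The hand-written toy skeleton below only illustrates the format; it is not the skeleton that bvh_skeleton emits.

```python
# A toy two-joint bvh file, written out to show the format produced in
# step 3 (illustrative only; not the skeleton bvh_skeleton generates).
MINIMAL_BVH = """\
HIERARCHY
ROOT Hip
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 10.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 5.0 0.0 0.0
"""

with open('minimal.bvh', 'w') as f:
    f.write(MINIMAL_BVH)
```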

Retargeting

Once you have the bvh file, you can easily retarget the motion to other 3D character models with existing tools. The girl model we used was created with MakeHuman, and the demo was rendered with Blender. The MakeWalk plugin helped us do the retargeting.

TODO

  • Add more 2D estimators, such as HRNet and PoseResNet.
  • Smooth the 2D and 3D poses.
  • Build a real-time demo.
