
# Low-resolution Human Pose Estimation

## Introduction

This is the official PyTorch implementation of *Low-resolution Human Pose Estimation*.

This work bridges the learning gap between the heatmap and the offset field, which is especially pronounced in low-resolution human pose estimation.

*Figure: illustrating the principle of the proposed CAL.*
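
For intuition, here is a minimal sketch of how offset-based decoding combines the two outputs the introduction refers to: a coarse argmax from the heatmap plus a sub-pixel correction from the offset field, as in UDP-style methods. The tensor layouts and `stride` are illustrative assumptions, not the paper's exact CAL decoder.

```python
import numpy as np

def decode_keypoints(heatmaps, offsets, stride=4):
    """heatmaps: (K, H, W); offsets: (2K, H, W), channels 2k/2k+1 holding
    (dx, dy) for joint k. Returns (K, 2) coordinates in input pixels."""
    K, H, W = heatmaps.shape
    coords = np.zeros((K, 2), dtype=np.float32)
    for k in range(K):
        # Coarse location: argmax of the k-th heatmap.
        y, x = np.unravel_index(np.argmax(heatmaps[k]), (H, W))
        # Sub-pixel refinement: add the offset predicted at that cell,
        # then map back to input resolution via the network stride.
        dx = offsets[2 * k, y, x]
        dy = offsets[2 * k + 1, y, x]
        coords[k] = ((x + dx) * stride, (y + dy) * stride)
    return coords

# Toy example: 17 COCO joints on a 16x12 map (64x48 input, stride 4).
hm = np.random.rand(17, 12, 16).astype(np.float32)
off = np.random.randn(34, 12, 16).astype(np.float32)
print(decode_keypoints(hm, off).shape)  # (17, 2)
```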

## Main Results

Results on COCO val2017 with the $64\times 48$ input resolution:

| Method | Backbone | AP | AP.5 | AP.75 | AP(M) | AP(L) | AR |
|--------|----------|------|------|-------|-------|-------|------|
| HRNet | HRNet-W32 | 29.7 | 75.7 | 13.1 | 29.3 | 30.7 | 37.3 |
| HRNet | HRNet-W48 | 32.4 | 78.3 | 16.2 | 31.5 | 34.0 | 39.3 |
| UDP | HRNet-W32 | 47.4 | 80.5 | 50.6 | 47.7 | 47.7 | 53.8 |
| UDP | HRNet-W48 | 51.0 | 82.6 | 55.2 | 51.4 | 51.0 | 57.3 |
| HRNet+SPSR | HRNet-W32 | 50.0 | 81.6 | 53.6 | 53.6 | 46.0 | 55.3 |
| HRNet+SPSR | HRNet-W48 | 51.2 | 82.8 | 55.5 | 54.6 | 47.7 | 56.4 |
| UDP+SPSR | HRNet-W32 | 52.5 | 80.8 | 57.7 | 56.2 | 48.3 | 57.4 |
| UDP+SPSR | HRNet-W48 | 54.1 | 82.4 | 59.0 | 56.9 | 50.6 | 59.4 |
| CAL | HRNet-W32 | 58.4 | 86.6 | 65.1 | 57.3 | 60.5 | 64.8 |
| CAL | HRNet-W48 | 61.5 | 88.1 | 68.7 | 60.7 | 63.5 | 66.3 |

Note:

- Flip test is used (see the sketch after this list).
- "+SPSR" means using the SPSR model to recover super-resolution images for pose estimation.
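
The flip test follows the standard recipe: run the model on the original and the horizontally flipped image, flip the predictions back, swap left/right joint channels, and average. A minimal sketch (the `model` callable and the COCO channel ordering are assumptions, not this repo's API):

```python
import torch

# Standard COCO left/right keypoint pairs (ordering assumed).
FLIP_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10),
              (11, 12), (13, 14), (15, 16)]

def flip_test(model, image):
    """image: (1, 3, H, W) tensor. Returns flip-averaged heatmaps."""
    heatmaps = model(image)
    flipped = model(torch.flip(image, dims=[3]))  # flip the width axis
    flipped = torch.flip(flipped, dims=[3])       # flip heatmaps back
    for left, right in FLIP_PAIRS:                # swap L/R channels
        flipped[:, [left, right]] = flipped[:, [right, left]]
    return (heatmaps + flipped) / 2
```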

## Development environment

The code is developed with Python 3.5 on Ubuntu 16.04; NVIDIA GPUs are required. It was developed and tested with four NVIDIA 2080 Ti GPU cards. Other platforms or GPU cards are not fully tested.
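
Before launching anything, a quick check that PyTorch sees the GPUs can rule out driver issues (plain PyTorch calls, not part of this repo):

```python
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  [{i}] {torch.cuda.get_device_name(i)}")
```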

## Quick start

### 1. Preparation

#### 1.1 Prepare the dataset

For the MPII dataset, your directory tree should look like this:

```
$HOME/datasets/MPII
├── annot
├── images
└── mpii_human_pose_v1_u12_1.mat
```

For the COCO dataset, your directory tree should look like this:

```
$HOME/datasets/MSCOCO
├── annotations
├── images
│   ├── test2017
│   ├── train2017
│   └── val2017
└── person_detection_results
    ├── COCO_val2017_detections_AP_H_56_person.json
    └── COCO_test-dev2017_detections_AP_H_609_person.json
```
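
A quick sanity check of this layout can save a failed run. A minimal sketch, assuming the default `$HOME/datasets/MSCOCO` root from the tree above:

```python
from pathlib import Path

root = Path.home() / "datasets" / "MSCOCO"
expected = [
    "annotations",
    "images/train2017",
    "images/val2017",
    "person_detection_results/COCO_val2017_detections_AP_H_56_person.json",
]
for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:>7}  {root / rel}")
```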

#### 1.2 Prepare the pretrained models

Your directory tree should look like this:

```
$HOME/datasets/models
└── pytorch
    ├── imagenet
    │   ├── hrnet_w32-36af842e.pth
    │   ├── hrnet_w48-8ef0771d.pth
    │   ├── resnet50-19c8e357.pth
    │   ├── resnet101-5d3b4d8f.pth
    │   └── resnet152-b121ed2d.pth
    ├── pose_coco
    │   ├── hg4_128x96.pth
    │   ├── hg8_128x96.pth
    │   ├── r50_128x96.pth
    │   ├── r101_128x96.pth
    │   ├── r152_128x96.pth
    │   ├── w32_128x96.pth
    │   ├── w48_128x96.pth
    │   ├── w32_256x192.pth
    │   ├── w32_384x288.pth
    │   └── w48_384x288.pth
    └── pose_mpii
        └── w32_256x256.pth
```
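
To confirm a downloaded checkpoint loads cleanly, the following sketch is one option (filename taken from the tree above; whether the file stores a bare `state_dict` or a wrapper dict is an assumption):

```python
from pathlib import Path
import torch

path = Path.home() / "datasets/models/pytorch/pose_coco/w32_128x96.pth"
state = torch.load(path, map_location="cpu")
# Depending on how it was saved, `state` is either a state_dict or a
# dict wrapping one; either way the top-level keys are informative.
print(type(state).__name__, list(state)[:5])
```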

#### 1.3 Prepare the environment

Set the parameters in the file prepare_env.sh as follows:

```bash
# DATASET_ROOT=$HOME/datasets
# COCO_ROOT=${DATASET_ROOT}/MSCOCO
# MPII_ROOT=${DATASET_ROOT}/MPII
# MODELS_ROOT=${DATASET_ROOT}/models
```

Then execute:

```bash
bash prepare_env.sh
```

If you prefer, you can prepare the environment step by step instead.

### 2. Training and Testing

Testing uses the model zoo's models ([GoogleDrive] [BaiduDrive]):

```bash
# Testing
cd scripts
./tools/run_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE}

# Training
cd scripts
./tools/run_train.sh ${CONFIG_FILE}
```

Examples:

Assume that you have already downloaded the pretrained models and placed them as described in Section 1.2.

1. Testing on the COCO dataset using the HRNet-W32 64x48 model:

   ```bash
   cd scripts
   bash run_test.sh experiments/coco/hrnet/w32_64x48_adam_lr1e-3.yaml \
       models/pytorch/pose_coco/w32_64x48.pth
   ```

2. Training on the COCO dataset:

   ```bash
   cd scripts
   bash run_train.sh experiments/coco/hrnet/w32_64x48_adam_lr1e-3.yaml
   ```
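
Before launching, it can help to peek at the experiment config. A minimal sketch using PyYAML (it only assumes the file parses to a dict; the section names are whatever the repo defines):

```python
import yaml

with open("experiments/coco/hrnet/w32_64x48_adam_lr1e-3.yaml") as f:
    cfg = yaml.safe_load(f)

# Print the top-level sections to verify the file parsed as expected.
for key in cfg:
    print(key)
```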

## Citation

If you use our code or models in your research, please cite:

```bibtex
@article{wang2022low,
  title={Low-resolution human pose estimation},
  author={Wang, Chen and Zhang, Feng and Zhu, Xiatian and Ge, Shuzhi Sam},
  journal={Pattern Recognition},
  volume={126},
  pages={108579},
  year={2022}
}
```

## Discussion forum

ILovePose

## Acknowledgement

Thanks to the open-source DARK, UDP, and HRNet projects.
