Actions as Moving Points

This repo updates MOC-Detector to PyTorch 1.4 with CUDA 10.

  • For everything else, please refer to the original authors.
  • The following is the authors' original introduction.

PyTorch implementation of Actions as Moving Points (ECCV 2020).

View each action instance as a trajectory of moving points.

Visualization results on the validation set. (The GIFs may take a few minutes to load.)

(Note that the relatively low confidence scores are a property of the focal loss.)
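Center heatmaps in detectors of this kind are typically trained with a penalty-reduced focal loss (as in CenterNet), which keeps confidence scores well below 1 even for correct detections. The sketch below is a generic version of that loss, assuming sigmoid heatmap outputs and Gaussian-splatted ground truth; it is illustrative and not necessarily this repository's exact implementation.

```python
import torch

def center_focal_loss(pred, gt, alpha=2, beta=4):
    """Penalty-reduced focal loss for center heatmaps (CenterNet style).

    pred: predicted heatmap after sigmoid, shape (B, C, H, W).
    gt:   ground-truth heatmap with Gaussian-splatted centers, same shape;
          pixels equal to 1 are true centers.
    """
    pred = pred.clamp(1e-6, 1 - 1e-6)  # avoid log(0)
    pos_mask = gt.eq(1).float()        # exact center pixels
    neg_mask = gt.lt(1).float()        # all other pixels

    # Positives: the focal term (1 - p)^alpha down-weights easy examples,
    # so predictions are pushed toward 1 but rarely reach it.
    pos_loss = torch.log(pred) * (1 - pred) ** alpha * pos_mask
    # Negatives: (1 - gt)^beta reduces the penalty for pixels near a true center.
    neg_loss = torch.log(1 - pred) * pred ** alpha * (1 - gt) ** beta * neg_mask

    num_pos = pos_mask.sum().clamp(min=1)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```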



News & Updates

Jan. 23, 2021 - Updated for PyTorch 1.4 with CUDA 10.

Aug. 23, 2020 - We upload MOC with a ResNet-18 backbone.

Aug. 17, 2020 - Our visualization now supports instance-level detection results (reflecting video mAP).

Aug. 02, 2020 - Updated the visualization code: extract frames from a video and get detection results (like the GIFs above).

Jul. 24, 2020 - Updated the UCF-pretrained JHMDB model and the speed-test code.

Jul. 08, 2020 - First release of the code.


MOC Detector Overview

  We present a new action tubelet detection framework, termed MovingCenter detector (MOC-detector), which treats an action instance as a trajectory of moving points. MOC-detector is decomposed into three crucial head branches (a minimal sketch of these heads follows the list below):

  • (1) Center Branch for instance center detection and action recognition.
  • (2) Movement Branch for estimating movement between adjacent frames to form moving-point trajectories.
  • (3) Box Branch for spatial extent detection by directly regressing the bounding-box size at the estimated center point of each frame.
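
Below is a minimal sketch of the three head branches, assuming per-frame backbone features for K input frames concatenated along the channel axis. The class name `MOCHead`, the channel counts, and the layer configuration are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class MOCHead(nn.Module):
    """Illustrative three-branch head for K-frame tubelet detection.

    Assumes the backbone yields a (B, K * C, H, W) tensor: per-frame
    feature maps for K frames, concatenated along the channel axis.
    """

    def __init__(self, k_frames, feat_channels, num_classes):
        super().__init__()
        in_ch = k_frames * feat_channels

        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, out_ch, kernel_size=1),
            )

        # (1) Center Branch: one key-frame heatmap per action class.
        self.center = branch(num_classes)
        # (2) Movement Branch: (dx, dy) from the key-frame center to the
        #     center in each of the K frames.
        self.movement = branch(2 * k_frames)
        # (3) Box Branch: (w, h) of the box at each frame's moved center.
        self.box = branch(2 * k_frames)

    def forward(self, feats):
        return {
            "center_heatmap": torch.sigmoid(self.center(feats)),
            "movement": self.movement(feats),
            "box_size": self.box(feats),
        }
```

At inference time, tubelets would be decoded by taking local peaks in the center heatmap, reading the K movement offsets at each peak, and placing a box of the regressed size at each moved center.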


MOC-Detector Usage

1. Installation

Please refer to Installation.md for installation instructions.
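
Since this fork targets PyTorch 1.4 with CUDA 10, it can save time to sanity-check the environment right after installation. This snippet uses only standard torch attributes; the exact CUDA version string may vary (e.g. '10.0' vs '10.1').

```python
import torch

# Verify the environment matches this fork's target: PyTorch 1.4 + CUDA 10.
print("torch version:", torch.__version__)         # expect something like '1.4.0'
print("built with CUDA:", torch.version.cuda)      # expect a '10.x' string
print("CUDA available:", torch.cuda.is_available())

assert torch.__version__.startswith("1.4"), "this fork targets PyTorch 1.4"
assert torch.version.cuda and torch.version.cuda.startswith("10"), \
    "this fork targets CUDA 10"
```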


2. Dataset

Please refer to Dataset.md for dataset setup instructions.


3. Evaluation

You can follow the instructions in Evaluation.md to evaluate our model and reproduce the results in the original paper.


4. Train

You can follow the instructions in Train.md to train our models.


5. Visualization

You can follow the instructions in Visualization.md to get visualization results.



References

Citation

If you find this code useful in your research, please cite:

@InProceedings{li2020actions,
    title={Actions as Moving Points},
    author={Yixuan Li and Zixu Wang and Limin Wang and Gangshan Wu},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2020}
}
If there is any infringement, please contact [email protected].