
fall-person-recognition


NEWS

2023.9 —— Support RTMPose.

2023.8 —— Real-time skeleton-based fall detection is available now. F1 score: 94.3%.

2023.8 —— The skeleton-based fall detection dataset is available. It is reprocessed from the UR Fall Dataset and contains 6,872 samples in total. Google Drive

2023.8 —— Real-time multi-person skeleton-based action recognition is available. 31 FPS on Apple M1 Pro (CPU).

2023.8 —— Real-time multi-person pose estimation is available. 33 FPS on Apple M1 Pro (CPU).

2023.7 —— The skeleton-based action recognition dataset has been added to the project. It contains over 110,000 samples. Google Drive

2023.7 —— Multi-person pose estimation is available.

2023.2 —— Real-time single-person pose estimation is available.

TODO

  • Retrain on the COCO dataset (47 FPS).
  • Real-time human body detector based on PicoDet (73 FPS).
  • Pose tracking based on ByteTrack.
    • Bounding-box tracking.
    • Skeleton tracking.
  • Action recognition model based on deep learning.
    • Dataset (NTU-120).
    • Experiment with related methods.
    • Construct a lightweight model.
    • Improve model performance.
  • Improve pipeline performance.
    • Skeleton noise filtering in video inference (One Euro filter).
    • Improve PicoDet.
    • Improve the pipeline.
    • Improve the tracker.
  • Release the Python deployment.
  • Release the C++ deployment.
  • Parallel computing on servers.
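The TODO list above names the One Euro filter for smoothing skeleton noise during video inference. A minimal sketch of that filter, assuming each keypoint coordinate is filtered independently as a scalar (the project's actual filter class and parameter defaults are not shown here):

```python
import math

class OneEuroFilter:
    """One Euro filter: a speed-adaptive low-pass filter for noisy signals."""

    def __init__(self, rate=30.0, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.dt = 1.0 / rate          # seconds between frames
        self.min_cutoff = min_cutoff  # base smoothing cutoff (Hz)
        self.beta = beta              # speed coefficient: higher = less lag
        self.d_cutoff = d_cutoff      # cutoff for the derivative filter
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Smoothing factor of a first-order low-pass filter at this cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / self.dt)

    def __call__(self, x):
        if self.x_prev is None:       # first sample passes through unchanged
            self.x_prev = x
            return x
        # Estimate (and smooth) the signal's speed.
        dx = (x - self.x_prev) / self.dt
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Fast motion raises the cutoff (less smoothing, less lag);
        # slow motion lowers it (more smoothing, less jitter).
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

# Smooth a jittery sequence of normalized keypoint x-coordinates.
f = OneEuroFilter(rate=30.0)
smoothed = [f(v) for v in [0.0, 1.0, 0.0, 1.0, 0.5]]
```

Because each output is a convex combination of the new sample and the previous filtered value, the smoothed trajectory stays within the range of its inputs while suppressing frame-to-frame jitter.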

Usage

python video.py --video_path YOUR_VIDEO_PATH [--disable_filter] [--save_result] [--save_path result.mov] [--skeleton_visible] [--verbose]

Arguments:

video_path: REQUIRED. The video to process frame by frame.

disable_filter: Disable the filter used to smooth the skeletons.

save_result: Save the inference result. save_path must be specified when save_result is set.

save_path: Path where the inference result is saved.

skeleton_visible: Draw skeletons on frames in real time.

verbose: Print timing details during inference.
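The flags above map naturally onto an argparse parser. A hypothetical sketch (flag names follow the usage line; the project's actual parser may differ):

```python
import argparse

def build_parser():
    # Hypothetical CLI definition mirroring the README's documented flags.
    p = argparse.ArgumentParser(
        description="Skeleton-based fall detection on a video file")
    p.add_argument("--video_path", required=True,
                   help="video to process frame by frame")
    p.add_argument("--disable_filter", action="store_true",
                   help="disable skeleton smoothing")
    p.add_argument("--save_result", action="store_true",
                   help="save the inference result")
    p.add_argument("--save_path", default="result.mov",
                   help="path where the inference result is saved")
    p.add_argument("--skeleton_visible", action="store_true",
                   help="draw skeletons on frames in real time")
    p.add_argument("--verbose", action="store_true",
                   help="print timing details during inference")
    return p

# Example invocation with a save request; save_path keeps its default.
args = build_parser().parse_args(["--video_path", "demo.mp4", "--save_result"])
```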

Docs

Config: Config

Project files info: Files

Download

Skeleton-Based Action Recognition Dataset

Link: Google Drive (no limit) | Baidu Yun

Skeleton-Based Fall Detection Dataset

Link: Google Drive (no limit) | Baidu Yun

Results

Skeleton-based action recognition model performances

| Model | Accuracy in experiment (%) | Accuracy in paper (%) | Latency (ms) | Params (M) |
| --- | --- | --- | --- | --- |
| ST-GCN | 85.8 | 88.8 (3D) | 79.2 ± 4.4 | 3.095 |
| SGN | 74.6 | 79.2 (3D) | 6.55 ± 0.47 | 0.721 |
| SGN+ (ours) | 84.3 | - | 12.4 ± 0.35 | 0.662 |
| STID (TODO) | - | - | - | - |

Skeleton-based Fall detection results

(Demo GIFs of skeleton-based fall detection and an SGN+ result figure.)

Model's performances in pipeline

| Model | FLOPS (G) | Params (M) | Latency (ms/frame)* | Info |
| --- | --- | --- | --- | --- |
| RTMPose | 0.68 | 5.47 | 6.2 / person | Pose estimation model |
| PicoDet | 1.18 | 0.97 | 13.7 | Human detection model |
| ByteTrack | - | - | 7.3 | Human tracking model |
| SGN+ | 2.73 | 0.662 | 12.4 | Action recognition model |
| Total | 4.58 | 6.09 | 42.6 | |

*Measured on Apple M1 Pro (8+2 cores).
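The table above implies a per-frame pipeline of detection, tracking, per-person pose estimation, and action recognition over a sliding skeleton window. A hypothetical sketch of how those four stages compose (the model calls are stub placeholders, not the project's real APIs, and the 30-frame window length is an assumption):

```python
def detect(frame):
    # Stub for PicoDet: return person bounding boxes (x1, y1, x2, y2).
    return [(10, 20, 110, 220)]

def track(boxes):
    # Stub for ByteTrack: assign a persistent ID to each box.
    return {1: boxes[0]}

def estimate_pose(frame, box):
    # Stub for RTMPose: return 17 normalized keypoints for one person.
    return [(0.5, 0.4)] * 17

def recognize(skeleton_window):
    # Stub for SGN+: classify once a full 30-frame skeleton clip exists.
    return "fall" if len(skeleton_window) >= 30 else "unknown"

def run_frame(frame, buffers):
    # One pipeline step: detect -> track -> pose -> buffer -> recognize.
    boxes = detect(frame)
    for pid, box in track(boxes).items():
        buffers.setdefault(pid, []).append(estimate_pose(frame, box))
        yield pid, recognize(buffers[pid])

# Feed 30 dummy frames; the verdict flips once the clip buffer fills.
buffers = {}
for _ in range(30):
    results = dict(run_frame(object(), buffers))
```

Keeping one skeleton buffer per tracked ID is what lets the action model see a temporally consistent clip even when several people are in frame.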

Result

Multi-person results (detector: PicoDet, pose estimator: MFNet): football, sitting, standing, and wide-angle scenes.

Plan

Stage 1

Build a real-time pose estimation model. Done

Target: AP 0.80+, FPS 18+. Result: AP 0.93, FPS 47+.

Stage 2

Build a lightweight skeleton-based fall recognition model on top of the pose estimator from stage 1.

Target: Accuracy: 0.95+, CPU-REALTIME

Stage 3

Deploy the models with MicroPython, Django, and Flutter. MicroPython captures data on edge devices and uploads it to the server. Django (or any server framework) builds the backend, which runs inference on frames and sends results to mobile devices. Flutter builds an app that receives the results computed by the server, such as whether the target has fallen.
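The stage 3 plan is an edge-capture / server-inference / client-notify loop. A minimal stdlib-only sketch of the server half and an edge-side upload (the README names Django for the real backend; the `/infer` route, the JSON schema, and the height-based `classify` heuristic here are all illustrative assumptions, not the project's protocol):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def classify(keypoints):
    # Placeholder for the SGN+ model: toy heuristic that calls it a fall
    # when the mean keypoint height is low in the frame (normalized
    # coordinates, origin at top-left, so larger y = lower in frame).
    avg_y = sum(y for _, y in keypoints) / len(keypoints)
    return avg_y > 0.8

class InferHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Backend: read one frame's skeleton, respond with a verdict.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        keypoints = json.loads(body)["keypoints"]
        result = json.dumps({"fall": classify(keypoints)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), InferHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Edge-device side: upload one frame's skeleton and read the verdict.
payload = json.dumps({"keypoints": [[0.5, 0.9], [0.6, 0.85]]}).encode()
req = Request(f"http://127.0.0.1:{server.server_port}/infer", data=payload,
              headers={"Content-Type": "application/json"})
verdict = json.loads(urlopen(req).read())
server.shutdown()
```

In the planned design the Flutter app would poll or subscribe to these verdicts rather than the edge device reading them back directly.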


About

A lightweight network to recognize fallen persons.
