English | 简体中文
2023.9 —— Support RTMPose.
2023.8 —— Real-time skeleton-based fall detection is available now. F1 score: 94.3%.
2023.8 —— The skeleton-based fall detection dataset is available, reprocessed from the UR Fall Dataset and containing 6,872 samples in total. Google Drive
2023.8 —— Real-time multi-person skeleton-based action recognition is available. 31 FPS on Apple M1 Pro (CPU).
2023.8 —— Real-time multi-person pose estimation is available. 33 FPS on Apple M1 Pro (CPU).
2023.7 —— The skeleton-based action recognition dataset has been added to the project, containing over 110,000 samples. Google Drive
2023.7 —— Multi-person pose estimation is available.
2023.2 —— Real-time single-person pose estimation is available.
- Retrained on the COCO dataset (47 FPS).
- Real-time human body detector based on PicoDet (73 FPS).
- Pose tracking based on ByteTrack.
- Bounding-box tracking.
- Skeleton tracking.
- Action recognition model based on deep learning.
- Dataset (NTU-120).
- Experiments on related methods.
- Building a lightweight model.
- Improve model performance.
- Skeleton noise filter in video inference (One Euro filter).
- Improve PicoDet.
- Improve the pipeline.
- Improve the tracker.
- Release the Python deployment version.
- Release the C++ deployment version.
- Parallel computing on servers.
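The skeleton noise filter named above is the One Euro filter. A minimal sketch of that filter (standard parameterization from Casiez et al., applied here to a single joint coordinate; not this repo's exact implementation) looks like:

```python
import math

class OneEuroFilter:
    """Minimal One Euro filter for smoothing one skeleton coordinate.

    Low speeds get heavy smoothing (less jitter); high speeds get a
    higher cutoff (less lag). Apply one instance per joint per axis.
    """

    def __init__(self, freq=30.0, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.freq = freq            # sampling rate in Hz (e.g. video FPS)
        self.min_cutoff = min_cutoff
        self.beta = beta            # speed coefficient
        self.d_cutoff = d_cutoff    # cutoff for the derivative filter
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Exponential smoothing factor for a given cutoff frequency.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        if self.x_prev is None:     # first sample passes through unchanged
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq          # estimated speed
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev     # smoothed output
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

In the video pipeline this would be applied to each keypoint between pose estimation and drawing, which is what `--disable_filter` turns off.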
```shell
python video.py --video_path YOUR_VIDEO_PATH [--disable_filter] [--save_result] [--save_path result.mov] [--skeleton_visible] [--verbose]
```
Arguments:
- `--video_path`: REQUIRED. The video to process frame by frame.
- `--disable_filter`: Disable the filter used to smooth the skeleton.
- `--save_result`: Save the inference result. `--save_path` must be specified if `--save_result` is set.
- `--save_path`: Path where the inference result is saved.
- `--skeleton_visible`: Draw the skeleton on each frame in real time.
- `--verbose`: Print timing details during inference.
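For reference, the flags above could be wired up with `argparse` roughly as follows. This is a hypothetical sketch of the interface, not the actual contents of `video.py`:

```python
import argparse

def build_parser():
    # Sketch of video.py's command-line interface; defaults are assumptions
    # except --save_path, whose result.mov default appears in the usage line.
    p = argparse.ArgumentParser(description="Frame-by-frame video inference")
    p.add_argument("--video_path", required=True,
                   help="video to process frame by frame")
    p.add_argument("--disable_filter", action="store_true",
                   help="disable the skeleton smoothing filter")
    p.add_argument("--save_result", action="store_true",
                   help="save the inference result (requires --save_path)")
    p.add_argument("--save_path", default="result.mov",
                   help="path where the inference result is saved")
    p.add_argument("--skeleton_visible", action="store_true",
                   help="draw the skeleton on each frame in real time")
    p.add_argument("--verbose", action="store_true",
                   help="print timing details during inference")
    return p

args = build_parser().parse_args(["--video_path", "demo.mov", "--save_result"])
print(args.video_path, args.save_result, args.save_path)
```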
Project files info: Files
Link: Google Drive (no limit) | Baidu Yun
Model | Accuracy in experiment (%) | Accuracy in paper (%) | Latency (ms) | Params (M)
---|---|---|---|---
ST-GCN | 85.8 | 88.8 (3D) | 79.2 ± 4.4 | 3.095
SGN | 74.6 | 79.2 (3D) | 6.55 ± 0.47 | 0.721
SGN+ (Ours) | 84.3 | - | 12.4 ± 0.35 | 0.662
STID (TODO) | - | - | - | -
Model | FLOPs (G) | Params (M) | Latency (ms/frame)* | Info
---|---|---|---|---
RTMPose | 0.68 | 5.47 | 6.2/person | Pose estimation model.
PicoDet | 1.18 | 0.97 | 13.7 | Human detection model.
ByteTrack | - | - | 7.3 | Human tracking model.
SGN+ | 2.73 | 0.662 | 12.4 | Action recognition model.
Total | 4.58 | 6.09 | 42.6 |

\* Evaluated on Apple M1 Pro (8+2 cores).
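Since the pose estimator runs once per person while the other stages run once per frame, the per-frame latency can be budgeted roughly as below, using the stage latencies from the table (the reported total is slightly higher than the plain sum, presumably due to pipeline overhead):

```python
def per_frame_latency_ms(num_persons,
                         detect_ms=13.7, track_ms=7.3,
                         pose_ms_per_person=6.2, action_ms=12.4):
    """Rough per-frame latency estimate from the stage latencies above.

    Only the pose estimator scales with the number of people in frame;
    detection, tracking, and action recognition run once per frame.
    """
    return detect_ms + track_ms + pose_ms_per_person * num_persons + action_ms

for n in (1, 3, 5):
    lat = per_frame_latency_ms(n)
    print(f"{n} person(s): {lat:.1f} ms/frame (~{1000 / lat:.0f} FPS)")
```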
Detector: PicoDet | Pose estimator: MFNet
Build a real-time pose estimation model. Done
Target: AP 0.80+, FPS 18+. Result: AP 0.93, FPS 47+.
Build a lightweight skeleton-based fall recognition model on top of the stage-1 pose estimator.
Target: Accuracy 0.95+, real-time on CPU.
Deploy models based on MicroPython, Django, and Flutter. MicroPython is used to capture data on edge devices and upload it to the server. Django (or any other server framework) is used to build the backend, which is responsible for running inference on frames and sending results to mobile devices. Flutter is used to build an app that receives the results computed by the server, such as whether the target has fallen.
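As a concrete sketch of the server-to-app hop, the backend could serialize each per-person result as JSON before pushing it to the Flutter client. All field names below are illustrative assumptions, not the project's actual schema:

```python
import json

# Hypothetical payload the backend might send per detected person;
# field names are assumptions for illustration, not the real schema.
result = {
    "person_id": 3,                # track id assigned by ByteTrack
    "action": "falling",           # label from the action recognition model
    "confidence": 0.97,            # model confidence for that label
    "bbox": [120, 64, 310, 420],   # x1, y1, x2, y2 from the detector
}

msg = json.dumps(result)           # what goes over the wire to the app
print(msg)
```

On the Flutter side the app would decode the same JSON and, for example, raise an alert whenever `action` is `"falling"`.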