Revamp demos and dockerize them for reproducibility #146

Merged · 45 commits · Aug 31, 2022
Changes from all commits (45 commits)
37be310
Start dockerifying demos
Aug 16, 2022
0e20334
Remove unnecessary dependencies
Aug 17, 2022
d15ccfc
Clone norfair's repo in the profiling demo
Aug 17, 2022
7241415
Dockerify openpose and minor change in detectron2
Aug 17, 2022
1789157
Fix openpose Readme
Aug 17, 2022
32cd8e7
Change container tag in openpose
Aug 17, 2022
7e486a7
Minor change in Dockerfile
Aug 17, 2022
3e779ed
Dockerize alphapose
Aug 17, 2022
bfbd113
Dockerify yolov5 demo
Aug 17, 2022
ef2c934
Dockerify yolov4
Aug 17, 2022
78a8d30
Dockerify mmdetection
Aug 17, 2022
29bede0
Dockerify keypoints-bounding-boxes demo
Aug 17, 2022
5351d73
Dockerify motmetrics4norfair
Aug 17, 2022
0c74af2
Better docker practices
Aug 18, 2022
89bec5b
Change argument syntaxis
Aug 19, 2022
c199fba
Use torch hub to load yolov5 models
Aug 19, 2022
9c8de46
Split Dockerfiles into logical units
Aug 19, 2022
ff48fbd
Change thres and thresh arguments to threshold
Aug 22, 2022
e75c8cb
Use last alphapose version
Aug 24, 2022
0aff87b
Change YOLOv5 demo.
dekked Aug 23, 2022
8a05532
Change YOLOv4 demo.
dekked Aug 23, 2022
6804fe2
Change MOT Metrics demo.
dekked Aug 23, 2022
35c4a22
Change mmdetection demo.
dekked Aug 23, 2022
f681b25
Update Detectron2 demo.
dekked Aug 26, 2022
e961ffc
Update keypoints_bounding_boxes demo.
dekked Aug 26, 2022
879038c
Missing files and README updates.
dekked Aug 26, 2022
fdb1563
Update AlphaPose demo.
dekked Aug 28, 2022
3ec64cf
Update scripts.
dekked Aug 29, 2022
d85a760
Make run_gpu.sh executable
Aug 29, 2022
d9f093a
Set /demo/src as initial directory
Aug 29, 2022
e5641ff
Remove python path '/norfair/'
Aug 29, 2022
a05cde6
Remove /norfair/ python path
Aug 29, 2022
62b5f25
Update profiling demo
Aug 29, 2022
a59d3bc
Set /demo/src as initial directory
Aug 29, 2022
aba10e4
Update YOLOv7 demo.
dekked Aug 29, 2022
7dce26d
Improve Dockerfiles and demo instructions.
dekked Aug 30, 2022
ac95978
Detectron2 demo should work now.
dekked Aug 30, 2022
188043c
Update main README.
dekked Aug 30, 2022
4116ced
Fix and improve MOT metrics.
dekked Aug 30, 2022
fb6323d
Update profiling demo.
dekked Aug 30, 2022
4cf6033
Moved Motivation down in README.
dekked Aug 30, 2022
6701829
Update ReID demo.
dekked Aug 30, 2022
c12dbbe
ReID demo on main readme.
dekked Aug 30, 2022
0b2a5df
Add note in readme.
dekked Aug 31, 2022
c692e9e
Add missing flags to Docker.
dekked Aug 31, 2022
108 changes: 62 additions & 46 deletions README.md

Large diffs are not rendered by default.

49 changes: 49 additions & 0 deletions demos/alphapose/Dockerfile
@@ -0,0 +1,49 @@
# https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags
FROM nvcr.io/nvidia/pytorch:22.08-py3

WORKDIR /root

ENV DEBIAN_FRONTEND=noninteractive
ENV LANG=C.UTF-8

RUN apt-get update && \
    apt-get install -y \
        sudo libyaml-dev git gcc build-essential wget unzip \
        libosmesa6-dev libgl1-mesa-dev libglu1-mesa-dev \
        locales && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# install alphapose
# https://pythontechworld.com/issue/mvig-sjtu/alphapose/1067
# https://github.com/MVIG-SJTU/AlphaPose/issues/1072
# https://github.com/MVIG-SJTU/AlphaPose/issues/572
RUN git clone https://github.com/MVIG-SJTU/AlphaPose.git ./AlphaPose/ && \
    pip3 install cython && \
    ln -s /usr/bin/ninja /usr/bin/ninja-build && \
    wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip && \
    sudo unzip ninja-linux.zip -d /usr/local/bin/ && \
    sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force && \
    git clone https://gitlab.freedesktop.org/mesa/glu ./glu/ && \
    pip3 install meson && \
    cd ./glu/ && mkdir build && cd build && meson --prefix=$XORG_PREFIX -Dgl_provider=gl --buildtype=release --prefix '/' .. && ninja && sudo ninja install && rm -vf /usr/lib/libGLU.a

COPY setup.py /root/AlphaPose/setup.py

# Our own flavor of DataWriter, which includes Norfair tracking
COPY writer.py /root/AlphaPose/alphapose/utils/writer.py

RUN cd /root/AlphaPose/ && python3 setup.py build develop --user

RUN pip3 install gdown==4.4.0 && \
    mkdir /root/AlphaPose/detector/yolo/data/ && \
    gdown --id 1D47msNOOiJKvPOXlnpyzdKA3k6E97NTC -O /root/AlphaPose/detector/yolo/data/yolov3-spp.weights && \
    gdown --id 1nlnuYfGNuHWZztQHXwVZSL_FvfE551pA -O /root/AlphaPose/pretrained_models/jde.1088x608.uncertainty.pt && \
    gdown --id 1kQhnMRURFiy7NsdS8EFL-8vtqEXOgECn -O /root/AlphaPose/pretrained_models/fast_res50_256x192.pth

# if you want to use yolox-x as the detector
# RUN wget -P /root/AlphaPose/detector/yolox/data/ https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.0/yolox_x.pth

RUN pip install git+https://github.com/tryolabs/norfair.git@master#egg=norfair

WORKDIR /root/AlphaPose/
73 changes: 12 additions & 61 deletions demos/alphapose/README.md
@@ -1,78 +1,29 @@
 # Tracking pedestrians with AlphaPose
 
-An example of how to integrate Norfair into the video inference loop of a pre existing repository. This example uses Norfair to try out custom trackers on [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose).
+An example of how to integrate Norfair into the video inference loop of a pre-existing solution. This example uses Norfair to try out custom trackers on [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose), which has a non-trivial inference loop.
 
 ## Instructions
 
-1. Install Norfair with `pip install norfair[video]`.
-2. [Follow the instructions](https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch#installation) to install the Pytorch version of AlphaPose.
-3. Apply this diff to this [commit](https://github.com/MVIG-SJTU/AlphaPose/commit/ded84d450faf56227680f0527ff7e24ab7268754) on AlphaPose and use their [video_demo.py](https://github.com/MVIG-SJTU/AlphaPose/blob/ded84d450faf56227680f0527ff7e24ab7268754/video_demo.py) to process your video.
-
-```diff
-diff --git a/dataloader.py b/dataloader.py
-index ed6ee90..a7dedb0 100644
---- a/dataloader.py
-+++ b/dataloader.py
-@@ -17,6 +17,8 @@ import cv2
- import json
- import numpy as np
-+import norfair
- import time
- import torch.multiprocessing as mp
- from multiprocessing import Process
-@@ -606,6 +608,17 @@ class WebcamLoader:
-         # indicate that the thread should be stopped
-         self.stopped = True
-
-+detection_threshold = 0.2
-+keypoint_dist_threshold = None
-+def keypoints_distance(detected_pose, tracked_pose):
-+    distances = np.linalg.norm(detected_pose.points - tracked_pose.estimate, axis=1)
-+    match_num = np.count_nonzero(
-+        (distances < keypoint_dist_threshold)
-+        * (detected_pose.scores > detection_threshold)
-+        * (tracked_pose.last_detection.scores > detection_threshold)
-+    )
-+    return 1 / (1 + match_num)
-+
- class DataWriter:
-     def __init__(self, save_video=False,
-                 savepath='demos/res/1.avi', fourcc=cv2.VideoWriter_fourcc(*'XVID'), fps=25, frameSize=(640,480),
-@@ -624,6 +637,11 @@ class DataWriter:
-             if opt.save_img:
-                 if not os.path.exists(opt.outputpath + '/vis'):
-                     os.mkdir(opt.outputpath + '/vis')
-+        self.tracker = norfair.Tracker(
-+            distance_function=keypoints_distance,
-+            distance_threshold=0.3,
-+            detection_threshold=0.2
-+        )
-
-     def start(self):
-         # start a thread to read frames from the file video stream
-@@ -672,7 +690,15 @@ class DataWriter:
-                 }
-                 self.final_result.append(result)
-                 if opt.save_img or opt.save_video or opt.vis:
--                    img = vis_frame(orig_img, result)
-+                    img = orig_img.copy()
-+                    global keypoint_dist_threshold
-+                    keypoint_dist_threshold = img.shape[0] / 30
-+                    detections = [
-+                        norfair.Detection(p['keypoints'].numpy(), scores=p['kp_score'].squeeze().numpy())
-+                        for p in result['result']
-+                    ]
-+                    tracked_objects = self.tracker.update(detections=detections)
-+                    norfair.draw_tracked_objects(img, tracked_objects)
-                 if opt.vis:
-                     cv2.imshow("AlphaPose Demo", img)
-                     cv2.waitKey(30)
-```
+1. Build and run the Docker container with `./run_gpu.sh`.
+
+2. In the container, display the demo instructions:
+
+   ```bash
+   python3 scripts/demo_inference.py --help
+   ```
+
+   In the container, use the `/demo` folder as a volume to share files with the container.
+
+   ```bash
+   python3 scripts/demo_inference.py --detector yolo --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video /demo/video.mp4 --save_video --outdir /demo/
+   ```
 
 ## Explanation
 
 With Norfair, you can try out your own custom tracker on the very accurate poses produced by AlphaPose by just integrating it into AlphaPose itself, and therefore avoiding the difficult job of decoupling the model from the code base.
 
+This example modifies AlphaPose's original `writer.py` file, integrating a few lines that add Norfair tracking over the existing codebase.
+
 This produces the following results:
 
 ![Norfair AlphaPose demo](../../docs/alphapose.gif)
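
For readers who only want the gist of the integration, the Norfair hook added to `writer.py` boils down to roughly the sketch below. It is reconstructed from the patch shown in the removed README above, so the thresholds and the `track_frame` wrapper name are illustrative and may not match the shipped `writer.py` byte for byte.

```python
import norfair
import numpy as np

# Thresholds follow the patch above; keypoint_dist_threshold is set per frame
# from the image height before the tracker is updated.
DETECTION_THRESHOLD = 0.2
keypoint_dist_threshold = None


def keypoints_distance(detected_pose, tracked_pose):
    # Count keypoints that are close to the tracked estimate and confidently
    # scored in both the new detection and the object's last detection, then
    # turn that count into a distance (more matching keypoints -> smaller distance).
    distances = np.linalg.norm(detected_pose.points - tracked_pose.estimate, axis=1)
    match_num = np.count_nonzero(
        (distances < keypoint_dist_threshold)
        * (detected_pose.scores > DETECTION_THRESHOLD)
        * (tracked_pose.last_detection.scores > DETECTION_THRESHOLD)
    )
    return 1 / (1 + match_num)


tracker = norfair.Tracker(
    distance_function=keypoints_distance,
    distance_threshold=0.3,
    detection_threshold=DETECTION_THRESHOLD,
)


def track_frame(img, result):
    # Called once per frame with AlphaPose's pose results for that frame;
    # draws Norfair's tracked objects on top of the image. (Illustrative wrapper.)
    global keypoint_dist_threshold
    keypoint_dist_threshold = img.shape[0] / 30
    detections = [
        norfair.Detection(p["keypoints"].numpy(), scores=p["kp_score"].squeeze().numpy())
        for p in result["result"]
    ]
    tracked_objects = tracker.update(detections=detections)
    norfair.draw_tracked_objects(img, tracked_objects)
    return img
```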
8 changes: 8 additions & 0 deletions demos/alphapose/run_gpu.sh
@@ -0,0 +1,8 @@
#!/usr/bin/env -S bash -e
docker build . -t norfair-alphapose
docker run -it --rm \
    --gpus all \
    --shm-size=5gb \
    -v `realpath .`:/demo \
    norfair-alphapose \
    bash
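
`run_gpu.sh` builds the image and drops you into an interactive shell. For a one-shot run you could instead pass the demo command straight to `docker run`; the sketch below assumes the same `norfair-alphapose` tag and a `video.mp4` placed next to the script, and reuses the flags from the README above.

```bash
docker build . -t norfair-alphapose
docker run -it --rm --gpus all --shm-size=5gb \
    -v "$(realpath .)":/demo \
    norfair-alphapose \
    python3 scripts/demo_inference.py \
        --detector yolo \
        --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
        --checkpoint pretrained_models/fast_res50_256x192.pth \
        --video /demo/video.mp4 --save_video --outdir /demo/
```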
223 changes: 223 additions & 0 deletions demos/alphapose/setup.py
@@ -0,0 +1,223 @@
import os
import platform
import subprocess
import time

import numpy as np
from Cython.Build import cythonize
from setuptools import Extension, find_packages, setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

MAJOR = 0
MINOR = 5
PATCH = 0
SUFFIX = ''
SHORT_VERSION = '{}.{}.{}{}'.format(MAJOR, MINOR, PATCH, SUFFIX)

version_file = 'alphapose/version.py'


def readme():
    with open('README.md') as f:
        content = f.read()
    return content


def get_git_hash():

    def _minimal_ext_cmd(cmd):
        # construct minimal environment
        env = {}
        for k in ['SYSTEMROOT', 'PATH', 'HOME']:
            v = os.environ.get(k)
            if v is not None:
                env[k] = v
        # LANGUAGE is used on win32
        env['LANGUAGE'] = 'C'
        env['LANG'] = 'C'
        env['LC_ALL'] = 'C'
        out = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
        return out

    try:
        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
        sha = out.strip().decode('ascii')
    except OSError:
        sha = 'unknown'

    return sha


def get_hash():
    if os.path.exists('.git'):
        sha = get_git_hash()[:7]
    elif os.path.exists(version_file):
        try:
            from alphapose.version import __version__
            sha = __version__.split('+')[-1]
        except ImportError:
            raise ImportError('Unable to get git version')
    else:
        sha = 'unknown'

    return sha


def write_version_py():
    content = """# GENERATED VERSION FILE
# TIME: {}

__version__ = '{}'
short_version = '{}'
"""
    sha = get_hash()
    VERSION = SHORT_VERSION + '+' + sha

    with open(version_file, 'w') as f:
        f.write(content.format(time.asctime(), VERSION, SHORT_VERSION))


def get_version():
    with open(version_file, 'r') as f:
        exec(compile(f.read(), version_file, 'exec'))
    return locals()['__version__']


def make_cython_ext(name, module, sources):
    extra_compile_args = None
    if platform.system() != 'Windows':
        extra_compile_args = {
            'cxx': ['-Wno-unused-function', '-Wno-write-strings']
        }

    extension = Extension(
        '{}.{}'.format(module, name),
        [os.path.join(*module.split('.'), p) for p in sources],
        include_dirs=[np.get_include()],
        language='c++',
        extra_compile_args=extra_compile_args)
    extension, = cythonize(extension)
    return extension


def make_cuda_ext(name, module, sources):

    return CUDAExtension(
        name='{}.{}'.format(module, name),
        sources=[os.path.join(*module.split('.'), p) for p in sources],
        extra_compile_args={
            'cxx': [],
            'nvcc': [
                '-D__CUDA_NO_HALF_OPERATORS__',
                '-D__CUDA_NO_HALF_CONVERSIONS__',
                '-D__CUDA_NO_HALF2_OPERATORS__',
            ]
        })


def get_ext_modules():
    ext_modules = []
    # only windows visual studio 2013+ support compile c/cuda extensions
    # If you force to compile extension on Windows and ensure appropriate visual studio
    # is intalled, you can try to use these ext_modules.
    force_compile = False
    if platform.system() != 'Windows' or force_compile:
        ext_modules = [
            make_cython_ext(
                name='soft_nms_cpu',
                module='detector.nms',
                sources=['src/soft_nms_cpu.pyx']),
            make_cuda_ext(
                name='nms_cpu',
                module='detector.nms',
                sources=['src/nms_cpu.cpp']),
            make_cuda_ext(
                name='nms_cuda',
                module='detector.nms',
                sources=['src/nms_cuda.cpp', 'src/nms_kernel.cu']),
            make_cuda_ext(
                name='roi_align_cuda',
                module='alphapose.utils.roi_align',
                sources=['src/roi_align_cuda.cpp', 'src/roi_align_kernel.cu']),
            make_cuda_ext(
                name='deform_conv_cuda',
                module='alphapose.models.layers.dcn',
                sources=[
                    'src/deform_conv_cuda.cpp',
                    'src/deform_conv_cuda_kernel.cu'
                ]),
            make_cuda_ext(
                name='deform_pool_cuda',
                module='alphapose.models.layers.dcn',
                sources=[
                    'src/deform_pool_cuda.cpp',
                    'src/deform_pool_cuda_kernel.cu'
                ]),
        ]
    return ext_modules


def get_install_requires():
    install_requires = [
        'six', 'terminaltables', 'scipy',
        'opencv-python', 'matplotlib', 'visdom',
        'tqdm', 'tensorboardx', 'easydict',
        'pyyaml',
        'torch>=1.1.0', 'torchvision>=0.3.0',
        'munkres', 'timm==0.1.20', 'natsort',
        'opendr'
    ]
    # official pycocotools doesn't support Windows, we will install it by third-party git repository later
    #if platform.system() != 'Windows':
    #    install_requires.append('pycocotools')
    return install_requires


def is_installed(package_name):
    from pip._internal.utils.misc import get_installed_distributions
    for p in get_installed_distributions():
        if package_name in p.egg_name():
            return True
    return False


if __name__ == '__main__':
    write_version_py()
    setup(
        name='alphapose',
        version=get_version(),
        description='Code for AlphaPose',
        long_description=readme(),
        keywords='computer vision, human pose estimation',
        url='https://github.com/MVIG-SJTU/AlphaPose',
        packages=find_packages(exclude=('data', 'exp',)),
        package_data={'': ['*.json', '*.txt']},
        classifiers=[
            'Development Status :: 4 - Beta',
            'License :: OSI Approved :: Apache Software License',
            'Operating System :: OS Independent',
            'Programming Language :: Python :: 2',
            'Programming Language :: Python :: 2.7',
            'Programming Language :: Python :: 3',
            'Programming Language :: Python :: 3.4',
            'Programming Language :: Python :: 3.5',
            'Programming Language :: Python :: 3.6',
        ],
        license='GPLv3',
        python_requires=">=3",
        setup_requires=['pytest-runner', 'numpy', 'cython'],
        tests_require=['pytest'],
        install_requires=get_install_requires(),
        ext_modules=get_ext_modules(),
        cmdclass={'build_ext': BuildExtension},
        zip_safe=False)
    # Windows need pycocotools here: https://github.com/philferriere/cocoapi#subdirectory=PythonAPI
    if platform.system() == 'Windows' and not is_installed('pycocotools'):
        print("\nInstall third-party pycocotools for Windows...")
        cmd = 'python -m pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI'
        os.system(cmd)
    if not is_installed('cython_bbox'):
        print("\nInstall `cython_bbox`...")
        cmd = 'python -m pip install git+https://github.com/yanfengliu/cython_bbox.git'
        os.system(cmd)