Our codebase is a bit messy; we'll clean it up after the NSDI fall deadline 😆
Here are the steps to run our code. We assume you start from a directory called `$DIR` and that your working machine contains an NVIDIA GPU.
Please refer to INSTALL.md for how to set up the environment.
Run `cd $DIR/AccMPEG` to enter the AccMPEG repo, then run `python generate_mpeg_curve.py`. This script generates the data points for the AWStream baseline.
(Note: this will take a while, please wait.)
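For intuition, this baseline sweep mostly amounts to re-encoding the same video at several qualities and measuring the resulting accuracy and size. Below is a minimal sketch of that idea, assuming `ffmpeg` is on your PATH; the path and QP values are made up, and `generate_mpeg_curve.py` remains the authoritative version (it also runs inference to score accuracy):

```python
import subprocess

# Hypothetical input video; generate_mpeg_curve.py reads its own args.inputs.
VIDEO = "artifact/example.mp4"

# Encode at several H.264 quantization parameters (QPs); lower QP means
# higher quality and a larger file. Each encoding yields one point on the
# AWStream-style quality curve.
for qp in [24, 30, 36, 42]:
    subprocess.run(
        ["ffmpeg", "-y", "-i", VIDEO,
         "-c:v", "libx264", "-qp", str(qp),
         f"artifact/example_qp{qp}.mp4"],
        check=True,
    )
```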
Then run `python batch_blackgen_roi.py` to run AccMPEG.
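For intuition on what this step does (a loose reading based on the script's name, not its code): AccMPEG spends encoding quality only where it matters, and blacking out pixels outside a region-of-interest mask means the codec spends almost no bits on them. A minimal sketch with made-up frame size and mask; `batch_blackgen_roi.py` computes the real masks from the video and the analytics model:

```python
import numpy as np

# Hypothetical 720p frame and ROI mask.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
mask = np.zeros((720, 1280), dtype=bool)
mask[200:400, 500:900] = True            # hypothetical region of interest

# Zero out (black out) everything outside the ROI before encoding.
masked = frame * mask[:, :, None]
```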
Run `cd artifact/` to enter the artifact folder, then run `python plot.py` to plot the delay-accuracy trade-off. The results are written to `delay-accuracy.jpg`. Here are the results (generated from the stats file we used to generate the figures in our paper).
This plot replicates the experiments of Figure 7 (a better accuracy-delay trade-off than the baselines across a wide range of video contents and video analytics models) on one video and one baseline (AWStream). We pick AWStream as the representative baseline because it is both broadly applicable (the DDS and EAAR baselines do not apply when the video analytics model has no region-proposal module) and competitive (the accuracy of the Vigil baseline is significantly lower than the other approaches, and its accuracy-delay trade-off is worse than AWStream's).
Note that the exact numbers may vary. Here is one figure we reproduced on a different server with different ffmpeg/CUDA/torch/torchvision versions.
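If you want to restyle the figure, the plot is essentially one delay-accuracy scatter per method. Here is a minimal sketch, assuming the stats have already been reduced to (delay, accuracy) pairs; the numbers below are made up, and `plot.py` reads the repo's own stats files:

```python
import matplotlib.pyplot as plt

# Hypothetical (delay in seconds, accuracy) pairs per method.
points = {
    "AccMPEG": [(0.9, 0.87), (1.1, 0.89)],
    "AWStream": [(1.4, 0.80), (2.0, 0.86)],
}

# One scatter series per method, sharing the same axes.
for label, pts in points.items():
    xs, ys = zip(*pts)
    plt.scatter(xs, ys, label=label)

plt.xlabel("Delay (s)")
plt.ylabel("Accuracy")
plt.legend()
plt.savefig("delay-accuracy.jpg")
```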
We put all the videos we used for object detection into the `artifact/` folder. To run these videos:
- Extract all the videos to PNGs through `extract.py` inside the folder (see the sketch after this list).
- Edit `args.inputs` in `generate_mpeg_curve.py` and run the script to generate the AWStream baseline on these videos.
- Edit `v_list` in `batch_blackgen_roi.py` and run the script to run AccMPEG.
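As noted above, here is a minimal sketch of the frame-extraction step, assuming `ffmpeg` is on your PATH; the video path is made up, and `extract.py` in the folder is the authoritative version:

```python
import subprocess
from pathlib import Path

# Hypothetical video; extract.py iterates over the videos in artifact/.
video = Path("artifact/example.mp4")
out_dir = video.with_suffix("")          # e.g. artifact/example/
out_dir.mkdir(exist_ok=True)

# Decode every frame into a zero-padded PNG sequence: 0000.png, 0001.png, ...
subprocess.run(
    ["ffmpeg", "-y", "-i", str(video),
     "-start_number", "0", str(out_dir / "%04d.png")],
    check=True,
)
```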