Making Anomalies More Anomalous: Video Anomaly Detection Using a Novel Generator and Destroyer (IEEE Access)
This repository is the official open-source implementation of
"Making Anomalies More Anomalous: Video Anomaly Detection Using a Novel Generator and Destroyer"
by Seungkyun Hong*, Sunghyun Ahn*, Youngwan Jo, and Sanghyun Park. (*equal contribution)
- [2024/02/26] MAMA codes & weights are released!
- [2024/02/26] Our MAMA paper has been published in IEEE Access!
This new Generator excels at predicting normal frames but struggles with abnormal ones. It includes a module in the bottleneck that transforms frame features into label and motion features, reducing its ability to generate abnormal frames. When a frame containing abnormal objects or behavior is input, the transformation from frame features to the other feature types becomes difficult, incurring a penalty when predicting future frames.
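The bottleneck transformation can be sketched as a small PyTorch module. This is a minimal illustration of the idea only, not the authors' exact architecture; the layer shapes and names (`F2LMBlock`, `to_label`, `to_motion`) are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class F2LMBlock(nn.Module):
    """Illustrative bottleneck block: maps frame features to
    label-like and motion-like features (hypothetical sizes)."""
    def __init__(self, channels=64):
        super().__init__()
        self.to_label = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_motion = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, frame_feat):
        # For abnormal inputs this mapping is hard to fit at training
        # time, which degrades the predicted future frame.
        return self.to_label(frame_feat), self.to_motion(frame_feat)

feat = torch.randn(1, 64, 32, 32)          # dummy bottleneck feature map
label_feat, motion_feat = F2LMBlock()(feat)
```

During training, auxiliary losses on `label_feat` and `motion_feat` would penalize frames whose features cannot be transformed cleanly.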
The Destroyer takes a generated future frame as input, detects low-quality regions, and destroys them, enhancing the abnormality of the output. Because the training data contains no abnormal frames, we trained the Destroyer with self-supervised learning.
MAMA is a two-stage video anomaly detection method: unsupervised learning for the F2LM Generator and self-supervised learning for the Destroyer. The two models are optimized separately.
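At test time, prediction-based methods of this kind typically score each frame by the quality of its predicted future frame (e.g. PSNR), then min-max normalize per video. A minimal numpy sketch of that common scoring scheme follows; the exact normalization used by MAMA is an assumption here.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio between predicted and real future frame.
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def anomaly_scores(psnrs):
    # Min-max normalize over the video; lower PSNR -> higher anomaly score.
    p = np.asarray(psnrs, dtype=np.float64)
    return 1.0 - (p - p.min()) / (p.max() - p.min())

rng = np.random.default_rng(0)
gt = rng.random((4, 8, 8))
good = gt + 0.01 * rng.standard_normal(gt.shape)  # "normal": small error
bad = gt + 0.30 * rng.standard_normal(gt.shape)   # "abnormal": large error
scores = anomaly_scores([psnr(good[i], gt[i]) for i in range(4)] +
                        [psnr(bad[i], gt[i]) for i in range(4)])
```

Destroying low-quality regions lowers the PSNR of abnormal frames further, which is what widens the score gap described below.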
AUC and EER comparison with state-of-the-art methods on UCSD Ped2, CUHK Avenue, and Shanghai Tech.
Best results are in bold; second-best results are underlined. We compare our model with prominent papers published from 2018 to 2023.
The Destroyer model enhances abnormality by destroying abnormal areas, resulting in a larger Anomaly Score gap between normal and abnormal data and a higher AUC.
- python >= 3.8
- torch == 1.11.0+cu113
- torchvision == 0.12.0+cu113
- scikit-learn == 1.0.2
- tensorboardX
- opencv-python
- matplotlib
- einops
- timm
- scipy
- Other common packages
- You can specify the dataset path by editing `data_root` in `config.py`.
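For reference, the relevant setting looks roughly like the excerpt below; the value shown is a placeholder, and the real `config.py` may define additional options.

```python
# config.py (illustrative excerpt)
# Parent folder containing the ped2 / avenue / shanghai datasets.
data_root = '/path/to/your/datasets/'
```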
UCSD Ped2 | CUHK Avenue | Shanghai Tech. |
---|---|---|
Google Drive | Github Page | Github Page |
- Navigate to the `F2LM_Generator` directory and enter one of the following commands.
- When training starts, the `tensorboard_log`, `weights`, and `results` folders are created automatically.
- All saved models are located in the `weights` folder.
- You can set `dataset_name` to one of the following choices: ped2, avenue, shanghai.
```shell
# default option for generator training
python train.py --dataset={dataset_name}

# change 'seed'
python train.py --dataset={dataset_name} --manualseed=50

# change 'max iteration'
python train.py --dataset={dataset_name} --iters=60000

# change 'model save interval'
python train.py --dataset={dataset_name} --save_interval=10000

# change 'validation interval'
python train.py --dataset={dataset_name} --val_interval=1000

# continue training from the latest model
python train.py --dataset={dataset_name} --resume=latest_{dataset_name}
```
- Navigate to the `Destroyer` directory and enter the following command.
- Before training, save the pretrained Generator weights in the `Destroyer/weights` directory.
- When training the Destroyer, we set `iters` to 15,000 and `val_interval` to 100.
```shell
# destroyer training with the pre-trained generator model
python train.py --dataset={dataset_name} --resume=g_best_auc_{dataset_name} --iters=15000 --val_interval=100
```
- Tensorboard visualization

```shell
# check losses and PSNR while training
tensorboard --logdir=tensorboard_log/{dataset_name}_bs{batch_size}
```
- `g_best_auc_{dataset_name}.pth` contains only the Generator weights.
```shell
# recommended command for generator evaluation
python eval.py --dataset={dataset_name} --trained_model=g_best_auc_{dataset_name} --show_status=True
```
- `a_best_auc_{dataset_name}.pth` contains both the Generator and Destroyer weights.
```shell
# recommended command for destroyer evaluation
python eval.py --dataset={dataset_name} --trained_model=a_best_auc_{dataset_name} --show_status=True
```
- Refer to the PyTorch tutorial and pre-download the `deeplabv3_resnet101` model to your environment.
- Download the FlowNetv2 weights and put them under the `F2LM_Generator/pretrained_flownet` and `Destroyer/pretrained_flownet` folders.
- Create a `weights` folder and put the pre-trained model weights in it.
- If your environment differs from the versions specified in Dependencies, the AUC may differ.
DeepLabv3 | FlowNetv2 | Ours |
---|---|---|
PyTorch Tutorial | Google Drive | Google Drive |
We observed a 1.6% performance improvement on the UCSD Ped2 dataset by applying a 1-D Gaussian filter to our model's anomaly scores. However, we chose not to include these filtered results in the paper's comparisons, to keep them fair.
- You can evaluate performance by using the following command.
```shell
# recommended command for destroyer evaluation with a 1-D gaussian filter
python eval.py --dataset={dataset_name} --trained_model=a_best_auc_{dataset_name} --gaussian=True --show_status=True
```
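Conceptually, the filter just smooths the per-frame anomaly score curve, suppressing single-frame noise spikes. A self-contained numpy sketch (equivalent in spirit to `scipy.ndimage.gaussian_filter1d`; the `sigma` and `radius` values are illustrative, not the repository's defaults):

```python
import numpy as np

def gaussian_smooth(scores, sigma=3, radius=9):
    # Build a normalized 1-D Gaussian kernel.
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    # Edge-pad so the output has the same length as the input.
    padded = np.pad(np.asarray(scores, dtype=np.float64), radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')

raw = np.array([0.1, 0.9, 0.1, 0.1, 0.8, 0.1])  # noisy per-frame scores
smooth = gaussian_smooth(raw)
```

Smoothing reduces frame-to-frame score variance, which can raise AUC when isolated noisy frames would otherwise be misranked.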
If you use our work, please consider citing:
```bibtex
@article{Hong2024MakingAM,
  title={Making Anomalies More Anomalous: Video Anomaly Detection Using a Novel Generator and Destroyer},
  author={Seungkyun Hong and Sunghyun Ahn and Youngwan Jo and Sanghyun Park},
  journal={IEEE Access},
  year={2024},
  volume={12},
  pages={36712-36726},
}
```
Should you have any questions, please create an issue in this repository or contact me at [email protected].