This is the PyTorch implementation for the paper:
Deformable Siamese Attention Networks for Visual Object Tracking;
Yuechen Yu, Yilei Xiong, Weilin Huang, Matthew R. Scott
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
The full paper is available at: CVF and arXiv.
Our code is based on the PySOT repository. You may refer to the original README.md of PySOT.
Please refer to INSTALL.md for installation.
Set the environment variable PYTHONPATH as follows.
export PYTHONPATH=/path/to/siamattn:$PYTHONPATH
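The export only applies to the current shell session. If you want the variable to persist, one option, assuming a bash shell (other shells use a different startup file), is to append the export to ~/.bashrc:
echo 'export PYTHONPATH=/path/to/siamattn:$PYTHONPATH' >> ~/.bashrc  # append to the bash startup file
source ~/.bashrc  # reload it in the current shell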
Download the datasets and put them into the testing_dataset directory. JSON annotation files of commonly used datasets can be downloaded from Google Drive or BaiduYun. If you want to test the tracker on a new dataset, please refer to pysot-toolkit.
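Following the PySOT convention, each dataset directory is assumed to contain its JSON file alongside the video sequences; a sketch of the expected layout (exact contents depend on the dataset):
testing_dataset/
├── VOT2018
│   ├── VOT2018.json
│   └── ...            # video sequences
└── OTB100
    ├── OTB100.json
    └── ...            # video sequences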
Our models are provided on Google Drive. Download the models into the working directory experiments/siamattn. The file structure is supposed to be as follows.
experiments/siamattn/
├── checkpoint
│ ├── checkpoint_otb100.pth
│ └── checkpoint_vot2018.pth
└── config
├── config_otb100.yaml
└── config_vot2018.yaml
Change the working directory with the following command.
cd /path/to/repo/experiments/siamattn
The testing and evaluation commands described below assume this working directory.
Test the model with the corresponding config file.
python -u ../../tools/test.py \
--snapshot checkpoint/checkpoint_vot2018.pth \ # model path
--dataset VOT2018 \ # dataset name
--config config/config_vot2018.yaml # config file
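Analogously, to test the OTB100 model with the checkpoint and config listed in the file structure above (assuming the dataset name OTB100 matches the pysot-toolkit naming):
python -u ../../tools/test.py \
--snapshot checkpoint/checkpoint_otb100.pth \ # model path
--dataset OTB100 \ # dataset name
--config config/config_otb100.yaml # config file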
Testing results will be saved in the results/$dataset/$checkpoint_name directory.
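For the VOT2018 command above, for example, the results would be written under results/VOT2018/checkpoint_vot2018/ (assuming, as in PySOT, that the checkpoint name is taken from the snapshot filename without its extension).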
Note: The results used in our paper can be downloaded from Google Drive.
Evaluate the model based on the testing results.
python ../../tools/eval.py \
--tracker_path ./results \ # result path
--dataset VOT2018 \ # dataset name
--num 1 \ # number of threads for evaluation
--tracker_prefix 'checkpoint' # tracker_name_prefix
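The same script can evaluate the OTB100 results, assuming they were produced by the OTB100 test command sketched above:
python ../../tools/eval.py \
--tracker_path ./results \ # result path
--dataset OTB100 \ # dataset name
--num 1 \ # number of threads for evaluation
--tracker_prefix 'checkpoint' # tracker_name_prefix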
Please cite our paper if this implementation helps your research. The BibTeX reference is shown below.
@inproceedings{yu2020deformable,
title={Deformable Siamese Attention Networks for Visual Object Tracking},
author={Yu, Yuechen and Xiong, Yilei and Huang, Weilin and Scott, Matthew R},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={6728--6737},
year={2020}
}
For any questions, please feel free to reach out to the authors.
SiamAttn is CC-BY-NC 4.0 licensed, as found in the LICENSE file. It is released for academic research / non-commercial use only. If you wish to use it for commercial purposes, please contact [email protected].