This is the MAL code for the paper "Multiple Anchor Learning for Visual Object Detection".
Get into MAL root folder.
- Create a conda env by
conda create -n MAL
and activate it by 'conda activate MAL'.
- Install Python libraries.
conda install ipython ninja yacs cython matplotlib tqdm
- Install PyTorch 1.1 and torchvision 0.2.1 by pip: download the whl file from https://download.pytorch.org/whl/cu90/torch_stable.html and install it with
pip install [downloaded file]
- Install pycocotools
pip install pycocotools
- Copy https://github.com/facebookresearch/maskrcnn-benchmark/tree/master/maskrcnn_benchmark to this repository.
- Build maskrcnn_benchmark by running
python setup.py build develop
(a quick sanity check for the finished installation is sketched after this list).
- Install OpenCV3.
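After these steps, a quick check like the following (a sketch, not part of the repository) helps confirm that PyTorch, OpenCV, and the compiled maskrcnn_benchmark extensions are all in place:

```python
# Sanity check for the environment; assumes the copied maskrcnn-benchmark
# code keeps its usual package layout (maskrcnn_benchmark.layers exports nms).
import torch
import torchvision
import cv2

print("torch:", torch.__version__)              # expect 1.1.x
print("torchvision:", torchvision.__version__)  # expect 0.2.1
print("OpenCV:", cv2.__version__)               # expect 3.x
print("CUDA available:", torch.cuda.is_available())

# Importing a compiled layer verifies that `python setup.py build develop`
# built the C++/CUDA extensions successfully.
from maskrcnn_benchmark.layers import nms
print("maskrcnn_benchmark extensions loaded:", nms is not None)
```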
- Go to
./demo
- Run
python image_demo.py
You can use your own image by changing the image path in image_demo.py (see the sketch below).
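If you prefer to script the demo yourself, the following is a minimal sketch based on the upstream maskrcnn-benchmark demo API (COCODemo from demo/predictor.py); whether image_demo.py uses exactly this interface, and the config/weight/image paths below, are assumptions for illustration:

```python
# Minimal demo sketch, assuming the maskrcnn-benchmark predictor interface.
# Paths below are placeholders; adjust them to your setup.
import cv2
from maskrcnn_benchmark.config import cfg
from predictor import COCODemo  # provided by the maskrcnn-benchmark demo folder

cfg.merge_from_file("../config/MAL_X-101-FPN_e2e.yaml")
cfg.merge_from_list(["MODEL.WEIGHT", "../output/models/model_0180000.pth"])

coco_demo = COCODemo(cfg, min_image_size=800, confidence_threshold=0.5)

image = cv2.imread("your_image.jpg")           # change this to your own image
result = coco_demo.run_on_opencv_image(image)  # image with boxes and labels drawn
cv2.imwrite("result.jpg", result)
```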
Get back into the MAL root folder.
For test-dev set, run
python -m torch.distributed.launch --nproc_per_node=8 tools/test_net.py --config-file ./config/MAL_X-101-FPN_e2e.yaml MODEL.WEIGHT ./output/models/model_0180000.pth DATASETS.TEST "('coco_test-dev',)"
For val set, run
python -m torch.distributed.launch --nproc_per_node=8 tools/test_net.py --config-file ./config/MAL_X-101-FPN_e2e.yaml MODEL.WEIGHT ./output/models/model_0180000.pth
Expected result: mAP = 47.0 on test-dev.
Pre-trained models:
- ResNet50: https://share.weiyun.com/5kcZju5
- ResNet101: https://share.weiyun.com/5gtr6Ho
- ResNext101: https://share.weiyun.com/oUZUWfSx
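To test with one of these checkpoints, pass its local path as MODEL.WEIGHT in the commands above. If you want to verify a downloaded file first, a small inspection script like the one below works; treating the checkpoint as a maskrcnn-benchmark Checkpointer dict (with a "model" key) is an assumption about these files.

```python
# Sketch: inspect a downloaded checkpoint before testing.
# Assumes the maskrcnn-benchmark Checkpointer format, i.e. a dict that may
# contain a "model" state dict plus training metadata; adjust the path as needed.
import torch

ckpt = torch.load("./output/models/model_0180000.pth", map_location="cpu")
if isinstance(ckpt, dict) and "model" in ckpt:
    print("top-level keys:", list(ckpt.keys()))
    state_dict = ckpt["model"]
else:
    state_dict = ckpt
print("number of parameter tensors:", len(state_dict))
```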