This repository contains the scripts required to reproduce the results from the paper:
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks
Requirements: Python 3.6+ and PyTorch 1.0+.
- Clone the code
- pip install --upgrade git+https://github.com/youzhonghui/pytorch-OpCounter.git
- pip install tqdm
In the run/resnet-56 folder, we provide an example that reduces the FLOPs of ResNet-56 by 70% while maintaining 93.15% accuracy on CIFAR-10:
- run/resnet-56/resnet56_prune.ipynb prunes the network with the Tick-Tock framework.
- run/resnet-56/finetune.ipynb shows how to finetune the pruned network for better results.
If you want to run the demo code, you may need to install Jupyter Notebook.
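The Tick-Tock framework prunes by ranking filters globally across all layers rather than within each layer. The sketch below illustrates only that global-ranking idea; it is not the repo's API — FilterScore and global_prune_mask are invented names, and the keep-ratio here is a crude stand-in for a real FLOPs budget:

```python
# Hedged sketch of global filter ranking (illustrative names, not the repo's API).
from dataclasses import dataclass


@dataclass
class FilterScore:
    layer: str
    index: int
    score: float  # e.g. accumulated |gate * d(loss)/d(gate)|, as in gate-based criteria


def global_prune_mask(scores, keep_ratio):
    """Keep the highest-scoring filters across ALL layers, pruning the rest.

    `keep_ratio` is a crude proxy for a FLOPs budget: the fraction of
    filters (over the whole network) that survive.
    """
    ranked = sorted(scores, key=lambda s: s.score, reverse=True)
    keep = ranked[: max(1, int(len(ranked) * keep_ratio))]
    kept = {(s.layer, s.index) for s in keep}
    # True = filter survives, False = filter is pruned
    return {(s.layer, s.index): (s.layer, s.index) in kept for s in scores}


scores = [
    FilterScore("conv1", 0, 0.9),
    FilterScore("conv1", 1, 0.1),
    FilterScore("conv2", 0, 0.5),
    FilterScore("conv2", 1, 0.05),
]
mask = global_prune_mask(scores, 0.5)
# The two weakest filters are pruned regardless of which layer they sit in.
```

Note that a weak layer can lose many filters at once under global ranking, which is exactly what per-layer pruning cannot do.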
In the run/vgg16 folder, we provide an example executed from the command line, which reduces the FLOPs of VGG-16 by 90% (98% of parameters) while keeping 92.07% accuracy on CIFAR-10.
The instructions can be found in that folder.
In the run/load_pruned_model/ folder, we provide an example that shows how to save and load a pruned model (VGG-16 with only 0.3M floating-point parameters).
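Because pruning changes each layer's channel counts, a checkpoint must record the surviving architecture alongside the weights — loading the state dict into the original unpruned model would fail on shape mismatches. A minimal sketch of that idea, using hypothetical save_pruned/load_pruned helpers and JSON serialization rather than the repo's actual checkpoint format:

```python
# Hedged sketch: store per-layer channel counts with the weights, so the
# pruned architecture can be rebuilt before the weights are loaded.
# (save_pruned/load_pruned are illustrative, not the repo's functions.)
import json
import os
import tempfile


def save_pruned(path, channel_cfg, state):
    """Persist the surviving channel counts together with the weights."""
    with open(path, "w") as f:
        json.dump({"channels": channel_cfg, "state": state}, f)


def load_pruned(path):
    """Read the checkpoint; the caller rebuilds the model from `channels`
    first, then loads `state` into it."""
    with open(path) as f:
        blob = json.load(f)
    return blob["channels"], blob["state"]


cfg = {"conv1": 12, "conv2": 31}          # surviving filters per layer (made up)
state = {"conv1.weight": [[0.1, 0.2]]}    # toy stand-in for a real state dict
path = os.path.join(tempfile.mkdtemp(), "pruned.json")

save_pruned(path, cfg, state)
cfg2, state2 = load_pruned(path)          # round-trips both pieces
```

In a real PyTorch workflow the same two pieces would typically go through torch.save/torch.load, with the channel config driving model construction before load_state_dict is called.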
- Basic running example.
- PyTorch 1.2 compatibility test.
- Command-line execution demo.
- Save and load the pruned model.
- ResNet-50 pruned model.
If you use this code for your research, please cite our paper:
@inproceedings{zhonghui2019gate,
  title={Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks},
  author={Zhonghui You and Kun Yan and Jinmian Ye and Meng Ma and Ping Wang},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2019}
}