By Yihui He (Xi'an Jiaotong University), Xiangyu Zhang and Jian Sun (Megvii)
ICCV 2017
In this repository, we demonstrate 4X channel pruning of VGG-16 with our 3C method. After finetuning, the Top-5 accuracy is 89.9%, with no performance degradation.
Figure: structured simplification methods; channel pruning corresponds to (d).
If you find the code useful in your research, please consider citing:
@article{he2017channel,
  title={Channel Pruning for Accelerating Very Deep Neural Networks},
  author={He, Yihui and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1707.06168},
  year={2017}
}
- Python3 packages you might not have: scipy, sklearn, easydict (a pip install sketch follows this list)
- For finetuning with a batch size of 128, 4 GPUs (~11G of memory) are needed
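If any of these packages are missing, they can typically be installed with pip. A minimal sketch, assuming a standard Python 3 environment (the sklearn module is provided by the scikit-learn package):

```bash
# Install the Python dependencies listed above (scikit-learn provides the sklearn module)
pip3 install scipy scikit-learn easydict
```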
- Clone the repository
# Make sure to clone with --recursive
git clone --recursive https://github.com/yihui-he/channel-pruning.git
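If the repository was already cloned without --recursive, the submodules (including the Caffe fork) can still be fetched afterwards with standard git commands:

```bash
# Fetch submodules for an existing clone that was made without --recursive
git submodule update --init --recursive
```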
- Build my Caffe fork
cd caffe
# If you're experienced with Caffe and have all of the requirements installed, then simply do:
make -j8 && make pycaffe
# Or follow the Caffe installation instructions here:
# http://caffe.berkeleyvision.org/installation.html
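If this is your first Caffe build, the usual first step (standard Caffe practice, assuming this fork keeps the stock build layout) is to create a build configuration from the shipped example and adjust it for your system before running the make commands above:

```bash
# Create a build configuration from Caffe's example, then edit it for your CUDA/cuDNN and Python setup
cp Makefile.config.example Makefile.config
```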
- Download the ImageNet classification dataset: http://www.image-net.org/download-images
  Specify the ImageNet source path in temp/vgg.prototxt (lines 12 and 36); see the sketch below.
  For fast testing, you can directly download the pruned model instead; see the Testing step below.
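To quickly locate the two entries to edit, you can print those lines; this is just a convenience sketch, assuming the line numbers above still match the prototxt shipped in the repository:

```bash
# Show lines 12 and 36 of the prototxt, which should contain the ImageNet source paths to edit
sed -n '12p;36p' temp/vgg.prototxt
```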
- Download the original VGG-16 model from http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel and move it to temp/vgg.caffemodel (or create a softlink instead); a download sketch follows.
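A possible download sketch, assuming wget is available and using the filename from the URL above:

```bash
# Download the original VGG-16 weights and expose them as temp/vgg.caffemodel via a softlink
wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
ln -s "$(pwd)/VGG_ILSVRC_16_layers.caffemodel" temp/vgg.caffemodel
```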
- Start Channel Pruning
python3 train.py -action c3 -caffe [GPU0]
# or log it with ./run.sh:
./run.sh python3 train.py -action c3 -caffe [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2
- Combine some factorized layers for further compression, and calculate the acceleration ratio
./combine.sh | xargs ./calflop.sh
- Finetuning
./finetune.sh [GPU0,GPU1,GPU2,GPU3] # replace [GPU0,GPU1,GPU2,GPU3] with actual GPU device like 0,1,2,3
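For example, with GPU devices 0 through 3 (hypothetical device IDs):

```bash
# Finetune the pruned model on GPUs 0,1,2,3
./finetune.sh 0,1,2,3
```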
- Testing
  Though testing is done while finetuning, you can test anytime with:
caffe test -model path/to/prototxt -weights path/to/caffemodel -iterations 5000 -gpu [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2
For fast testing, you can directly download the pruned model from the release: https://github.com/yihui-he/channel-pruning/releases/download/VGG-16_3C4x/channel_pruning_VGG-16_3C4x.zip
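A sketch for fetching and unpacking the release archive, assuming it contains the prototxt and caffemodel used by the test command below:

```bash
# Download and unpack the pruned VGG-16 3C 4x model from the GitHub release
wget https://github.com/yihui-he/channel-pruning/releases/download/VGG-16_3C4x/channel_pruning_VGG-16_3C4x.zip
unzip channel_pruning_VGG-16_3C4x.zip
```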
Test with:
caffe test -model channel_pruning_VGG-16_3C4x.prototxt -weights channel_pruning_VGG-16_3C4x.caffemodel -iterations 5000 -gpu [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2