This repo contains the PyTorch implementation for the paper AMC: AutoML for Model Compression and Acceleration on Mobile Devices.
Reference
If you find the repo useful, please kindly cite our paper:
Other papers related to automated model design:
HAQ: Hardware-Aware Automated Quantization with Mixed Precision (CVPR 2019)
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR 2019)
Training AMC
The current code base supports automated pruning of MobileNet on ImageNet. Pruning MobileNet consists of 3 steps: 1. strategy search; 2. export the pruned weights; 3. fine-tune from the pruned weights.
To conduct the full pruning procedure, follow the instructions below (results might vary slightly from the paper due to a different random seed):
Strategy Search
To search for a pruning strategy on the MobileNet ImageNet model, first download the pretrained MobileNet ImageNet checkpoint by running:
It will also download our 50% FLOPs compressed model. Then run the following script to search under the 50% FLOPs constraint:
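As a rough, self-contained sketch of what the 50% FLOPs constraint means for a per-layer preserve-ratio strategy (the layer shapes and ratios below are illustrative assumptions, not the repo's actual search code, which uses a DDPG agent to propose the ratios):

```python
# Minimal sketch: evaluate a candidate per-layer preserve-ratio strategy
# against a 50% FLOPs budget. Layer configs are illustrative, not the
# exact MobileNet-V1 shapes used by the repo.

# (in_channels, out_channels, kernel_size, output_h, output_w) for a few conv layers
layers = [
    (32, 64, 1, 112, 112),
    (64, 128, 1, 56, 56),
    (128, 128, 1, 56, 56),
]

def conv_flops(c_in, c_out, k, h, w):
    # multiply-accumulates of a standard convolution
    return c_in * c_out * k * k * h * w

def total_flops(layers, ratios):
    # ratios[i] is the fraction of output channels kept in layer i;
    # a layer's effective input width is whatever the previous layer kept.
    flops, prev_keep = 0, 1.0
    for (c_in, c_out, k, h, w), r in zip(layers, ratios):
        flops += conv_flops(int(c_in * prev_keep), int(c_out * r), k, h, w)
        prev_keep = r
    return flops

baseline = total_flops(layers, [1.0] * len(layers))
candidate = [0.7, 0.6, 0.65]  # a strategy the search agent might propose
ratio = total_flops(layers, candidate) / baseline
print(f"candidate uses {ratio:.1%} of baseline FLOPs "
      f"-> {'meets' if ratio <= 0.5 else 'violates'} the 50% budget")
```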
Results may differ due to a different random seed. The strategy we found and reported in the paper is:
Export the Pruned Weights
After searching, we need to export the pruned weights by running:
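As a hedged illustration of the general idea behind exporting (magnitude-based channel selection applied layer by layer; the repo's actual ranking criterion and handling of depthwise/BatchNorm layers may differ), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

def prune_conv_out_channels(conv, keep_ratio):
    """Return a new Conv2d keeping the highest-L1-norm output channels.

    Sketch only: the repo's export script may use a different criterion
    and also rewires the layers that consume this layer's output.
    """
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter as an importance score
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep_idx = torch.argsort(importance, descending=True)[:n_keep]
    new_conv = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep_idx].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep_idx].clone()
    return new_conv

# Example: shrink a 3x3 conv from 64 to 32 output channels and save it.
conv = nn.Conv2d(32, 64, 3, padding=1)
pruned = prune_conv_out_channels(conv, keep_ratio=0.5)
torch.save(pruned.state_dict(), 'pruned_conv.pth')
```

A full export must also prune the matching input channels of the following layer and the associated BatchNorm parameters so that the exported network is consistent end to end.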
We also need to modify the MobileNet definition to support the new pruned model (this is already done in `models/mobilenet.py`).
Fine-tune from Pruned Weights
After exporting, we need to fine-tune from the pruned weights. For example, we can fine-tune with a cosine learning rate schedule for 150 epochs by running:
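The snippet below is a minimal, hedged sketch of such a fine-tuning loop using standard PyTorch APIs; the model, data loader, and hyperparameters (initial learning rate, momentum, weight decay) are placeholders, not the repo's settings:

```python
import torch
import torch.nn as nn

def finetune(model, train_loader, epochs=150, lr=0.05, device='cuda'):
    # SGD with cosine annealing of the learning rate over the whole run.
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=4e-5)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    for epoch in range(epochs):
        model.train()
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()  # one cosine step per epoch
```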
AMC Compressed Model
We also provide the models and weights compressed by our AMC method: compressed MobileNet-V1 and MobileNet-V2 are available in both PyTorch and TensorFlow formats here.
Detailed statistics are as follows:
Models | Top1 Acc (%) | Top5 Acc (%) |
---|---|---|
MobileNetV1-width*0.75 | 68.4 | 88.2 |
MobileNetV1-50%FLOPs | 70.494 | 89.306 |
MobileNetV1-50%Time | 70.200 | 89.430 |
MobileNetV2-width*0.75 | 69.8 | 89.6 |
MobileNetV2-70%FLOPs | 70.854 | 89.914 |
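As a minimal sketch of using the released PyTorch weights (the checkpoint file name, the `MobileNet` class name, and the wrapping of the state dict are assumptions; use the actual files and constructor from the released archive and `models/mobilenet.py`):

```python
import torch
from models.mobilenet import MobileNet  # class name is an assumption

# Hypothetical checkpoint path; substitute the file shipped in the archive.
ckpt = torch.load('mobilenet_50flops.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # some checkpoints wrap the weights

model = MobileNet()  # constructor arguments must match the pruned architecture
model.load_state_dict(state_dict)
model.eval()
```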
Dependencies
The current code base is tested under the following environment:
- Python 3.7.3
- PyTorch 1.1.0
- torchvision 0.2.1
- NumPy 1.14.3
- SciPy 1.1.0
- scikit-learn 0.19.1
- ImageNet dataset
Contact
To contact the authors:
Ji Lin, jilin@mit.edu
Song Han, songhan@mit.edu