Still under construction. Remaining work:
- k-means quantization
- finetuning the quantized model on CIFAR
- freezing the weights and training only the scale factor after quantization (see the sketch after this list)
- finetuning the quantized model on ImageNet
- expansion and compression of the weights
- depth-wise convolution quantization
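
To make the scale-factor item concrete: after quantization the integer weights are frozen as buffers and only a per-layer scale remains trainable. A minimal PyTorch sketch of that idea follows; `ScaledQuantLinear` and every name in it are illustrative, not this repo's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledQuantLinear(nn.Module):
    """Linear layer with frozen integer weights and a trainable scale factor."""

    def __init__(self, linear: nn.Linear, weight_bits: int = 4):
        super().__init__()
        qmax = 2 ** (weight_bits - 1) - 1
        scale = linear.weight.detach().abs().max() / qmax
        # Quantized weights are stored as a buffer, so they receive no gradient.
        self.register_buffer("w_int", torch.round(linear.weight.detach() / scale))
        # The scale factor is the only trainable parameter left in this layer.
        self.scale = nn.Parameter(scale)
        self.register_buffer("bias", None if linear.bias is None
                             else linear.bias.detach())

    def forward(self, x):
        return F.linear(x, self.w_int * self.scale, self.bias)

# Finetuning then optimizes only the scale factors, e.g.:
# optimizer = torch.optim.SGD((m.scale for m in model.modules()
#                              if isinstance(m, ScaledQuantLinear)), lr=1e-3)
```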
evaluate on ImageNet (distributed):
python main_imgnet_training.py ~/workspace/dataset/torch_imagenet/CLS-LOC/ -a resnet --pretrained --resume ./R-50-GN-WS.pth.tar -e --dist-url 'tcp://127.0.0.1:8888' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0
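
In that command, `-e` runs evaluation only, and `--multiprocessing-distributed` with `--world-size 1 --rank 0` spawns one process per local GPU, following the flag conventions of the official PyTorch ImageNet example, from which this script appears to be derived.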
train AlexNet on CIFAR-10 with 8-bit activation quantization:
python train_cifar.py -a alexnet --epochs 164 --schedule 81 122 --gamma 0.1 --lr 0.01 --a_quant --a_bit 8 --checkpoint checkpoints/cifar10/alexnet_act8
finetune from a full-precision AlexNet checkpoint:
python train_cifar.py -a alexnet --epochs 164 --schedule 81 122 --gamma 0.1 --lr 0.01 --checkpoint checkpoints/cifar10/alexnet_dorefa --resume checkpoints/cifar10/alexnet/model_best.pth.tar
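
`--a_quant --a_bit 8` turns on activation quantization, and the `alexnet_dorefa` checkpoint name suggests DoReFa-style quantizers. A minimal sketch of what a k-bit DoReFa activation quantizer computes, assuming the usual straight-through gradient (all names here are illustrative):

```python
import torch

class QuantizeActivation(torch.autograd.Function):
    """Round activations to 2**k - 1 uniform levels in [0, 1]."""

    @staticmethod
    def forward(ctx, x, k):
        n = 2 ** k - 1
        return torch.round(torch.clamp(x, 0, 1) * n) / n

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: the rounding is treated as identity.
        return grad_output, None

def a_quant(x, a_bit=8):
    return QuantizeActivation.apply(x, a_bit)
```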
fold the weight normalization into the stored weights before quantization (this appears to produce the `-flod` checkpoint used below):
python save_WN_as_norm.py
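
Under that reading, the folding step rewrites each conv weight with its weight-standardized value, so the quantizer below sees the effective weights. A sketch under that assumption (only the file names come from the commands here; everything else is illustrative):

```python
import torch

ckpt = torch.load("R-50-GN-WS.pth.tar", map_location="cpu")
state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt

for name, w in state.items():
    if w.dim() == 4:  # conv weights, shaped (out, in, kh, kw)
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        state[name] = (w - mean) / std  # fold the WS transform into the weights

torch.save(ckpt, "R-50-GN-WS-flod.pth.tar")
```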
kmeans quantization:
python weightquant.py --input_ckpt ./R-50-GN-WS-flod.pth.tar --quant_method kmeans --output_ckpt ./R-50-GN-WS-q --weight_bits 4 --n_sample 10000
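
In miniature, k-means quantization clusters a layer's weights into 2**weight_bits centroids and snaps every weight to its nearest centroid; `--n_sample` plausibly caps how many weights are used to fit the clusters. A sketch of the idea, not the actual weightquant.py:

```python
import torch
from sklearn.cluster import KMeans

def kmeans_quantize(w: torch.Tensor, weight_bits: int = 4, n_sample: int = 10000):
    flat = w.detach().cpu().reshape(-1, 1)
    # Fit the centroids on a random subsample to keep k-means cheap.
    idx = torch.randperm(flat.shape[0])[:n_sample]
    km = KMeans(n_clusters=2 ** weight_bits, n_init=1).fit(flat[idx].numpy())
    # Snap every weight to its nearest centroid.
    labels = torch.from_numpy(km.predict(flat.numpy())).long()
    centroids = torch.from_numpy(km.cluster_centers_.astype("float32"))
    return centroids[labels].reshape(w.shape)
```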
linear quantization:
python weightquant.py --input_ckpt ./R-50-GN-WS-flod.pth.tar --quant_method linear --output_ckpt ./R-50-GN-WS-q --weight_bits 4
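
Linear quantization, by contrast, needs no clustering; a single symmetric max-abs scale per tensor is enough, roughly as follows (again a sketch of the idea, not the script itself):

```python
import torch

def linear_quantize(w: torch.Tensor, weight_bits: int = 4):
    qmax = 2 ** (weight_bits - 1) - 1   # e.g. 7 for 4-bit weights
    scale = w.abs().max() / qmax        # symmetric max-abs scale
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
```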
supported architectures:
* AlexNet
* VGG (Imported from pytorch-cifar)
* ResNet
* ResNeXt (Imported from ResNeXt.pytorch)
* Wide Residual Networks (Imported from WideResNet-pytorch)
* DenseNet