Intel® Low Precision Optimization Tool v1.5 Release
The Intel® Low Precision Optimization Tool v1.5 release includes the following features:
- Add pattern-lock sparsity algorithm for NLP fine-tuning tasks (see the sketch after this list)
  - Up to 70% unstructured sparsity and 50% structured sparsity with <2% accuracy loss on 5 BERT fine-tuning tasks
- Add NLP head pruning algorithm for HuggingFace models
  - Up to 3.0x performance speedup within 1.5% accuracy loss on HuggingFace BERT SST-2
- Support model optimization pipeline
- Integrate SigOpt with multi-metrics optimization
  - Complements the basic strategy to speed up tuning
- Support TensorFlow 2.5, PyTorch 1.8, and ONNX Runtime 1.8
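For context, pattern-lock pruning fixes the zero pattern of an already-sparse model while its weights are fine-tuned. The snippet below is a minimal PyTorch-style sketch of that idea only; it is not LPOT's actual API, and `capture_masks`/`apply_masks` are illustrative helper names.

```python
import torch

def capture_masks(model: torch.nn.Module) -> dict:
    """Record the zero pattern of an already-sparse model; this pattern is 'locked'."""
    return {name: (param != 0).float()
            for name, param in model.named_parameters()
            if param.dim() > 1}

def apply_masks(model: torch.nn.Module, masks: dict) -> None:
    """Re-zero the locked positions so fine-tuning cannot regrow pruned weights."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Sketch of a fine-tuning step with the pattern locked
# (model, loader, and optimizer are assumed to exist):
#
#   masks = capture_masks(model)
#   for batch in loader:
#       loss = model(**batch).loss
#       loss.backward()
#       optimizer.step()
#       optimizer.zero_grad()
#       apply_masks(model, masks)   # keep the original sparsity pattern
```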
Validated Configurations:
- Python 3.6 & 3.7 & 3.8 & 3.9
- CentOS 8.3 & Ubuntu 18.04
- Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
- PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
- MXNet 1.6.0, 1.7.0
- ONNX Runtime 1.6.0, 1.7.0, 1.8.0
Distribution:
Channel | Distribution | Links | Install Command
---|---|---|---
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel
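Whichever channel is used, a quick import check verifies the installation (a minimal sketch; it assumes the `lpot` package exposes a `__version__` attribute):

```python
import lpot  # assumes lpot defines __version__; expect 1.5 for this release
print(lpot.__version__)
```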
Contact:
Please feel free to contact [email protected] if you have any questions.