
Intel® Neural Compressor v1.9 Release

Released by @ftian1 on 04 Jan 01:56 · commit 768c49e

Features

  • Knowledge distillation

    • Supported one-shot compression pipelines (knowledge distillation during quantization-aware training) on PyTorch; see the first sketch after this list
    • Added more distillation examples on TensorFlow and PyTorch
  • Quantization

    • Supported multi-objective tuning for quantization; see the second sketch after this list
    • Supported Intel Extension for PyTorch v1.10
    • Improved quantization-aware training support on PyTorch v1.10
  • Pruning

    • Added more magnitude pruning examples on TensorFlow; see the third sketch after this list
  • Reference bare-metal examples

    • Supported BF16 optimizations on NLP models
    • Added sparse DLRM model (experimental)
  • Productivity

    • Added a Python API as an alternative to the YAML configuration file
    • Made user-facing APIs more Pythonic
  • Ecosystem

    • Integrated pruning API into HuggingFace Optimum
    • Added ssd-mobilenetv1, efficientnet, ssd, fcn_rn50, inception_v1 quantized models to ONNX Model Zoo
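
A generic sketch of the one-shot idea referenced above: a distillation loss applied while the student trains with fake quantization (quantization-aware training). It uses plain PyTorch eager-mode QAT rather than the Neural Compressor pipeline itself, and the toy model, random data, temperature, and loss weight are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Tiny classifier with quant/dequant stubs for eager-mode QAT."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = F.relu(self.fc1(x))
        return self.dequant(self.fc2(x))

teacher = Net().eval()      # stands in for a pretrained full-precision teacher
student = Net().train()
student.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
student = torch.quantization.prepare_qat(student)  # insert fake-quant modules

opt = torch.optim.SGD(student.parameters(), lr=0.01)
T, alpha = 4.0, 0.9         # distillation temperature and soft-loss weight

for _ in range(10):         # toy loop on random data
    x = torch.randn(8, 16)
    y = torch.randint(0, 4, (8,))
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # Soft-target KD term (KL on temperature-scaled logits) plus hard-label CE,
    # computed in the same step as quantization-aware training: one-shot.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    loss = alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

int8_student = torch.quantization.convert(student.eval())  # real int8 modules
```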
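
Multi-objective tuning means the tuner weighs several metrics at once, typically an accuracy criterion plus a performance objective, when selecting a quantization configuration. Below is a minimal, framework-free sketch of that selection logic; the candidate names and numbers are invented for illustration.

```python
# Hypothetical tuning results: (config name, top-1 accuracy, latency in ms).
candidates = [
    ("fp32-baseline",    0.812, 40.0),
    ("int8-per-tensor",  0.799, 14.0),
    ("int8-per-channel", 0.808, 15.5),
    ("int8-mixed",       0.810, 22.0),
]

baseline_acc = 0.812
tolerance = 0.01  # accept up to 1% relative accuracy loss

# Objective 1 acts as a constraint (accuracy); objective 2 is minimized (latency).
feasible = [c for c in candidates if c[1] >= baseline_acc * (1 - tolerance)]
best = min(feasible, key=lambda c: c[2])
print(f"selected: {best[0]} (acc={best[1]:.3f}, latency={best[2]:.1f} ms)")
# -> selected: int8-per-channel (acc=0.808, latency=15.5 ms)
```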
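
Magnitude pruning zeroes the weights with the smallest absolute values until a target sparsity is reached. The sketch below shows the concept on a toy Keras model using the TensorFlow Model Optimization toolkit rather than Neural Compressor's YAML-driven flow; the model, data, and schedule values are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(4),
])

# Ramp sparsity from 0% to 80% between steps 0 and 100, pruning by magnitude.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=100)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(optimizer="adam",
               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

x = np.random.randn(256, 16).astype("float32")  # toy data
y = np.random.randint(0, 4, size=(256,))
pruned.fit(x, y, epochs=2, batch_size=32,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

final = tfmot.sparsity.keras.strip_pruning(pruned)  # drop the pruning wrappers
```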

Validated Configurations

  • Python 3.7 & 3.8 & 3.9
  • CentOS 8.3 & Ubuntu 18.04
  • TensorFlow 2.6.2 & 2.7
  • Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
  • PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
  • MXNet 1.6.0, 1.7.0, 1.8.0
  • ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

  Channel          Link                                              Install Command
  Source (GitHub)  https://github.com/intel/neural-compressor.git   $ git clone https://github.com/intel/neural-compressor.git
  Binary (Pip)     https://pypi.org/project/neural-compressor       $ pip install neural-compressor
  Binary (Conda)   https://anaconda.org/intel/neural-compressor     $ conda install neural-compressor -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.