🤗 Optimum is an extension of 🤗 Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.
The AI ecosystem evolves quickly, and more and more specialized hardware, along with its own optimizations, emerges every day. Optimum therefore enables users to efficiently target any of these platforms with the same ease that is inherent to 🤗 Transformers.
🤗 Optimum aims to broaden the range of hardware on which users can train and fine-tune their models.
To achieve this, we are collaborating with the following hardware manufacturers to provide the best Transformers integration:
- Graphcore IPUs - IPUs are a completely new kind of massively parallel processor to accelerate machine intelligence. More information here.
- Habana Gaudi Processor (HPU) - HPUs are designed to maximize training throughput and efficiency. More information here.
- Intel - Enabling the usage of Intel tools to accelerate end-to-end pipelines on Intel architectures. More information here.
- More to come soon!
Along with supporting dedicated AI hardware for training, Optimum also provides inference optimizations for various frameworks and platforms.
Optimum enables the use of popular compression techniques such as quantization and pruning through its support for ONNX Runtime and Intel Neural Compressor (INC).
| Features | ONNX Runtime | Intel Neural Compressor |
|---|:---:|:---:|
| Post-training Dynamic Quantization | ✔️ | ✔️ |
| Post-training Static Quantization | ✔️ | ✔️ |
| Quantization Aware Training (QAT) | Stay tuned! | ✔️ |
| Pruning | N/A | ✔️ |
🤗 Optimum can be installed using `pip` as follows:

```bash
python -m pip install optimum
```
If you'd like to use the accelerator-specific features of 🤗 Optimum, you can install the required dependencies according to the table below:

| Accelerator | Installation |
|---|---|
| ONNX Runtime | `python -m pip install optimum[onnxruntime]` |
| Intel Neural Compressor (INC) | `python -m pip install optimum[intel]` |
| Graphcore IPU | `python -m pip install optimum[graphcore]` |
| Habana Gaudi Processor (HPU) | `python -m pip install optimum[habana]` |
If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you can install the base library from source as follows:
```bash
python -m pip install git+https://github.com/huggingface/optimum.git
```
For the accelerator-specific features, you can install them by appending `#egg=optimum[accelerator_type]` to the `pip` command, e.g.

```bash
python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]
```
At its core, 🤗 Optimum uses configuration objects to define parameters for optimization on different accelerators. These objects are then used to instantiate dedicated optimizers, quantizers, and pruners.
Before applying quantization or optimization, we first need to export our model to the ONNX format.
```python
import os

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
save_directory = "tmp/onnx/"
file_name = "model.onnx"
onnx_path = os.path.join(save_directory, file_name)

# Load a model from transformers and export it to the ONNX format
model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# Save the ONNX model and tokenizer
model.save_pretrained(save_directory, file_name=file_name)
tokenizer.save_pretrained(save_directory)
```
Now let's see how we can apply dynamic quantization with ONNX Runtime:
```python
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer

# Define the quantization methodology
qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(model_checkpoint, feature="sequence-classification")

# Apply dynamic quantization on the model
quantizer.export(
    onnx_model_path=onnx_path,
    onnx_quantized_model_output_path=os.path.join(save_directory, "model-quantized.onnx"),
    quantization_config=qconfig,
)
```
In this example, we've quantized a model from the Hugging Face Hub, but it could also be a path to a local model directory. The `feature` argument in the `from_pretrained()` method corresponds to the type of task that we wish to quantize the model for. The result of applying the `export()` method is a `model-quantized.onnx` file that can be used to run inference.
Here's an example of how to load an ONNX Runtime model and generate predictions with it:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer

model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model-quantized.onnx")
tokenizer = AutoTokenizer.from_pretrained(save_directory)

cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
results = cls_pipeline("I love burritos!")
```
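Continuing from the snippet above, the pipeline returns the usual 🤗 Transformers text-classification output, i.e. a list of label/score dictionaries (the score shown in the comment is purely illustrative and will depend on the quantized model):

```python
# `results` comes from the pipeline call above
print(results)
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```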
Similarly, you can apply static quantization by simply setting `is_static` to `True` when instantiating the `QuantizationConfig` object:

```python
qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False)
```

Static quantization relies on feeding batches of data through the model to estimate the activation quantization parameters ahead of inference time. To support this, 🤗 Optimum allows you to provide a calibration dataset. The calibration dataset can be a simple `Dataset` object from the 🤗 Datasets library, or any dataset that's hosted on the Hugging Face Hub. For this example, we'll pick the `sst2` dataset that the model was originally trained on:
```python
from functools import partial

from optimum.onnxruntime.configuration import AutoCalibrationConfig

# Define the processing function to apply to each example after loading the dataset
def preprocess_fn(ex, tokenizer):
    return tokenizer(ex["sentence"])

# Create the calibration dataset
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_fn, tokenizer=quantizer.preprocessor),
    num_samples=50,
    dataset_split="train",
)

# Create the calibration configuration containing the parameters related to calibration
calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)

# Perform the calibration step: computes the activation quantization ranges
ranges = quantizer.fit(
    dataset=calibration_dataset,
    calibration_config=calibration_config,
    onnx_model_path=onnx_path,
    operators_to_quantize=qconfig.operators_to_quantize,
)

# Apply static quantization on the model
quantizer.export(
    onnx_model_path=onnx_path,
    onnx_quantized_model_output_path=os.path.join(save_directory, "model-quantized.onnx"),
    calibration_tensors_range=ranges,
    quantization_config=qconfig,
)
```
Now let's take a look at applying graph optimization techniques such as operator fusion and constant folding. As before, we load a configuration object, but this time by setting the optimization level instead of the quantization approach:
```python
from optimum.onnxruntime.configuration import OptimizationConfig

# Here the optimization level is selected to be 1, enabling basic optimizations such as redundant
# node elimination and constant folding. Higher optimization levels will result in a hardware-dependent
# optimized graph.
optimization_config = OptimizationConfig(optimization_level=1)
```
Next, we load an optimizer to apply these optimizations to our model:
```python
from optimum.onnxruntime import ORTOptimizer

optimizer = ORTOptimizer.from_pretrained(
    model_checkpoint,
    feature="sequence-classification",
)

# Export the optimized model
optimizer.export(
    onnx_model_path=onnx_path,
    onnx_optimized_model_output_path=os.path.join(save_directory, "model-optimized.onnx"),
    optimization_config=optimization_config,
)
```
And that's it - the model is now optimized and ready for inference!
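As a quick sanity check, here is a minimal sketch of running inference with the optimized model; it mirrors the quantized-model example above and reuses the `save_directory` from the export step, with only the `file_name` changed:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import pipeline, AutoTokenizer

# Load the graph-optimized ONNX model and the tokenizer saved earlier
save_directory = "tmp/onnx/"
model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model-optimized.onnx")
tokenizer = AutoTokenizer.from_pretrained(save_directory)

cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(cls_pipeline("I love burritos!"))
```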
As you can see, the process is similar in each case:
- Define the optimization / quantization strategy via an `OptimizationConfig` / `QuantizationConfig` object
- Instantiate an `ORTQuantizer` or `ORTOptimizer` class
- Apply the `export()` method
- Run inference
Besides supporting ONNX Runtime inference, 🤗 Optimum also supports ONNX Runtime training, which reduces the memory and compute needed during training. This can be achieved with the `ORTTrainer` class, which behaves similarly to the `Trainer` of 🤗 Transformers:
```diff
-from transformers import Trainer
+from optimum.onnxruntime import ORTTrainer

 # Step 1: Create your ONNX Runtime Trainer
-trainer = Trainer(
+trainer = ORTTrainer(
     model=model,
     args=training_args,
     train_dataset=train_dataset,
     eval_dataset=eval_dataset,
     compute_metrics=compute_metrics,
     tokenizer=tokenizer,
     data_collator=default_data_collator,
+    feature="sequence-classification",
 )

 # Step 2: Use ONNX Runtime for training and evaluation! 🤗
 train_result = trainer.train()
 eval_metrics = trainer.evaluate()
```
By replacing `Trainer` with `ORTTrainer`, you will be able to leverage ONNX Runtime for fine-tuning tasks.
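Since `ORTTrainer` follows the 🤗 Transformers `Trainer` API, you can persist the fine-tuned model and metrics in the same way; the following is a minimal sketch continuing from the training snippet above and assumes the standard `Trainer` saving utilities:

```python
# Continuing from the training snippet above
trainer.save_model()  # saves the fine-tuned model (and the tokenizer, since one was passed) to training_args.output_dir

trainer.log_metrics("train", train_result.metrics)
trainer.save_metrics("train", train_result.metrics)
trainer.log_metrics("eval", eval_metrics)
trainer.save_metrics("eval", eval_metrics)
```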
Check out the `examples` directory for more sophisticated usage.
Happy optimizing 🤗!