The TensorRT execution provider in the ONNX Runtime makes use of NVIDIA's TensorRT Deep Learning inference engine to accelerate ONNX models on NVIDIA's family of GPUs. Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime.
With the TensorRT execution provider, the ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration.
For build instructions, please see the BUILD page.
The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 6.0.1.5, but validated with the feature set equivalent to TensorRT 5. Some new TensorRT 6 features, such as dynamic shapes, are not available at this time.
The TensorRT execution provider needs to be registered with ONNX Runtime to enable it in the inference session.
// `so` is an onnxruntime::SessionOptions instance configured by the application.
InferenceSession session_object{so};
// Register the TensorRT execution provider before loading the model.
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::TensorrtExecutionProvider>());
status = session_object.Load(model_file_name);
The C API details are here.
When using the Python wheel from an ONNX Runtime build with the TensorRT execution provider, the provider is automatically prioritized over the default GPU or CPU execution providers. There is no need to register the execution provider separately. Python API details are here.
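As a minimal sketch of this, assuming a wheel built with the TensorRT execution provider is installed, running a model requires no provider-specific code; the model path, input shape, and dtype below are placeholders for illustration.

```python
import numpy as np
import onnxruntime as ort

# With a TensorRT-enabled build, the TensorRT execution provider is selected
# automatically; no explicit registration is required.
sess = ort.InferenceSession("model.onnx")  # placeholder model path

# Build a dummy input for the model's first input (shape/dtype are assumptions).
input_name = sess.get_inputs()[0].name
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)

outputs = sess.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```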
Please see this Notebook for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services.
For performance tuning, please see guidance on this page: ONNX Runtime Perf Tuning
When using onnxruntime_perf_test, use the flag -e tensorrt to select the TensorRT execution provider.
By default, the TensorRT execution provider builds an ICudaEngine with max batch size = 1 and max workspace size = 1 GB. These defaults can be overridden by setting the environment variables ORT_TENSORRT_MAX_BATCH_SIZE and ORT_TENSORRT_MAX_WORKSPACE_SIZE, e.g. on Linux:
export ORT_TENSORRT_MAX_BATCH_SIZE=10
export ORT_TENSORRT_MAX_WORKSPACE_SIZE=2147483648
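The same overrides can also be applied from Python by setting the variables in the process environment. This is a sketch under the assumption that the values are read when the inference session (and its TensorRT engine) is created, so they must be set before the session is constructed; the model path is a placeholder.

```python
import os
import onnxruntime as ort

# Assumption: the ORT_TENSORRT_* variables are read when the inference session
# (and its TensorRT engine) is created, so set them before the session exists.
os.environ["ORT_TENSORRT_MAX_BATCH_SIZE"] = "10"
os.environ["ORT_TENSORRT_MAX_WORKSPACE_SIZE"] = str(2 * 1024 ** 3)  # 2 GB, in bytes

sess = ort.InferenceSession("model.onnx")  # placeholder model path
```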