Ahead of Time (AOT) compiling for PyTorch JIT and FX
Torch-TensorRT is a compiler for PyTorch/TorchScript/FX, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript or FX program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.
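As a minimal sketch of that workflow in Python (assuming scripted_model is an existing TorchScript module with a fixed 1x3x224x224 input; the full C++ and Python examples below show more options):

import torch
import torch_tensorrt

# AOT step: compile the TorchScript module into a module backed by a TensorRT engine
trt_model = torch_tensorrt.compile(scripted_model,
    inputs=[torch_tensorrt.Input(shape=[1, 3, 224, 224])],
    enabled_precisions={torch.half},  # allow TensorRT to run layers in FP16
)
torch.jit.save(trt_model, "trt_model.ts")  # deploy later like any TorchScript module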
Resources:
- Documentation
- FX path Documentation
- Torch-TensorRT Explained in 2 minutes!
- Comprehensive Discussion (GTC Event)
- Pre-built Docker Container. To use this container, create an NGC account and sign in to NVIDIA's registry with an API key. Refer to this guide for instructions.
Torch-TensorRT is distributed in the ready-to-run NVIDIA NGC PyTorch Container starting with 21.11. We recommend using this prebuilt container to experiment & develop with Torch-TensorRT; it has all dependencies with the proper versions as well as example notebooks included.
We provide a Dockerfile in the docker/ directory. It expects a PyTorch NGC container as a base, but it can easily be modified to build on top of any container that provides PyTorch, CUDA, and TensorRT. The dependency libraries in the container can be found in the release notes.
Please follow these instructions to build the Docker container:
docker build --build-arg BASE=<CONTAINER VERSION e.g. 21.11> -f docker/Dockerfile -t torch_tensorrt:latest .
When building on top of a custom base container, you first must determine the version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, it is likely the pre-cxx11-abi, in which case you must modify //docker/dist-build.sh to not build the C++11 ABI version of Torch-TensorRT.
You can then build the container using the build command in the docker README.
If you would like to build outside a docker container, please follow the section Compiling Torch-TensorRT.
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
...
// Set input datatypes. Allowed options torch::{kFloat, kHalf, kChar, kInt32, kBool}
// Size of input_dtypes should match number of inputs to the network.
// If input_dtypes is not set, default precision follows traditional PyT / TRT rules
auto input = torch_tensorrt::Input(dims, torch::kHalf);
auto compile_settings = torch_tensorrt::ts::CompileSpec({input});
// FP16 execution
compile_settings.enabled_precisions = {torch::kHalf};
// Compile module
auto trt_mod = torch_tensorrt::ts::compile(ts_mod, compile_settings);
// Run like normal
auto results = trt_mod.forward({in_tensor});
// Save module for later
trt_mod.save("trt_torchscript_module.ts");
...
import torch_tensorrt
...
trt_ts_module = torch_tensorrt.compile(torch_script_module,
# If the inputs to the module are plain Tensors, specify them via the `inputs` argument:
inputs = [example_tensor, # Provide example tensor for input shape or...
torch_tensorrt.Input( # Specify input object with shape and dtype
min_shape=[1, 3, 224, 224],
opt_shape=[1, 3, 512, 512],
max_shape=[1, 3, 1024, 1024],
# For static size shape=[1, 3, 224, 224]
dtype=torch.half) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)
],
# For inputs containing tuples or lists of tensors, use the `input_signature` argument:
# Below, we have an input consisting of a Tuple of two Tensors (Tuple[Tensor, Tensor])
# input_signature = ( (torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.half),
# torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.half)), ),
enabled_precisions = {torch.half}, # Run with FP16
)
result = trt_ts_module(input_data) # run inference
torch.jit.save(trt_ts_module, "trt_torchscript_module.ts") # save the TRT embedded Torchscript
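The saved file is a standard TorchScript program with the TensorRT engine embedded, so (as a sketch) it can be reloaded for inference like any other TorchScript module; importing torch_tensorrt first makes the TensorRT runtime ops available:

import torch
import torch_tensorrt  # registers the runtime ops needed to execute the embedded TensorRT engine

trt_ts_module = torch.jit.load("trt_torchscript_module.ts")
result = trt_ts_module(input_data)  # input_data is assumed to match the compiled input spec (shape, dtype, device)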
Notes on running in lower precisions:
- Enable lower precisions with compile_spec.enabled_precisions
- The module should be left in FP32 before compilation (FP16 can support half tensor models)
- Provided input tensors' dtype should be the same as the module's dtype before compilation, regardless of enabled_precisions. This can be overridden by setting Input::dtype
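For example, a minimal sketch following these rules (MyModel and the 224x224 input shape are placeholders), keeping the module in FP32 while letting TensorRT run layers in FP16:

import torch
import torch_tensorrt

model = MyModel().eval().cuda()  # placeholder module; left in FP32, do not call .half() before compiling
trt_mod = torch_tensorrt.compile(model,
    # Input dtype matches the FP32 module; set dtype=torch.half here only to override that rule
    inputs=[torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.float)],
    enabled_precisions={torch.half},  # TensorRT may still execute layers in FP16
)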
Platform | Support |
---|---|
Linux AMD64 / GPU | Supported |
Linux aarch64 / GPU | Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being) |
Linux aarch64 / DLA | Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being) |
Windows / GPU | Unofficial Support |
Linux ppc64le / GPU | - |
NGC Containers | Included in PyTorch NGC Containers 21.11+ |
Torch-TensorRT is included in NVIDIA NGC containers (https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) starting in 21.11.
Note: Refer to the NVIDIA NGC container (https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) for PyTorch libraries on JetPack.
These are the dependencies used to verify the test cases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.
- Bazel 5.2.0
- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.1)
- CUDA 12.1
- TensorRT 10.0.1.6
Releases: https://github.com/pytorch/TensorRT/releases
pip install tensorrt torch-tensorrt
If you don't have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html
Finally, if you need to compile from source (e.g. aarch64, until bazel distributes binaries for the architecture), you can use these instructions:
export BAZEL_VERSION=<VERSION>
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
You need to start by having CUDA installed on the system. LibTorch will automatically be pulled for you by bazel. Then you have two options.
Building against tarball distributions of TensorRT (the first option below) is recommended so as to build Torch-TensorRT hermetically, and it ensures that any bugs are not caused by version issues.
Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your $LD_LIBRARY_PATH.
- You need to download the tarball distributions of TensorRT from the NVIDIA website.
- Place these files in a directory (the directories third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu] exist for this purpose).
- Compile using:
bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
If you find bugs and you compiled using this method, please disclose that you used this method in the issue (an ldd dump would be nice too).
- Install TensorRT and CUDA on the system before starting to compile.
- In WORKSPACE comment out:
# Downloaded distributions to use with --distdir
http_archive(
name = "tensorrt",
urls = ["<URL>",],
build_file = "@//third_party/tensorrt/archive:BUILD",
sha256 = "<TAR SHA256>",
strip_prefix = "TensorRT-<VERSION>"
)
and uncomment
# Locally installed dependencies
new_local_repository(
name = "tensorrt",
path = "/usr/",
build_file = "@//third_party/tensorrt/local:BUILD"
)
- Compile using:
bazel build //:libtorchtrt --compilation_mode opt
If you plan to try the FX path (Python only) and would like to avoid the bazel build, please follow the steps below.
cd py && python3 setup.py install --fx-only
To produce a debug build, use:
bazel build //:libtorchtrt --compilation_mode=dbg
We performed end-to-end testing on the Jetson platform using JetPack SDK 4.6.
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
Note: Please refer to the installation instructions for prerequisites.
A tarball with the include files and library can then be found in bazel-bin
Make sure to add LibTorch to your LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib
bazel run //cpp/bin/torchtrtc -- $(realpath <PATH TO GRAPH>) out.ts <input-size>
To compile the Python package for your local machine, just run python3 setup.py install in the //py directory.
To build wheel files for different Python versions, first build the Dockerfile in //py, then run the following command:
docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh
Python compilation expects you to use the tarball-based compilation strategy from above.
Torch-TensorRT supports testing in Python using nox.
To install nox using python-pip:
python3 -m pip install --upgrade nox
To list supported nox sessions:
nox --session -l
Environment variables supported by nox:
- PYT_PATH - To use a different PYTHONPATH than the system installed Python packages
- TOP_DIR - To set the root directory of the noxfile
- USE_CXX11 - To use cxx11_abi (Defaults to 0)
- USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0)
Usage example
nox --session l0_api_tests
Supported Python versions:
["3.7", "3.8", "3.9", "3.10"]
Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred because then we do not need to maintain a large library of op converters. Also, take a look at the various op support trackers in the issues for information on the support status of various operators.
The Node Converter Registry is not exposed in the top level API but in the internal headers shipped with the tarball.
You can register a converter for your op using the NodeConverterRegistry inside your application.
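The NodeConverterRegistry itself is a C++ API in the internal headers, but as a conceptual illustration of the graph-rewrite approach, here is a minimal sketch using torch.fx that replaces a hypothetical "unsupported" op (torch.addcmul, chosen purely for illustration) with an equivalent subgraph of simpler ops:

import torch
import torch.fx as fx

class MyModule(torch.nn.Module):
    def forward(self, x):
        # Pretend torch.addcmul has no converter and must be rewritten
        return torch.addcmul(x, x, x, value=2.0)

def rewrite_addcmul(gm: fx.GraphModule) -> fx.GraphModule:
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target == torch.addcmul:
            inp, t1, t2 = node.args
            value = node.kwargs.get("value", 1.0)
            with gm.graph.inserting_after(node):
                # addcmul(inp, t1, t2, value) == inp + value * t1 * t2
                prod = gm.graph.call_function(torch.mul, (t1, t2))
                scaled = gm.graph.call_function(torch.mul, (prod, value))
                replacement = gm.graph.call_function(torch.add, (inp, scaled))
            node.replace_all_uses_with(replacement)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm

gm = rewrite_addcmul(fx.symbolic_trace(MyModule()))
x = torch.randn(4)
assert torch.allclose(gm(x), torch.addcmul(x, x, x, value=2.0))

Torch-TensorRT's own lowering passes follow the same idea, but operate on the TorchScript IR in C++; see the core/lowering directory for real examples.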
Component | Description |
---|---|
core | Main JIT ingest, lowering, conversion and runtime implementations |
cpp | C++ API and CLI source |
examples | Example applications to show different features of Torch-TensorRT |
py | Python API for Torch-TensorRT |
tests | Unit tests for Torch-TensorRT |
Take a look at the CONTRIBUTING.md
The Torch-TensorRT license can be found in the LICENSE file. It is licensed with a BSD-style license.