Description

Hi all. I am currently trying to convert Faster R-CNN models from the TF2 Model Zoo from SavedModel to a TensorRT engine. I initially hit the Unsupported ONNX data type: UINT8 (2) error, but solved that using the snippet from ELinux: Unsupported ONNX data type: UINT8(2) Fix.
Now, when I try to convert the fixed ONNX model to a TensorRT engine using trtexec, I get the following errors:
[E] [TRT] 3: getPluginCreator could not find plugin: Round version: 1
[E] [TRT] ModelImporter.cpp:720: While parsing node number 12 [If -> "StatefulPartitionedCall/Preprocessor/ResizeToRange/cond:0"]:
[E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[E] [TRT] ModelImporter.cpp:722: input: "StatefulPartitionedCall/Preprocessor/ResizeToRange/Less:0"
...
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Environment

TensorRT Version: 8.0.3
ONNX-TensorRT Version / Branch:
GPU Type: RTX 3090
Nvidia Driver Version: 470
CUDA Version: 11.3
CUDNN Version: 8.2.4
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow + TF2ONNX Version (if applicable): TF 2.6.0 + tf2onnx 1.9.3/1190aa
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:21.10-tf2-py3
TensorRT does not support the Round operator yet. When TRT encounters an unsupported operator, it attempts to load a user-defined plugin with that name, which is why the following error message appears:

[E] [TRT] 3: getPluginCreator could not find plugin: Round version: 1

The operator will be supported in a future release.
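If you work around this in the meantime by rewriting the graph to express Round with supported ops, note that ONNX Round specifies round-half-to-even (banker's rounding), which differs from the naive floor(x + 0.5). A quick NumPy check (illustrative only, not part of the original report):

```python
import numpy as np

x = np.array([0.5, 1.5, 2.5, 3.5])

# ONNX Round follows round-half-to-even, which NumPy's np.round also uses.
half_to_even = np.round(x)    # [0. 2. 2. 4.]

# A naive floor(x + 0.5) rounds every half upward instead.
half_up = np.floor(x + 0.5)   # [1. 2. 3. 4.]

print(half_to_even, half_up)
```

So a floor-based substitute changes results exactly on half-integer values; whether that matters depends on the model.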
Relevant Files
Steps To Reproduce
From SavedModel to ONNX
python3 -m tf2onnx.convert --saved-model faster_rcnn_resnet50_v1_800x1333_coco17_gpu-8/saved_model --output faster_rcnn_resnet50_v1_800x133_trt.onnx --opset 11 --signature_def serving_default --tag serve --target tensorrt
From ONNX to TensorRT
trtexec --onnx=/root/tf2_models/faster_rcnn_resnet50_v1_800x1333_trt/faster_rcnn_resnet50_v1_800x133_trt.onnx --fp16 --buildOnly --saveEngine=/root/tf2_models/faster_rcnn_resnet50_v1_800x1333_trt/faster_rcnn_resnet50_v1_800x133_trt.trt