Context
I work in the broadcast sector, where frames processed by our TensorRT (TRT) engines can arrive in a wide range of pixel formats (encoded or decoded, different bit depths, color spaces, etc.).
We developed a custom codec plugin that converts all of those pixel formats to and from fp16/fp32, enabling TRT to process these frames. The plugin accepts a format input that specifies the pixel format of the incoming frame, allowing it to select the appropriate codec for the conversion. This codec layer is inserted at the beginning and at the end of our TRT engines.
Implementing this plugin with uint8 inputs and outputs simplifies the design and results in a cleaner implementation.
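For illustration, a minimal sketch of how the decode-side plugin might declare its I/O types with the IPluginV2DynamicExt interface; the class name and layout choices are hypothetical and only show the intent (uint8 frame bytes in, fp16 out; the encode plugin at the end of the engine mirrors this):

```cpp
#include <NvInfer.h>

using namespace nvinfer1;

// Hypothetical decode plugin: raw uint8 frame bytes in, fp16 tensor out.
// Only the type/format declarations are shown; the remaining IPluginV2DynamicExt
// overrides (enqueue, clone, serialization, ...) are omitted for brevity.
class CodecDecodePlugin : public IPluginV2DynamicExt
{
public:
    bool supportsFormatCombination(int32_t pos, PluginTensorDesc const* inOut,
                                   int32_t nbInputs, int32_t nbOutputs) noexcept override
    {
        if (pos == 0) // input 0: packed frame bytes in the declared pixel format
        {
            return inOut[0].type == DataType::kUINT8 && inOut[0].format == TensorFormat::kLINEAR;
        }
        // output 0: converted planes handed to the rest of the network
        return inOut[pos].type == DataType::kHALF && inOut[pos].format == TensorFormat::kLINEAR;
    }

    DataType getOutputDataType(int32_t index, DataType const* inputTypes,
                               int32_t nbInputs) const noexcept override
    {
        // The mirrored encode plugin would return kUINT8 here, which is exactly
        // what runs into the builder error described below.
        return DataType::kHALF;
    }
    // ...
};
```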
Problem
During the network-building stage, the following error is encountered:
Error[4]: IBuilder::buildSerializedNetwork: Error Code 4: API Usage Error (Network-level output tensor output has datatype UInt8 but is not produced by an IIdentityLayer or ICastLayer.)
Request
Provide a mechanism to allow my custom plugin to produce uint8 network outputs. There is no need for a casting layer here.
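To my understanding, the only path the builder accepts today is to let a trailing identity (or cast) layer perform the conversion to uint8, which is exactly the extra layer we would like to avoid. A rough sketch, assuming network (INetworkDefinition*) and pluginOut (the plugin's fp16 output ITensor*) already exist:

```cpp
// What the builder accepts today (sketch): the plugin emits fp16 and a trailing
// identity layer performs the final conversion to uint8 at the network boundary.
nvinfer1::IIdentityLayer* toU8 = network->addIdentity(*pluginOut);
toU8->setOutputType(0, nvinfer1::DataType::kUINT8);
toU8->getOutput(0)->setName("output");
network->markOutput(*toU8->getOutput(0));

// What this request asks for instead: mark the plugin's own uint8 output directly,
// with no extra casting/identity layer.
// network->markOutput(*pluginUint8Out);
```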
Dirty workaround
Our current workaround involves bypassing TRT's restrictions by misrepresenting the uint8 byte array as an fp16 array with half the number of elements. While this approach allows the engine to build, it is not ideal:
We serve the TRT engine via Triton using the TensorRT backend.
The TRT datatype determines the datatype specified in the config.pbtxt.
This datatype propagates to the Triton client, leading to potential discrepancies or confusion.
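Concretely, the misrepresentation happens in the plugin's type and shape declarations, roughly like this (hedged sketch; CodecEncodePlugin is a hypothetical name and its class declaration is omitted):

```cpp
// Hedged sketch of the workaround: the uint8 byte buffer is reported to TensorRT
// as an fp16 tensor holding half as many elements, so the builder accepts it.
nvinfer1::DataType CodecEncodePlugin::getOutputDataType(
    int32_t index, nvinfer1::DataType const* inputTypes, int32_t nbInputs) const noexcept
{
    return nvinfer1::DataType::kHALF; // really raw uint8 bytes, misreported as fp16
}

nvinfer1::DimsExprs CodecEncodePlugin::getOutputDimensions(
    int32_t outputIndex, nvinfer1::DimsExprs const* inputs, int32_t nbInputs,
    nvinfer1::IExprBuilder& exprBuilder) noexcept
{
    nvinfer1::DimsExprs out = inputs[0];
    int32_t const last = out.nbDims - 1;
    // Two uint8 bytes are passed off as one fp16 element along the innermost dimension.
    out.d[last] = exprBuilder.operation(nvinfer1::DimensionOperation::kFLOOR_DIV,
                                        *out.d[last], *exprBuilder.constant(2));
    return out;
}
```

The misreported type then propagates into the Triton config.pbtxt (TYPE_FP16 where TYPE_UINT8 is actually meant), which is the discrepancy described above.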
A proper solution would remove the need for this workaround and ensure clean, consistent datatype handling.
I would like to extend the request to more TensorFormat and DataType combinations.
Also, please fix the confusing documentation.
Let's focus on kHWC + kHALF and kHWC + kFLOAT.
According to the C++ API and the Python API, both kHWC + kHALF and kHWC + kFLOAT are valid and supported.
According to section 6.10 of the developer guide, kHWC + kHALF is not supported but kHWC + kFLOAT is.
According to section 10.7.1 of the developer guide, neither kHWC + kHALF nor kHWC + kFLOAT is supported.
When developing a custom plugin, kHWC + kFLOAT is supported but kHWC + kHALF is not, as sketched below (IBuilder::buildSerializedNetwork: Error Code 9: Internal Error (/MyPlugin: could not find any supported formats consistent with input/output data types)).
Please allow a custom plugin to produce any combination of TensorFormat and DataType that is supported by the internals of TensorRT.
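For reference, this is roughly the declaration that triggers the Error Code 9 above (sketch; MyPlugin matches the name in the error message, everything else is hypothetical):

```cpp
// Hedged sketch: declaring the kHWC format for both fp32 and fp16 in a plugin.
// With kFLOAT the engine builds; with kHALF the builder rejects it with
// "could not find any supported formats consistent with input/output data types".
bool MyPlugin::supportsFormatCombination(int32_t pos, nvinfer1::PluginTensorDesc const* inOut,
                                         int32_t nbInputs, int32_t nbOutputs) noexcept
{
    auto const& desc = inOut[pos];
    return desc.format == nvinfer1::TensorFormat::kHWC
        && (desc.type == nvinfer1::DataType::kFLOAT    // accepted by the builder
            || desc.type == nvinfer1::DataType::kHALF); // rejected today
}
```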