Description

We recently converted a model similar to NAFNet into a TensorRT model, but we encountered two issues:

1. The FP16 engine's inference results are incorrect, while the FP32 engine's results are correct.
2. The FP32 engine occupies a large amount of storage space, more than 80 times the size of the original ONNX model.

The first issue is more urgent, and we hope you can assist us in resolving it. Thank you!
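To quantify "incorrect", the TensorRT outputs can be compared against an ONNX Runtime baseline on the same test image. Below is a minimal sketch, assuming the preprocessed input and the outputs dumped from each engine are saved as .npy files; all file names here are hypothetical and the real preprocessing follows the attached ONNX inference code.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file names; adjust to the actual model and saved engine outputs.
x = np.load("test_image_preprocessed.npy").astype(np.float32)

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x})[0]    # reference output

trt_fp32_out = np.load("trt_fp32_output.npy")    # dumped from the FP32 engine
trt_fp16_out = np.load("trt_fp16_output.npy")    # dumped from the FP16 engine

for name, out in (("FP32", trt_fp32_out), ("FP16", trt_fp16_out)):
    diff = np.abs(out.astype(np.float32) - onnx_out)
    print(f"TensorRT {name} vs ONNX Runtime: "
          f"max abs diff {diff.max():.6f}, mean abs diff {diff.mean():.6f}")
```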
Environment
TensorRT Version: 8.2.2.1
NVIDIA GPU: Tesla T4
NVIDIA Driver Version: 450.36.06
CUDA Version: 11.0
CUDNN Version: 8.0.0
Relevant Files
ONNX model
Test image
Correct result image
Incorrect result image
ONNX inference code
Steps To Reproduce
1. Run the ONNX inference code; it produces the correct result image.
2. Convert the ONNX model into TensorRT FP32 and FP16 engines, then run inference with both engines and compare the results (a conversion sketch follows these steps).
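The issue does not state how the engines were built, so the following is only a minimal sketch of the conversion using the TensorRT 8.2 Python API; the file names and workspace size are assumptions, and `trtexec --onnx=... --saveEngine=... [--fp16]` would be an equivalent route.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path, use_fp16=False):
    """Parse an ONNX model and serialize a TensorRT engine (FP32 by default)."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30          # 1 GiB workspace (assumed value)
    if use_fp16:
        config.set_flag(trt.BuilderFlag.FP16)    # allow FP16 kernels

    serialized_engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized_engine)

# Hypothetical file names.
build_engine("model.onnx", "model_fp32.engine", use_fp16=False)
build_engine("model.onnx", "model_fp16.engine", use_fp16=True)
```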