Different versions of TensorRT get different model inference results on GroundingDino model #4209
Comments
Can you please try TRT 10.5? There was a known accuracy bug that was fixed in 10.5. Thanks!
Hi @yuanyao-nv, I tried TRT 10.5, but the model still produces no output boxes. How can this problem be solved?
Hi @yuanyao-nv, I tried TRT 10.7, but the model outputs many wrong boxes. How can this problem be solved?
@demuxin Can you please provide the exact command or script you used to run this ONNX model? |
I haven't checked the model with onnxruntime, but I tested the ONNX model using C++ TensorRT 8.6 and the output is correct. When I test the same model with other versions of C++ TensorRT, the output is wrong. So the ONNX model itself shouldn't be the problem.
I found that there are some differences in the warning logs when building the TRT engine:
TensorRT 8.6:
TensorRT 10.4:
Would these warning logs be useful?
Our code is fairly heavily encapsulated, so extracting a code snippet is not very convenient. Can you run the ONNX model using TensorRT 10.4? I can offer you input data files: input86_0.txt
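In case it helps reproduce, here is a minimal sketch of how a plain-text float dump like input86_0.txt could be read into a flat buffer before copying it to a TensorRT input binding. The helper name and the assumption that the file is whitespace-separated floats are mine, not from the thread:

```cpp
// Hypothetical helper (assumes the dump is whitespace-separated floats):
// load the file into a flat std::vector<float> suitable for cudaMemcpy
// into a TensorRT input binding.
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

std::vector<float> loadInputTxt(const std::string& path) {
    std::ifstream in(path);
    return std::vector<float>(std::istream_iterator<float>(in),
                              std::istream_iterator<float>());
}
```

The resulting vector's size should match the volume of the engine's input tensor; a mismatch would indicate the file format assumption is wrong.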
Description
I run inference on the GroundingDino model using C++ TensorRT.
For the same model and the same image, TensorRT 8.6 produces the correct detection boxes.
But after updating to TensorRT 10.4, I get no detection boxes at all.
The wrong results may be caused by TensorRT 10.4. How can I analyze this issue?
By the way, I've tried multiple versions other than 8.6 (e.g. 9.3, 10.0, 10.1); none of them produce detection boxes.
Additional information below:
I load the same ONNX model via C++ TensorRT and print the information for each layer.
TensorRT 8.6 loaded a model with 21060 layers, while TensorRT 10.4 loaded a model with 37921 layers. Why is the difference in the number of layers so large?
rt104_layers.txt
rt86_layers.txt
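To quantify the gap between the two attached dumps, a quick sketch that counts layers in each file. It assumes the dumps contain one layer description per line, which is an assumption about the attachment format, not something stated in the thread:

```cpp
// Count non-empty lines in a layer dump (assumes one layer per line,
// e.g. rt86_layers.txt vs rt104_layers.txt from the attachments).
#include <fstream>
#include <string>

int countLayers(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    int n = 0;
    while (std::getline(in, line)) {
        if (!line.empty()) ++n;
    }
    return n;
}
```

Comparing the per-layer names side by side (not just the counts) would show whether newer TensorRT versions are splitting fused layers differently, which could help localize where the outputs diverge.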
Environment
TensorRT Version: 8.6.1.6 / 10.4.0.26
NVIDIA GPU: GeForce RTX 3090
NVIDIA Driver Version: 535.183.06
CUDA Version: 12.2
Relevant Files
Model link: https://drive.google.com/file/d/1VRHKT7cswtDVXNUUmebbPmBSAOyd-fJN/view?usp=drive_link