Just one element of a batch is correct in TensorRT 8.6.1.6 #3689
Comments
Thanks!
Yes, 0th index. Marked as Needs pip
I also encountered the same error! My TRT engine works well in TensorRT 7.1/TensorRT 8.5, but not in TensorRT 8.6. I also use dynamic shape inputs and multiple contexts in different threads. When batch = 1, the result is correct. When batch > 1, all the results are wrong.
@zerollzeng |
The diff looks good (<1e-5) to me. The reason it fails is that Polygraphy uses a strict tolerance for the output diff (rel=1e-05, abs=1e-05).
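To make the tolerance point concrete, here is a minimal NumPy sketch of the kind of element-wise check Polygraphy's comparison performs (the helper name `outputs_match` is mine, not Polygraphy's API): a diff just under 1e-5 passes the strict default tolerance, while a larger diff fails even though it may be acceptable in practice.

```python
import numpy as np

def outputs_match(result, reference, rtol=1e-5, abstol=1e-5):
    """Element-wise tolerance check in the style of Polygraphy's default:
    passes when |result - reference| <= abstol + rtol * |reference|."""
    return np.allclose(result, reference, rtol=rtol, atol=abstol)

reference = np.array([1.0, 2.0, 3.0], dtype=np.float32)
# A diff just under 1e-5 passes the strict default tolerance...
print(outputs_match(reference + 9e-6, reference))   # True
# ...while a larger diff fails it, even if it is fine for the application.
print(outputs_match(reference + 1e-3, reference))   # False
```

Loosening `rtol`/`abstol` to values appropriate for the model is usually the right fix when the diffs are this small.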
So, do you have any idea why batching doesn't work? Could randomness in the network cause such an error? Our nets contain random operations, which are moved to an input in this
This problem occurs during inference of two different networks. Attaching the ONNX file of one of them and the command we used for conversion.
@zerollzeng
I did a quick check with this model; it passed with Polygraphy.
Since the engines work on TRT 7, I guess some API usage error may lead to this; maybe check the TRT 8 release notes?
Closing since there has been no activity for more than 3 weeks. Please reopen if you still have questions. Thanks all!
Description
Hello!
I have a pipeline that builds a TRT engine from a Torch checkpoint, and it works fine with CUDA 11.4 and TensorRT-7.2.3.4-1.cuda11.1. When I upgraded the GPU libraries (and rebuilt the TRT engine), I hit a strange error during inference with the TRT engine: when batch size > 1, only one element of the batch is correct (AFAIK the first one). The versions used are listed below.
Converting the same ONNX file that was used with TensorRT 7 (i.e. skipping the torch->onnx conversion step) also didn't help.
Environment
TensorRT Version: 8.6.1.6-1.cuda12.0
NVIDIA GPU: Tesla V100S
NVIDIA Driver Version: 525.147.05
CUDA Version: 12.0
CUDNN Version: 8.9.7
Operating System: rhel8
Python Version: 3.8
PyTorch Version: 2
Only the versions of the tools/libs were changed; the conversion code (in Python) and the inference code (in C++) are the same for both TensorRT 7 and TensorRT 8.
Could you help with this, please?
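When debugging a symptom like this, it helps to check exactly which batch elements agree with a reference run rather than comparing the whole output tensor at once. Below is a small NumPy sketch (the helper name `matching_batch_elements` and the toy data are mine, for illustration): if only index 0 shows up, the engine is most likely processing just the first element of the batched input, which with dynamic shapes often points at the batch dimension not being propagated (e.g. the runtime input shape not being set for the actual batch size).

```python
import numpy as np

def matching_batch_elements(trt_out, ref_out, rtol=1e-3, atol=1e-3):
    """Return the indices of batch elements whose TRT output matches the
    reference output within the given tolerances."""
    return [i for i in range(trt_out.shape[0])
            if np.allclose(trt_out[i], ref_out[i], rtol=rtol, atol=atol)]

# Toy tensors reproducing the reported symptom: batch of 4, only
# element 0 of the TRT output equals the reference.
ref = (np.arange(40, dtype=np.float32) + 1.0).reshape(4, 10)
trt = np.zeros_like(ref)
trt[0] = ref[0]
print(matching_batch_elements(trt, ref))  # -> [0]
```

Running such a per-element comparison on both the TRT 7 and TRT 8 engines with identical inputs would quickly confirm whether the regression is really "first element only" across all batch sizes.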