Issues: NVIDIA/TensorRT
AttributeError: 'tensorrt_bindings.tensorrt.ICudaEngine' object has no attribute 'max_batch_size' (see the first sketch after this list)
#4293 · opened Dec 20, 2024 by kratos231245
[Question] What is the recommended API for the deprecated setDynamicRange? (see the second sketch after this list)
#4292 · opened Dec 19, 2024 by Benjamin-Tan
Internal Error (Could not find any implementation for node {...Tensordot/Reshape}) failure of TensorRT 8.6 when generating an INT8 engine on GPU RTX 3080
Labels: Engine Build, quantization, triaged
#4291 · opened Dec 19, 2024 by yjiangling
Long Inference Time on First Run After Changing Input Shape in Dynamic Shape TensorRT Engine (see the third sketch after this list)
Labels: Demo: Diffusion, triaged
#4289 · opened Dec 19, 2024 by renne444
Error Code 2: Internal Error (Assertion !mValueMapUndo failed.) when converting an ONNX model with trtexec in TensorRT 10.5/10.7
Labels: Engine Build, internal-bug-tracked, triaged
#4288 · opened Dec 18, 2024 by TigerSong
Using trtexec with builderOptimizationLevel=5 causes a segmentation fault (core dump), but builderOptimizationLevel=3 works correctly
Labels: Engine Build, triaged
#4286 · opened Dec 17, 2024 by peanutPod
Can I implement block quantization through tensorflow-quantization?
#4283 · opened Dec 15, 2024 by lqq-feel
[Feature request] Allow uint8 output without a preceding ICastLayer
#4282 · opened Dec 12, 2024 by QMassoz
TensorRT 10.5.0: CPU memory leak when using nvinfer1::createInferBuilder on RTX 4060
#4281 · opened Dec 12, 2024 by Iridium771110
Conversion to TRT failure of TensorRT 8.6.1.6 when converting a CO-DETR model on GPU RTX 4090
Labels: Engine Build, triaged
#4280 · opened Dec 12, 2024 by edwardnguyen1705
Error Code 2: Internal Error (Assertion !mValueMapUndo failed.) failure of TensorRT 10.5 when running a speechbrain language detection model on GPU NVIDIA GeForce RTX 3090
Labels: Engine Build, internal-bug-tracked, triaged
#4277 · opened Dec 10, 2024 by msublee
INT8 Quantization of a dinov2 TensorRT Model is Not Faster than FP16
Labels: quantization, triaged
#4273 · opened Dec 6, 2024 by mr-lz
How to fuse a QuantizeLinear node with my custom op when converting ONNX to a TensorRT engine
Labels: ONNX, triaged
#4270 · opened Dec 5, 2024 by AnnaTrainingG
Code inside "quickstart/common" is outdated
Labels: triaged
#4265 · opened Nov 30, 2024 by hamrah-cluster
How to export a 4-bit pytorch_quantization model to a .engine model?
Labels: triaged
#4262 · opened Nov 26, 2024 by StarryAzure
Self-compiled EfficientNMS_TRT plugin does not work
Labels: triaged
#4261 · opened Nov 26, 2024 by pango99
error in ms_deformable_im2col_cuda: invalid configuration argument
Labels: triaged
#4260 · opened Nov 25, 2024 by nainaigetuide
Can the same TRT engine be used on different graphics cards?
Labels: triaged
#4259 · opened Nov 23, 2024 by wahaha
Cuda Runtime (out of memory) failure of TensorRT 10.3.0 when running trtexec on GPU RTX 4060/Jetson/etc.
#4258 · opened Nov 22, 2024 by zargooshifar
[ONNXParser] TensorRT Fails to Load ONNX Checkpoints with Separated Weight and Bias Files
#4257 · opened Nov 22, 2024 by theanh-ktmt
Polygraphy: how to compare precision layer by layer with TensorRT when my ONNX model has a custom operator (and a corresponding TensorRT plugin)?
Labels: Plugins, Tools: Polygraphy, triaged
#4256 · opened Nov 22, 2024 by MyraYu2022
Error Code 10: Internal Error (Could not find any implementation for node) failure of TensorRT 8.5 when running on GPU Jetson Xavier NX
Labels: question, triaged
#4255 · opened Nov 20, 2024 by fettahyildizz
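
First sketch, for #4293: TensorRT 10 dropped the implicit-batch API, so ICudaEngine no longer exposes max_batch_size; the batch size is simply the leading dimension of each explicitly shaped I/O tensor. Below is a minimal Python sketch of inspecting an engine's I/O tensors instead, assuming the tensorrt 10.x bindings and a hypothetical engine file named model.engine:

```python
import tensorrt as trt

ENGINE_PATH = "model.engine"  # hypothetical file name, used only for illustration

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# TensorRT 10 engines are always explicit-shape: the batch size is just the
# leading dimension of each I/O tensor (-1 for dynamic-shape engines), so
# there is no separate max_batch_size attribute to query.
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)    # trt.TensorIOMode.INPUT or .OUTPUT
    shape = engine.get_tensor_shape(name)  # -1 marks dynamic dimensions
    print(f"{name}: {mode}, shape={tuple(shape)}")
```

For dynamic-shape engines, the largest supported batch comes from the optimization profile rather than a single attribute, e.g. via engine.get_tensor_profile_shape(name, profile_index).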
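Second sketch, for #4292: the deprecated setDynamicRange path (implicit INT8 with per-tensor ranges) is superseded by explicit quantization, i.e. Q/DQ nodes, usually produced by exporting a quantized ONNX model or by adding IQuantizeLayer / IDequantizeLayer through the network API. A minimal Python sketch of the network-API route follows; the input name, input shape, and the 0.1 scale are placeholder values:

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit-shape network

# Placeholder input; name and shape are illustrative only.
inp = network.add_input("inp", trt.float32, (1, 3, 224, 224))

# A per-tensor scale constant replaces the old set_dynamic_range(min, max):
# for symmetric INT8, scale is roughly max(|min|, |max|) / 127.
scale = network.add_constant(
    (1,), trt.Weights(np.array([0.1], dtype=np.float32))
)

# Explicit quantization: quantize the tensor to INT8, then dequantize back.
q = network.add_quantize(inp, scale.get_output(0))
dq = network.add_dequantize(q.get_output(0), scale.get_output(0))

# Downstream layers consume dq.get_output(0); TensorRT fuses the Q/DQ pair
# and runs the surrounded region in INT8 using the given scale.
```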
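Third sketch, for #4289: with dynamic shapes, the first execution after a shape change pays one-time shape-dependent costs (kernel/tactic selection, scratch reallocation), so a common mitigation is to warm the execution context up on every shape you expect to serve. This sketch assumes the cuda-python (cudart) bindings, a single FP32 input and a single FP32 output, and a list of concrete shapes supplied by the caller:

```python
import numpy as np
import tensorrt as trt
from cuda import cudart  # cuda-python runtime bindings (an assumption of this sketch)

def warm_up(engine: trt.ICudaEngine, shapes) -> None:
    """Run one inference per expected input shape so the shape-dependent work
    (tactic selection, scratch reallocation) happens before real traffic."""
    context = engine.create_execution_context()
    _, stream = cudart.cudaStreamCreate()

    # Assumes one dynamic input and one output, both FP32 (4 bytes per element).
    inp = engine.get_tensor_name(0)
    out = engine.get_tensor_name(1)

    for shape in shapes:
        context.set_input_shape(inp, shape)
        out_shape = tuple(context.get_tensor_shape(out))  # resolved for this input shape
        _, d_in = cudart.cudaMalloc(int(np.prod(shape)) * 4)
        _, d_out = cudart.cudaMalloc(int(np.prod(out_shape)) * 4)
        context.set_tensor_address(inp, d_in)
        context.set_tensor_address(out, d_out)
        context.execute_async_v3(stream)
        cudart.cudaStreamSynchronize(stream)
        cudart.cudaFree(d_in)
        cudart.cudaFree(d_out)

    cudart.cudaStreamDestroy(stream)
```

Using one optimization profile per expected shape range is another way to keep the steady-state path fast once the serving shapes are known.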