Actions: NVIDIA/TensorRT

Blossom-CI

5,074 workflow runs
Different numbers of outputs lead to inconsistent run results
Blossom-CI #6564: Issue comment #4284 (comment) created by lix19937
December 16, 2024 05:42 4s
How to fuse a QuantizeLinear node with my custom op when converting ONNX to a TRT engine
Blossom-CI #6563: Issue comment #4270 (comment) created by lix19937
December 16, 2024 05:33 5s
How to export a 4-bit pytorch_quantization model to a .engine model?
Blossom-CI #6562: Issue comment #4262 (comment) created by lix19937
December 16, 2024 05:25 4s
Can I implement block quantization through tensorflow-quantization?
Blossom-CI #6561: Issue comment #4283 (comment) created by lix19937
December 16, 2024 05:20 5s
INT8EntropyCalibrator2 implicit quantization superseded by explicit quantization
Blossom-CI #6557: Issue comment #4095 (comment) created by moraxu
December 13, 2024 19:52 5s
Polygraphy GPU memory leak when processing a large enough number of images
Blossom-CI #6556: Issue comment #3791 (comment) created by ludekcizinsky
December 13, 2024 12:00 5s
Incompatible types Int64 and Int32
Blossom-CI #6555: Issue comment #4268 (comment) created by antithing
December 13, 2024 07:18 4s
Incompatible types Int64 and Int32
Blossom-CI #6554: Issue comment #4268 (comment) created by LeoZDong
December 13, 2024 00:14 5s
[Feature request] allow uint8 output without an ICastLayer before
Blossom-CI #6553: Issue comment #4282 (comment) created by QMassoz
December 12, 2024 14:47 5s
[Feature request] allow uint8 output without an ICastLayer before
Blossom-CI #6552: Issue comment #4278 (comment) created by QMassoz
December 12, 2024 14:43 4s
Blossom-CI
Blossom-CI #6551: created by QMassoz
December 12, 2024 14:43 5s
TopK 3840 limitation and future plans for this operator
Blossom-CI #6549: Issue comment #4244 (comment) created by amadeuszsz
December 12, 2024 11:26 5s
Polygraphy GPU memory leak when processing a large enough number of images
Blossom-CI #6548: Issue comment #3791 (comment) created by michaeldeyzel
December 11, 2024 09:17 6s
How to export a 4-bit pytorch_quantization model to a .engine model?
Blossom-CI #6547: Issue comment #4262 (comment) created by StarryAzure
December 11, 2024 07:20 6s
Converting to TensorRT barely increases performance
Blossom-CI #6546: Issue comment #3646 (comment) created by watertianyi
December 11, 2024 07:14 4s
TensorRT 8.6.1.6 inference costs too much time
Blossom-CI #6545: Issue comment #3993 (comment) created by watertianyi
December 11, 2024 06:16 5s
TensorRT 8.6.1.6 inference costs too much time
Blossom-CI #6544: Issue comment #3993 (comment) created by xxHn-pro
December 11, 2024 04:00 4s
INT8 Quantization of dinov2 TensorRT Model is Not Faster than FP16 Quantization
Blossom-CI #6543: Issue comment #4273 (comment) created by lix19937
December 11, 2024 00:44 5s
Is there a plan to support more recent PTQ methods for INT8 ViT?
Blossom-CI #6542: Issue comment #4276 (comment) created by lix19937
December 11, 2024 00:41 5s
Disable/Enable graph level optimizations
Blossom-CI #6541: Issue comment #4275 (comment) created by lix19937
December 11, 2024 00:40 4s