Engine build failure with TensorRT 10.0 when running trtexec with fp16 on an NVIDIA RTX 3060 GPU #3800
Labels
internal-bug-tracked
Tracked internally, will be fixed in a future release.
triaged
Issue has been triaged by maintainers
Description
I tried building a TRT engine using trtexec with fp16 precision, but the process fails without reporting any error: the TRT engine file is never generated. With fp32, the TRT engine is generated correctly.
Environment
TensorRT Version: 10.0.0.6EA
NVIDIA GPU: NVIDIA GeForce RTX 3060
NVIDIA Driver Version: 551.86
CUDA Version: 11.8
CUDNN Version: 8.9.7
Operating System: Windows 10
Relevant Files
Model link: https://drive.google.com/file/d/1JjA_9Ea4oTf-jnn41pYOMolpWOGYoWay/view?usp=drive_link
Build log: build.log
Steps To Reproduce
Commands or scripts:
trtexec --onnx=best_model.onnx --saveEngine=best_model_fp16.plan --fp16 --verbose
I also tried using the build_engine.py script from the Python samples directory of TensorRT 10, but it produces exactly the same result as trtexec: no errors or warnings, the process just stops abruptly.
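For reference, the build step can be reduced to a minimal sketch using the TensorRT Python API (this is an illustrative reconstruction, not the exact build_engine.py sample; the file names best_model.onnx and best_model_fp16.plan are taken from the command above):

```python
# Minimal fp16 engine build via the TensorRT Python API (TensorRT 10 style).
# Requires the `tensorrt` package and a CUDA-capable GPU.
import tensorrt as trt

def build_fp16_engine(onnx_path: str, plan_path: str) -> bool:
    logger = trt.Logger(trt.Logger.VERBOSE)  # match trtexec --verbose
    builder = trt.Builder(logger)
    network = builder.create_network(0)      # TRT 10: networks are explicit-batch
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return False
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)    # equivalent of trtexec --fp16
    serialized = builder.build_serialized_network(network, config)
    if serialized is None:                   # build failed, as reported above
        return False
    with open(plan_path, "wb") as f:
        f.write(bytes(serialized))
    return True

if __name__ == "__main__":
    ok = build_fp16_engine("best_model.onnx", "best_model_fp16.plan")
    print("engine built" if ok else "engine build failed")
```

With the --fp16 path, build_serialized_network returns None here without any preceding error message, which matches the silent failure seen with trtexec.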
Have you tried the latest release?: yes
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): yes, it runs correctly with Polygraphy.