'ValueError: only one element tensors can be converted to Python scalars' failure of TensorRT 8.2 when running NVIDIA DALI on GPU V100 #3597
Labels
triaged
Issue has been triaged by maintainers
Description
Hi Nvidia Team,
I'm testing a DNN workflow (SSD + ResNet50) with TensorRT and NVIDIA DALI (for data preprocessing). I converted a pretrained PyTorch SSD model to ONNX format and loaded it to build a TensorRT engine.
However, when running SSD inference, I do not know how to feed another framework's data into TensorRT's input. For example, how do I convert NVIDIA DALI's nvidia.dali.backend_impl.TensorGPU into an input for tensorrt.tensorrt.IExecutionContext.execute()? If that is difficult, is it possible to use a torch.Tensor as input data for TensorRT?
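In case it helps: TensorRT's execution context consumes raw device pointers, so both PyTorch and DALI tensors can be bound without an extra copy. A torch.Tensor exposes its address via data_ptr(), and a DALI TensorGPU can be copied into a torch CUDA tensor with nvidia.dali.plugin.pytorch.feed_ndarray. Below is a minimal sketch of the pointer-binding pattern, shown with CPU tensors since the idea is identical; the commented-out lines assume an already-built engine, CUDA tensors, and a CUDA stream:

```python
import torch

# Input/output buffers; in the real pipeline these would be .cuda() tensors
# shaped to match the engine's bindings.
inp = torch.randn(1, 3, 300, 300)
out = torch.empty(1, 8, 300, 300)

# TensorRT reads memory linearly, so the input must be contiguous.
inp = inp.contiguous()

# The binding list is just the integer memory addresses, in binding order.
bindings = [int(inp.data_ptr()), int(out.data_ptr())]

# With a built engine and a CUDA stream, this would run inference:
# context.execute_async_v2(bindings=bindings, stream_handle=stream.cuda_stream)
# stream.synchronize()
```

(As an aside, the "only one element tensors can be converted to Python scalars" error from the title is what PyTorch raises when a multi-element tensor is passed somewhere a Python scalar is expected, e.g. `int(tensor)` on a whole batch, which is worth checking at the point the bindings are assembled.)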
Environment
TensorRT Version: 8.2.5.1
NVIDIA GPU: V100
NVIDIA Driver Version: 520.61.05
CUDA Version: 11.8
CUDNN Version: 8401
Operating System: 4.15.0-45-generic #48-Ubuntu
Python Version (if applicable): 3.8.13
Tensorflow Version (if applicable): N/A
PyTorch Version (if applicable): 1.13.0
Baremetal or Container (if so, version): nvcr.io/nvidia/pytorch:22.06-py3
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?: no
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (
polygraphy run <model.onnx> --onnxrt
): not tested