
Error using C# TensorRT EP built from source #8367

Open
mrljwlm opened this issue Jul 13, 2021 · 9 comments
Labels
ep:TensorRT issues related to TensorRT execution provider

Comments

@mrljwlm

mrljwlm commented Jul 13, 2021

I built ONNX Runtime with TensorRT from the master branch using the following command:
.\build.bat --config Release --build_nuget --parallel --build_shared_lib --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0" --use_tensorrt --tensorrt_home "I:\python-tensorflow-pytorch安装包\TensorRT-7.2.2.3" --cuda_version 11.0 --cmake_generator "Visual Studio 16 2019" --skip_tests

This generated the following two nupkgs:
Microsoft.ML.OnnxRuntime.Managed.1.8.0-dev-20210711-0335-b7c9696ac.nupkg
Microsoft.ML.OnnxRuntime.TensorRT.1.8.0-dev-20210711-0335-b7c9696ac.nupkg

I installed these two nupkgs in Visual Studio and ran the following code:


using System;
using Microsoft.ML.OnnxRuntime;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("start testing onnxruntime TensorRT");
            string filename = "model.onnx";
            Console.WriteLine(filename);
            // Create a session on the TensorRT execution provider (device 0);
            // dispose the options and session when done.
            using (var options = SessionOptions.MakeSessionOptionWithTensorrtProvider(0))
            using (var session = new InferenceSession(filename, options))
            {
            }
        }
    }
}

The following error occurred:

Unhandled Exception: System.TypeInitializationException: The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception. ---> System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
at Microsoft.ML.OnnxRuntime.NativeMethods.OrtGetApiBase()
at Microsoft.ML.OnnxRuntime.NativeMethods..cctor()
--- End of inner exception stack trace ---
at Microsoft.ML.OnnxRuntime.SessionOptions..ctor()
at Microsoft.ML.OnnxRuntime.SessionOptions.MakeSessionOptionWithTensorrtProvider(Int32 deviceId)
at ConsoleApp1.Program.Main(String[] args) in D:\DNN_GPU_cuda\ConsoleApp1\ConsoleApp1\Program.cs:line 28
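As a side note for readers hitting the same trace: `System.BadImageFormatException` with HRESULT 0x8007000B almost always means an architecture mismatch, e.g. a 32-bit (x86, or AnyCPU with "Prefer 32-bit") process trying to load the x64 native onnxruntime.dll. A minimal sketch to check which way your process is running:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // The native onnxruntime.dll produced by the x64 build can only be
        // loaded by a 64-bit process; a 32-bit process loading it throws
        // BadImageFormatException (HRESULT 0x8007000B).
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        Console.WriteLine($"64-bit OS:      {Environment.Is64BitOperatingSystem}");
    }
}
```

If this prints `64-bit process: False`, set the project's Platform target to x64 (or uncheck "Prefer 32-bit") and rebuild.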

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04) : win10
  • ONNX Runtime installed from (source or binary): master source branch
  • ONNX Runtime version: 1.8.0
  • Python version: 3.8
  • Visual Studio version (if applicable): vs2017
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 11.0 / 8.0
  • GPU model and memory: RTX3090
@yuslepukhin yuslepukhin added the ep:TensorRT issues related to TensorRT execution provider label Jul 13, 2021
@yuslepukhin
Member

Seems that one of the binaries that should be loaded (and there are many of them) got corrupted. I see that you passed --skip_tests. Running the tests would be a good indicator.

@jywu-msft
Member

+@chilo-ms for assistance

@mrljwlm
Author

mrljwlm commented Jul 14, 2021

Seems that one of the binaries that should be loaded (and there are many of them) got corrupted. I see that you passed --skip_tests. Running the tests would be a good indicator.

Running the tests produced errors; part of the output follows:

...
1: [ OK ] QLinearConvTest.Conv2D_U8S8_Depthwise (39 ms)
1: [ RUN ] QLinearConvTest.Conv2D_U8U8_Depthwise
1: [ OK ] QLinearConvTest.Conv2D_U8U8_Depthwise (26 ms)
1: [ RUN ] QLinearConvTest.Conv2D_U8S8_DepthwisePointwise
1: Unsupported ONNX data type: UINT8 (2)
...
1: 2021-07-13 16:08:44.1796449 [E:onnxruntime:Cast:Cast, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Cast node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:44.1796770 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running Cast node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running Cast node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Google Test trace:
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\cpu\tensor\cast_op_test.cc(99): Cast from type 9 to type 16
[the identical cudaErrorInvalidDeviceFunction failure repeats for every remaining cast pair: from types 1, 11, 2, 4, 12, 13, 3, 5, 6, 7, and 10 to type 16, and from type 16 to types 9, 1, 11, 2, 4, 12, 13, 3, 5, 6, 7, 10, and 16]
1: [ FAILED ] CastOpTest.NonStringTypes (632 ms)
1: [ RUN ] CastOpTest.FromString
1: [ OK ] CastOpTest.FromString (1 ms)
1: [ RUN ] CastOpTest.ToString
1: [ OK ] CastOpTest.ToString (1 ms)
1: [----------] 3 tests from CastOpTest (635 ms total)
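`cudaErrorInvalidDeviceFunction` on every CUDA test is the classic symptom of a build whose kernels were not compiled for the GPU's compute capability: the RTX 3090 is sm_86, and CUDA 11.0 does not target sm_86 (support arrived in CUDA 11.1), so the binaries contain no SASS (nor, apparently, forward-compatible PTX) the driver can run on that card. A hedged sketch of a rebuild, assuming CUDA 11.1 is installed and that build.bat forwards --cmake_extra_defines to CMake (the flag combination below is illustrative, not a verified fix):

```shell
:: Rebuild against CUDA >= 11.1 and explicitly request sm_86 kernels
:: (CMAKE_CUDA_ARCHITECTURES=86) so the RTX 3090 has matching device code.
.\build.bat --config Release --build_nuget --parallel --build_shared_lib ^
  --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1" ^
  --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1" ^
  --use_tensorrt --tensorrt_home "I:\python-tensorflow-pytorch安装包\TensorRT-7.2.2.3" ^
  --cuda_version 11.1 --cmake_generator "Visual Studio 16 2019" ^
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=86
```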
...
1: [ RUN ] ConcatOpTest.Concat3D_3
1: [ OK ] ConcatOpTest.Concat3D_3 (173 ms)
1: [ RUN ] ConcatOpTest.Concat4D_1
1: [ OK ] ConcatOpTest.Concat4D_1 (238 ms)
1: [ RUN ] ConcatOpTest.Concat4D_1_negative_axis
1: [ OK ] ConcatOpTest.Concat4D_1_negative_axis (237 ms)
1: [ RUN ] ConcatOpTest.Concat4D_2
1: [ OK ] ConcatOpTest.Concat4D_2 (238 ms)
1: [----------] 16 tests from ConcatOpTest (2784 ms total)
1:
1: [ OK ] GatherNDOpTest.float (10 ms)
1: [ RUN ] GatherNDOpTest.double
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6587210 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: 2021-07-13 16:08:49.6619009 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.6619514 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6628557 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6634515 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6640736 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: 2021-07-13 16:08:49.6670119 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.6670679 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6680455 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.6686941 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ FAILED ] GatherNDOpTest.double (10 ms)
1: [ RUN ] GatherNDOpTest.int8_t
1: 2021-07-13 16:08:49.6694056 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:08:48 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherND version 1
...
1: [ RUN ] GatherNDOpTest.GatherND_negative_slice_float_batch_dims_two
1: 2021-07-13 16:08:49.7105507 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:08:48 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherND version 1
1: 2021-07-13 16:08:49.7105963 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] GatherNDOpTest.GatherND_negative_slice_float_batch_dims_two (3 ms)
1: [ RUN ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_1
1: 2021-07-13 16:08:49.7135315 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.7135830 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.7145460 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_1 (3 ms)
1: [ RUN ] GatherNDOpTest.GatherND_slice_double_default_batch_dims
1: 2021-07-13 16:08:49.7175404 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.7175887 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.7185019 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_default_batch_dims (3 ms)
1: [ RUN ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_2
1: 2021-07-13 16:08:49.7214099 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.7214609 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: Unsupported ONNX data type: DOUBLE (11)
1: 2021-07-13 16:08:49.7223700 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_2 (3 ms)
1: [ RUN ] GatherNDOpTest.GatherND_slice_half
1: 2021-07-13 16:08:49.7252292 [E:onnxruntime:GatherND:GatherND, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.7252776 [E:onnxruntime:Default, provider_test_utils.cc:667 onnxruntime::test::OpTester::ExecuteModel] Run failed with status: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: E:\onnxruntime1.81_TensorRT\onnxruntime\test\providers\provider_test_utils.cc(669): error: Value of: status.IsOK()
1: Actual: false
1: Expected: true
1: Non-zero status code returned while running GatherND node. Name:'node1' Status Message: CUDA error cudaErrorInvalidDeviceFunction:invalid device function
1: 2021-07-13 16:08:49.7261580 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:08:48 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherND version 1
1: 2021-07-13 16:08:49.7262025 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ FAILED ] GatherNDOpTest.GatherND_slice_half (3 ms)
1: [ RUN ] GatherNDOpTest.GatherND_batch_dims_of_2
1: 2021-07-13 16:08:49.7268600 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:08:48 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherND version 1
1: 2021-07-13 16:08:49.7269038 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] GatherNDOpTest.GatherND_batch_dims_of_2 (0 ms)
1: [ RUN ] GatherNDOpTest.GatherND_slice_int64_t
1: 2021-07-13 16:08:49.7305967 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:08:48 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherND version 1
1: 2021-07-13 16:08:49.7306459 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] GatherNDOpTest.GatherND_slice_int64_t (3 ms)
1: [----------] 23 tests from GatherNDOpTest (133 ms total)
...
1: [----------] 8 tests from InternalTestingEP
1: [ RUN ] InternalTestingEP.TestSortResultsInSinglePartition
1: [ OK ] InternalTestingEP.TestSortResultsInSinglePartition (9 ms)
1: [ RUN ] InternalTestingEP.TestDependenciesCorrectlyHandled
1: [ OK ] InternalTestingEP.TestDependenciesCorrectlyHandled (2 ms)
1: [ RUN ] InternalTestingEP.TestSaveAndLoadOrtModel
1: 2021-07-13 16:09:14.3100491 [W:onnxruntime:, inference_session.cc:1303 onnxruntime::InferenceSession::Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in.
1: [ OK ] InternalTestingEP.TestSaveAndLoadOrtModel (12 ms)
1: [ RUN ] InternalTestingEP.PreventSaveOfModelWithCompiledOps
1: [ OK ] InternalTestingEP.PreventSaveOfModelWithCompiledOps (7 ms)
1: [ RUN ] InternalTestingEP.TestLoadOrtModel
1: [ OK ] InternalTestingEP.TestLoadOrtModel (1 ms)
1: [ RUN ] InternalTestingEP.TestLoadOrtModelWithReducedOpCoverage
1: [ OK ] InternalTestingEP.TestLoadOrtModelWithReducedOpCoverage (1 ms)
1: [ RUN ] InternalTestingEP.TestModelWithSubgraph
1: [ OK ] InternalTestingEP.TestModelWithSubgraph (33 ms)
1: [ RUN ] InternalTestingEP.TestOrtModelWithCompileFailure
1: 2021-07-13 16:09:14.4713613 [E:onnxruntime:Default, graph_partitioner.cc:459 onnxruntime::PartitionOrtFormatModelImpl] EP: InternalTestingExecutionProvider has Compile error: CompileFailureTestExecutionProvider::Compile failed for node: gemm
1: [ OK ] InternalTestingEP.TestOrtModelWithCompileFailure (108 ms)
1: [----------] 8 tests from InternalTestingEP (175 ms total)
1:
1: [----------] 3 tests from RandomTest
1: [ RUN ] RandomTest.RandomSeedTest
1: [ OK ] RandomTest.RandomSeedTest (0 ms)
1: [ RUN ] RandomTest.RandomGeneratorTest
1: [ OK ] RandomTest.RandomGeneratorTest (0 ms)
1: [ RUN ] RandomTest.PhiloxGeneratorTest
1: [ OK ] RandomTest.PhiloxGeneratorTest (0 ms)
1: [----------] 3 tests from RandomTest (0 ms total)
1:
1: [----------] 18 tests from ActivationOpTest
1: [ RUN ] ActivationOpTest.ThresholdedRelu_version_1_to_9
1: 2021-07-13 16:09:14.4751557 [W:onnxruntime:ThresholdedRelu, model.cc:139 onnxruntime::Model::Model] ONNX Runtime only guarantees support for models stamped with opset version 7 or above for opset domain 'ai.onnx'. Please upgrade your model to opset 7 or higher. For now, this opset 1 model may run depending upon legacy support of some older opset version operators.
...
1: [ OK ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/4 (0 ms)
1: [ RUN ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/5
1: 2021-07-13 16:09:58.2537134 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:09:56 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin LinearRegressor version 1
1: 2021-07-13 16:09:58.2537549 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/5 (0 ms)
1: [ RUN ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/6
1: 2021-07-13 16:09:58.2543503 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:09:56 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin LinearRegressor version 1
1: 2021-07-13 16:09:58.2543917 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/6 (0 ms)
1: [ RUN ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/7
1: 2021-07-13 16:09:58.2550071 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-13 08:09:56 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin LinearRegressor version 1
1: 2021-07-13 16:09:58.2550501 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider
1: [ OK ] LinearRegressorTest/LinearRegressorTest.LinearRegressorUniTarget/7 (0 ms)
1: [----------] 8 tests from LinearRegressorTest/LinearRegressorTest (5 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 2846 tests from 215 test suites ran. (247002 ms total)
1: [ PASSED ] 2826 tests.
1: [ SKIPPED ] 10 tests, listed below:
1: [ SKIPPED ] InferenceSessionTests.TestLenientShapeInferencing
1: [ SKIPPED ] SoftmaxOperator.InvalidAxis_opset13
1: [ SKIPPED ] SoftmaxOperator.DimWithZero
1: [ SKIPPED ] ConvTest.Conv1D_Invalid_Input_Shape
1: [ SKIPPED ] ConvTest.Conv2D_Invalid_Input_Shape
1: [ SKIPPED ] TfIdfVectorizerTest.Int32_TF_onlyBigrams_Skip0_Empty_Dim1Fail
1: [ SKIPPED ] TfIdfVectorizerTest.Int32_TF_onlyBigrams_Skip0_Empty_Dim2
1: [ SKIPPED ] TfIdfVectorizerTest.Int32_TF_onlyBigrams_Skip01_Empty_Dim2
1: [ SKIPPED ] TensorOpTest.Unsqueeze_Duplicate
1: [ SKIPPED ] TensorOpTest.Unsqueeze_OutOfRange
1: [ FAILED ] 10 tests, listed below:
1: [ FAILED ] EmbedLayerNormTest.EmbedLayerNormBatch1_Float16
1: [ FAILED ] FastGeluTest.FastGeluWithBiasFloat16
1: [ FAILED ] FastGeluTest.FastGeluWithoutBiasFloat16
1: [ FAILED ] SkipLayerNormTest.SkipLayerNormBatch1_Float16
1: [ FAILED ] CastOpTest.NonStringTypes
1: [ FAILED ] GatherNDOpTest.double
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_1
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_default_batch_dims
1: [ FAILED ] GatherNDOpTest.GatherND_slice_double_batch_dims_one_2
1: [ FAILED ] GatherNDOpTest.GatherND_slice_half
1:
1: 10 FAILED TESTS
1: YOU HAVE 7 DISABLED TESTS
1:
1/6 Test #1: onnxruntime_test_all ...................***Failed 247.95 sec
test 2
Start 2: onnx_test_pytorch_converted

2: Test command: E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnx_test_runner.exe "E:/onnxruntime1.81_TensorRT/cmake/external/onnx/onnx/backend/test/data/pytorch-converted"
2: Test timeout computed to be: 7200
2: 2021-07-13 16:09:57.5911526 [E:onnxruntime:Default, testcase_driver.cc:39 onnxruntime::test::TestCaseDriver::RunParallel] Running tests in parallel: at most 8 models at any time
2: 2021-07-13 16:09:57.7096358 [E:onnxruntime:Default, testcase_driver.cc:61 onnxruntime::test::TestCaseDriver::RunModelsAsync] Running tests finished. Generating report
2: result:
2: Models: 59
2: Total test cases: 59
2: Succeeded: 59
2: Not implemented: 0
2: Failed: 0
2: Stats by Operator type:
2: Not implemented(0):
2: Failed:
2: Failed Test Cases:
2/6 Test #2: onnx_test_pytorch_converted ............ Passed 0.19 sec
test 3
Start 3: onnx_test_pytorch_operator

3: Test command: E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnx_test_runner.exe "E:/onnxruntime1.81_TensorRT/cmake/external/onnx/onnx/backend/test/data/pytorch-operator"
3: Test timeout computed to be: 7200
3: 2021-07-13 16:09:57.7505446 [E:onnxruntime:Default, testcase_driver.cc:39 onnxruntime::test::TestCaseDriver::RunParallel] Running tests in parallel: at most 8 models at any time
3: 2021-07-13 16:09:57.8387637 [E:onnxruntime:Default, testcase_driver.cc:61 onnxruntime::test::TestCaseDriver::RunModelsAsync] Running tests finished. Generating report
3: result:
3: Models: 24
3: Total test cases: 24
3: Succeeded: 24
3: Not implemented: 0
3: Failed: 0
3: Stats by Operator type:
3: Not implemented(0):
3: Failed:
3: Failed Test Cases:
3/6 Test #3: onnx_test_pytorch_operator ............. Passed 0.13 sec
test 4
Start 4: onnxruntime_shared_lib_test

4: Test command: E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_shared_lib_test.exe "--gtest_output=xml:E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_shared_lib_test.exe.Release.results.xml"
4: Test timeout computed to be: 7200
4: [==========] Running 49 tests from 3 test suites.
4: [----------] Global test environment set-up.
4: [----------] 43 tests from CApiTest
4: [ RUN ] CApiTest.session_options_graph_optimization_level
4: [ OK ] CApiTest.session_options_graph_optimization_level (0 ms)
4: [ RUN ] CApiTest.run_options
4: [ OK ] CApiTest.run_options (0 ms)
4: [ RUN ] CApiTest.allocation_info
4: [ OK ] CApiTest.allocation_info (0 ms)
4: [ RUN ] CApiTest.DefaultAllocator
4: [ OK ] CApiTest.DefaultAllocator (0 ms)
4: [ RUN ] CApiTest.CreateGetVectorOfMapsInt64Float
4: [ OK ] CApiTest.CreateGetVectorOfMapsInt64Float (0 ms)
4: [ RUN ] CApiTest.CreateGetVectorOfMapsStringFloat
4: [ OK ] CApiTest.CreateGetVectorOfMapsStringFloat (0 ms)
4: [ RUN ] CApiTest.TypeInfoMap
4: [ OK ] CApiTest.TypeInfoMap (0 ms)
4: [ RUN ] CApiTest.CreateGetSeqTensors
4: [ OK ] CApiTest.CreateGetSeqTensors (0 ms)
4: [ RUN ] CApiTest.CreateGetSeqStringTensors
4: [ OK ] CApiTest.CreateGetSeqStringTensors (0 ms)
4: [ RUN ] CApiTest.TypeInfoSequence
4: [ OK ] CApiTest.TypeInfoSequence (0 ms)
4: [ RUN ] CApiTest.model_from_array
4: [ OK ] CApiTest.model_from_array (1417 ms)
4: [ RUN ] CApiTest.dim_param
4: [ OK ] CApiTest.dim_param (5 ms)
4: [ RUN ] CApiTest.custom_op_handler
4: Running custom op inference
4: Running simple inference with cuda provider
4: [ OK ] CApiTest.custom_op_handler (5 ms)
4: [ RUN ] CApiTest.varied_input_custom_op_handler
4: Running simple inference with cuda provider
4: [ OK ] CApiTest.varied_input_custom_op_handler (6 ms)
4: [ RUN ] CApiTest.multiple_varied_input_custom_op_handler
4: [ OK ] CApiTest.multiple_varied_input_custom_op_handler (8 ms)
4: [ RUN ] CApiTest.optional_input_output_custom_op_handler
4: [ OK ] CApiTest.optional_input_output_custom_op_handler (6 ms)
4: [ RUN ] CApiTest.custom_op_with_attributes_handler
4: [ OK ] CApiTest.custom_op_with_attributes_handler (1 ms)
4: [ RUN ] CApiTest.RegisterCustomOpForCPUAndCUDA
4: Tests registration of a custom op of the same name for both CPU and CUDA EPs
4: Running simple inference with cuda provider
4: 2021-07-13 16:09:59.4242993 [W:onnxruntime:Default, schema_registry.cc:78 onnxruntime::OnnxRuntimeOpSchemaRegistry::RegisterOpSchemaInternal] Trying to register schema with name Foo (domain: version: 1) from file custom op registered at runtime line 0, but it is already registered from file custom op registered at runtime line 0
4:
4: [ OK ] CApiTest.RegisterCustomOpForCPUAndCUDA (4 ms)
4: [ RUN ] CApiTest.test_custom_op_library
4: Running inference using custom op shared library
4: Running simple inference with default provider
4: [ OK ] CApiTest.test_custom_op_library (5 ms)
4: [ RUN ] CApiTest.get_allocator_cpu
4: [ OK ] CApiTest.get_allocator_cpu (1 ms)
4: [ RUN ] CApiTest.get_allocator_cuda
4: [ OK ] CApiTest.get_allocator_cuda (4 ms)
4: [ RUN ] CApiTest.io_binding
4: [ OK ] CApiTest.io_binding (1 ms)
4: [ RUN ] CApiTest.io_binding_cuda
4: [ OK ] CApiTest.io_binding_cuda (525 ms)
4: [ RUN ] CApiTest.create_tensor
4: [ OK ] CApiTest.create_tensor (0 ms)
4: [ RUN ] CApiTest.fill_string_tensor
4: [ OK ] CApiTest.fill_string_tensor (0 ms)
4: [ RUN ] CApiTest.get_string_tensor_element
4: [ OK ] CApiTest.get_string_tensor_element (0 ms)
4: [ RUN ] CApiTest.create_tensor_with_data
4: [ OK ] CApiTest.create_tensor_with_data (0 ms)
4: [ RUN ] CApiTest.create_tensor_with_data_float16
4: [ OK ] CApiTest.create_tensor_with_data_float16 (0 ms)
4: [ RUN ] CApiTest.create_tensor_with_data_bfloat16
4: [ OK ] CApiTest.create_tensor_with_data_bfloat16 (0 ms)
4: [ RUN ] CApiTest.access_tensor_data_elements
4: [ OK ] CApiTest.access_tensor_data_elements (0 ms)
4: [ RUN ] CApiTest.override_initializer
4: 2021-07-13 16:09:59.9722955 [W:onnxruntime:, graph.cc:1077 onnxruntime::Graph::Graph] Initializer F1 appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
4: [ OK ] CApiTest.override_initializer (5 ms)
4: [ RUN ] CApiTest.end_profiling
4: [ OK ] CApiTest.end_profiling (2 ms)
4: [ RUN ] CApiTest.get_profiling_start_time
4: [ OK ] CApiTest.get_profiling_start_time (2 ms)
4: [ RUN ] CApiTest.model_metadata
4: [ OK ] CApiTest.model_metadata (2 ms)
4: [ RUN ] CApiTest.get_available_providers
4: [ OK ] CApiTest.get_available_providers (0 ms)
4: [ RUN ] CApiTest.get_available_providers_cpp
4: [ OK ] CApiTest.get_available_providers_cpp (0 ms)
4: [ RUN ] CApiTest.TestSharedAllocatorUsingCreateAndRegisterAllocator
4: [ OK ] CApiTest.TestSharedAllocatorUsingCreateAndRegisterAllocator (2 ms)
4: [ RUN ] CApiTest.TestSharingOfInitializerAndItsPrepackedVersion
4: [ OK ] CApiTest.TestSharingOfInitializerAndItsPrepackedVersion (2 ms)
4: [ RUN ] CApiTest.TestIncorrectInputTypeToModel_Tensors
4: [ OK ] CApiTest.TestIncorrectInputTypeToModel_Tensors (1 ms)
4: [ RUN ] CApiTest.TestIncorrectInputTypeToModel_SequenceTensors
4: [ OK ] CApiTest.TestIncorrectInputTypeToModel_SequenceTensors (4 ms)
4: [ RUN ] CApiTest.AllocateInitializersFromNonArenaMemory
4: [ OK ] CApiTest.AllocateInitializersFromNonArenaMemory (5 ms)
4: [ RUN ] CApiTest.ConfigureCudaArenaAndDemonstrateMemoryArenaShrinkage
4: [ OK ] CApiTest.ConfigureCudaArenaAndDemonstrateMemoryArenaShrinkage (4 ms)
4: [ RUN ] CApiTest.TestConfigureTensorRTProviderOptions
4: [ OK ] CApiTest.TestConfigureTensorRTProviderOptions (285 ms)
4: [----------] 43 tests from CApiTest (2316 ms total)
4:
4: [----------] 1 test from OrtFormatCustomOpTests
4: [ RUN ] OrtFormatCustomOpTests.ConvertOnnxModelToOrt
4: 2021-07-13 16:10:00.2918838 [W:onnxruntime:CustomOp, inference_session.cc:1303 onnxruntime::InferenceSession::Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in.
4: [ OK ] OrtFormatCustomOpTests.ConvertOnnxModelToOrt (11 ms)
4: [----------] 1 test from OrtFormatCustomOpTests (11 ms total)
4:
4: [----------] 5 tests from CApiTestWithProviders/CApiTestWithProvider
4: [ RUN ] CApiTestWithProviders/CApiTestWithProvider.simple/0
4: Running simple inference with default provider
4: [ OK ] CApiTestWithProviders/CApiTestWithProvider.simple/0 (1 ms)
4: [ RUN ] CApiTestWithProviders/CApiTestWithProvider.simple/1
4: Running simple inference with cuda provider
4: [ OK ] CApiTestWithProviders/CApiTestWithProvider.simple/1 (4 ms)
4: [ RUN ] CApiTestWithProviders/CApiTestWithProvider.simple/2
4: [ OK ] CApiTestWithProviders/CApiTestWithProvider.simple/2 (0 ms)
4: [ RUN ] CApiTestWithProviders/CApiTestWithProvider.simple/3
4: [ OK ] CApiTestWithProviders/CApiTestWithProvider.simple/3 (0 ms)
4: [ RUN ] CApiTestWithProviders/CApiTestWithProvider.simple/4
4: Running simple inference with default provider
4: [ OK ] CApiTestWithProviders/CApiTestWithProvider.simple/4 (1 ms)
4: [----------] 5 tests from CApiTestWithProviders/CApiTestWithProvider (7 ms total)
4:
4: [----------] Global test environment tear-down
4: [==========] 49 tests from 3 test suites ran. (2335 ms total)
4: [ PASSED ] 49 tests.
4/6 Test #4: onnxruntime_shared_lib_test ............ Passed 2.84 sec
test 5
Start 5: onnxruntime_global_thread_pools_test

5: Test command: E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_global_thread_pools_test.exe "--gtest_output=xml:E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_global_thread_pools_test.exe.Release.results.xml"
5: Test timeout computed to be: 7200
5: [==========] Running 15 tests from 1 test suite.
5: [----------] Global test environment set-up.
5: [----------] 15 tests from CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/0
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/0 (27 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/1
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/1 (2032 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/2
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/2 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/3
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/3 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple/4
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/0 (40 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/1
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/1 (95 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/2
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/2 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/3
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/3 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/4
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple2/4 (35 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/0
5: Running simple inference with default provider
5: Running simple inference with default provider
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/0 (39 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/1
5: Running simple inference with cuda provider
5: Running simple inference with cuda provider
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/1 (94 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/2
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/2 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/3
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/3 (0 ms)
5: [ RUN ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/4
5: Running simple inference with default provider
5: [ OK ] CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider.simple3/4 (34 ms)
5: [----------] 15 tests from CApiTestGlobalThreadPoolsWithProviders/CApiTestGlobalThreadPoolsWithProvider (2422 ms total)
5: [----------] Global test environment tear-down
5: [==========] 15 tests from 1 test suite ran. (2422 ms total)
5: [ PASSED ] 15 tests.
5/6 Test #5: onnxruntime_global_thread_pools_test ... Passed 3.22 sec
test 6
Start 6: onnxruntime_api_tests_without_env

6: Test command: E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_api_tests_without_env.exe "--gtest_output=xml:E:\onnxruntime1.81_TensorRT\build\Windows\Release\Release\onnxruntime_api_tests_without_env.exe.Release.results.xml"
6: Test timeout computed to be: 7200
6: [==========] Running 1 test from 1 test suite.
6: [----------] Global test environment set-up.
6: [----------] 1 test from TestSessionOptions
6: [ RUN ] TestSessionOptions.SetIntraOpNumThreadsWithoutEnv
6: [ OK ] TestSessionOptions.SetIntraOpNumThreadsWithoutEnv (0 ms)
6: [----------] 1 test from TestSessionOptions (0 ms total)
6:
6: [----------] Global test environment tear-down
6: [==========] 1 test from 1 test suite ran. (0 ms total)
6: [ PASSED ] 1 test.
6/6 Test #6: onnxruntime_api_tests_without_env ...... Passed 0.01 sec

83% tests passed, 1 tests failed out of 6

Total Test time (real) = 254.36 sec

The following tests FAILED:
1 - onnxruntime_test_all (Failed)
Errors while running CTest
Output from these tests are in: E:/onnxruntime1.81_TensorRT/build/Windows/Release/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
Traceback (most recent call last):
File "E:\onnxruntime1.81_TensorRT\tools\ci_build\build.py", line 2199, in
sys.exit(main())
File "E:\onnxruntime1.81_TensorRT\tools\ci_build\build.py", line 2126, in main
run_onnxruntime_tests(args, source_dir, ctest_path, build_dir, configs)
File "E:\onnxruntime1.81_TensorRT\tools\ci_build\build.py", line 1475, in run_onnxruntime_tests
run_subprocess(ctest_cmd, cwd=cwd, dll_path=dll_path)
File "E:\onnxruntime1.81_TensorRT\tools\ci_build\build.py", line 593, in run_subprocess
return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
File "E:\onnxruntime1.81_TensorRT\tools\python\util\run.py", line 44, in run
env=env, shell=shell)
File "C:\Program Files\Python\Python36\lib\subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['C:\Program Files\CMake\bin\ctest.EXE', '--build-config', 'Release', '--verbose', '--timeout', '7200']' returned non-zero exit status 8.

@chilo-ms
Contributor

Could you make sure that both the NuGet package (DLLs) and your application were built for the same architecture? For example, both should be 64-bit or 32-bit.

I would suggest using 64-bit, as well as switching to VS2019, since we have tested under that environment.
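One quick way to confirm the bitness of the DLLs the NuGet package unpacked is to read the machine field of the PE header. A minimal sketch (the offsets follow the standard PE/COFF layout; the DLL path in the usage comment is hypothetical):

```python
import struct

# IMAGE_FILE_HEADER machine values from the PE/COFF specification
MACHINE_NAMES = {0x014C: "x86 (32-bit)", 0x8664: "x64 (64-bit)", 0xAA64: "ARM64"}

def pe_machine(data: bytes) -> str:
    """Return the target architecture of a PE file given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew at offset 0x3C points to the PE signature
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    machine = struct.unpack_from("<H", data, pe_off + 4)[0]
    return MACHINE_NAMES.get(machine, hex(machine))

# Usage (hypothetical path):
#   pe_machine(open("onnxruntime.dll", "rb").read())
```

If the DLL reports x64 but the application is built as AnyCPU with "Prefer 32-bit" enabled, the process will still load as 32-bit and fail to load the native library.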

@mrljwlm
Author

mrljwlm commented Jul 14, 2021

Could you make sure that both the NuGet package (DLLs) and your application were built for the same architecture? For example, both should be 64-bit or 32-bit.

I would suggest using 64-bit, as well as switching to VS2019, since we have tested under that environment.

Thank you for your suggestion. I set the architecture of my application to 64-bit and the above error disappeared, but another error occurred, as below:

2021-07-14 16:57:28.0429351 [E:onnxruntime:CSharpOnnxRuntime, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-14 08:57:28 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1

Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at Microsoft.ML.OnnxRuntime.SessionOptions.ReleaseHandle()
at System.Runtime.InteropServices.SafeHandle.InternalFinalize()
at System.Runtime.InteropServices.SafeHandle.Finalize()

@chilo-ms
Contributor

From this discussion, it seems TensorRT doesn't support ScatterND yet.
If possible, could you share the model so that we can take a closer look?

@mrljwlm
Author

mrljwlm commented Jul 15, 2021

From this discussion, it seems TensorRT doesn't support ScatterND yet.
If possible, could you share the model so that we can take a closer look?

Thanks. I'm using this model, exported from yolov5s:
https://raw.githubusercontent.com/mrljwlm/hellow-world/master/yolov5s.rar

UPDATE:
I tried another model, an instance segmentation model called YOLACT. I ran it through symbolic_shape_infer.py with --auto-merge, and I got a different set of errors:

2021-07-15 10:30:19.4742416 [E:onnxruntime:CSharpOnnxRuntime, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-15 02:30:19 ERROR] E:\onnxruntime1.81_TensorRT\cmake\external\onnx-tensorrt\onnx2trt_utils.cpp:475: Found unsupported datatype (11) when importing initializer: 1029
2021-07-15 10:30:19.4750506 [E:onnxruntime:CSharpOnnxRuntime, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-15 02:30:19 ERROR] E:\onnxruntime1.81_TensorRT\cmake\external\onnx-tensorrt\onnx2trt_utils.cpp:475: Found unsupported datatype (11) when importing initializer: 1029
2021-07-15 10:30:19.4759539 [E:onnxruntime:CSharpOnnxRuntime, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-15 02:30:19 ERROR] E:\onnxruntime1.81_TensorRT\cmake\external\onnx-tensorrt\onnx2trt_utils.cpp:475: Found unsupported datatype (11) when importing initializer: 1029
2021-07-15 10:30:19.8297725 [W:onnxruntime:CSharpOnnxRuntime, tensorrt_execution_provider.cc:1082 onnxruntime::TensorrtExecutionProvider::GetCapability] [TensorRT EP] No graph will run on TensorRT exeuction provider

Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at Microsoft.ML.OnnxRuntime.SessionOptions.ReleaseHandle()
at System.Runtime.InteropServices.SafeHandle.InternalFinalize()
at System.Runtime.InteropServices.SafeHandle.Finalize()
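For reference, the "unsupported datatype (11)" in that log refers to the ONNX TensorProto datatype codes from onnx.proto, where 11 is DOUBLE, a type the onnx-tensorrt importer cannot handle for initializers. A small lookup table makes the log readable (casting the offending float64 initializers to float32, code 1, is a common workaround, though whether it fixes this particular model is untested):

```python
# ONNX TensorProto.DataType codes, as defined in onnx.proto.
# "Found unsupported datatype (11)" in the log above therefore means the
# initializer is DOUBLE (float64).
ONNX_TENSOR_TYPES = {
    1: "FLOAT", 2: "UINT8", 3: "INT8", 4: "UINT16", 5: "INT16",
    6: "INT32", 7: "INT64", 8: "STRING", 9: "BOOL", 10: "FLOAT16",
    11: "DOUBLE", 12: "UINT32", 13: "UINT64",
    14: "COMPLEX64", 15: "COMPLEX128", 16: "BFLOAT16",
}

print(ONNX_TENSOR_TYPES[11])  # the datatype the importer rejected
```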

@snnn
Member

snnn commented Mar 4, 2022

"CUDA error cudaErrorInvalidDeviceFunction: invalid device function": these failures are not a problem with TensorRT. They come from our ONNX Runtime CUDA execution provider.

@snnn
Member

snnn commented Mar 4, 2022

RTX 3090 has a CUDA compute capability of 8.6. That number is not in our CMakeLists.txt, which I suspect is why you saw these "invalid device function" errors, though I don't understand why CUDA didn't fall back to JIT compilation.

You may add "--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=86" to the arguments of build.bat.
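Concretely, the build command from the top of this issue would become the following (paths are the original reporter's own; only the final flag is new):

```shell
.\build.bat --config Release --build_nuget --parallel --build_shared_lib ^
  --cudnn_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0" ^
  --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0" ^
  --use_tensorrt --tensorrt_home "I:\python-tensorflow-pytorch安装包\TensorRT-7.2.2.3" ^
  --cuda_version 11.0 --cmake_generator "Visual Studio 16 2019" --skip_tests ^
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=86
```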
