convert TensorRT model failed #784
Comments
Hi, @gzxy-0102, please update mmdeploy to the latest version, as we just fixed a bug in converting YOLOX to TensorRT.
I am using the latest version.
Just to make sure: #758 was merged 20 hours ago, and it fixed the bug.
Ennnn, is that the git master branch? I am using the 0.6.0 release.
Yes, it was fixed just hours ago. The bug is only triggered for some versions of TensorRT, and 8.4.1.5 is on the list.
OK, got it.
Where did you import the
ennn
I see. It is from the MMDeploy SDK. Would you please show the contents of
This is the log information I get when I call the SDK.
Here are all the files inside the working directory. My operating system is Windows 11.
Please use |
I am using the latest version but still getting an error.
You need to build MMDeploy from source. There is also another quick method if you want to use the prebuilt MMDeploy package: change

```python
stride_w = shift_xx.new_full((shift_xx.shape[0], ),
                             stride_w).to(dtype)
stride_h = shift_xx.new_full((shift_yy.shape[0], ),
                             stride_h).to(dtype)
```

to

```python
stride_w = shift_xx.new_full((feat_h * feat_w, ),
                             stride_w).to(dtype)
stride_h = shift_xx.new_full((feat_h * feat_w, ),
                             stride_h).to(dtype)
```
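For context, here is a small runnable sketch (simplified, assuming a recent PyTorch; the tensor names only mirror the snippet above) of why the patch helps: `feat_h * feat_w` is a Python constant the exporter can fold, while `shift_xx.shape[0]` traces to a dynamic Shape op that some TensorRT versions mishandle.

```python
# Simplified sketch, not the actual mmdet code: both expressions build the
# same tensor in eager mode, but only the second is a constant at export time.
import torch

feat_h, feat_w = 20, 20          # hypothetical feature-map size
stride = 16.0
shift_x = torch.arange(feat_w) * stride
shift_y = torch.arange(feat_h) * stride
shift_yy, shift_xx = torch.meshgrid(shift_y, shift_x, indexing='ij')
shift_xx = shift_xx.reshape(-1)

old = shift_xx.new_full((shift_xx.shape[0],), stride)  # dynamic shape when traced
new = shift_xx.new_full((feat_h * feat_w,), stride)    # constant when traced
assert torch.equal(old, new)
```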
I just compiled MMDeploy from the source code of the master branch.
As the warning in your log says
I will check the compile options and log files.
Cool
When I recompile MMDeploy I get an error saying it can't find loader.cpp.in, but the loader.cpp.in file exists in the cmake directory.
@irexyc Hi, could you kindly give some help?
The problem is due to these lines: https://github.com/open-mmlab/mmdeploy/blob/master/cmake/MMDeploy.cmake#L153-L157 What is your cmake version? It seems like your cmake doesn't detect the right ${CMAKE_CURRENT_FUNCTION_LIST_DIR}.
oh~ my bad
And the vars in the list should be separated by ;
@AllentDan Hi. I rebuilt it, and it still gives the same error.
Here is the new env check info.
Is the reason that pplcv is not installed? I just saw that pplcv needs to be installed.
Please check if this file exists:
There is no mmdeploy_tensorrt_ops.dll in the directory.
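If it helps, here is a hedged sketch for verifying the custom-ops DLL on Windows; the path below is hypothetical and should point at your own build output:

```python
# Hedged sketch: check that the TensorRT custom-ops DLL exists and is loadable.
# The path is hypothetical; adjust it to your MMDeploy build directory.
import ctypes
from pathlib import Path

ops_dll = Path(r'D:\mmdeploy\build\bin\Release\mmdeploy_tensorrt_ops.dll')
if not ops_dll.exists():
    raise FileNotFoundError(
        f'{ops_dll} not found; rebuild with the TensorRT backend enabled')
# CDLL raises OSError if dependent CUDA/cuDNN/TensorRT DLLs cannot be found.
ctypes.CDLL(str(ops_dll))
print('mmdeploy_tensorrt_ops.dll loaded OK')
```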
Please use
And you should separate the vars with a semicolon, like "cpu;cuda".
For the CUDNN_DIR option, can I use the cudatoolkit install directory? The cudatoolkit has cudnn in its directory.
It doesn't work, @irexyc. The CUDA_PATH is my cuda and cudnn install directory.
I probably know the reason. Maybe I didn't select the Visual Studio integration when installing CUDA, so the enable_language method cannot find the CUDA compiler. I don't know much about C++, so sorry for the trouble.
Q: For the CUDNN_DIR option, can I use the cudatoolkit install directory? The cudatoolkit has cudnn in its directory.
Q: The enable_language CUDA error:
The path may be different on your PC. If there are multiple versions like v160, v150, it's better to copy to both. Another link you can refer to:
I have solved this problem by reinstalling CUDA. But how do I use the compiled SDK? Right now I add the directory of the pyd file and the other dll files to the environment variable so I can import it. It always shows that the model is loaded successfully, but it does not continue to execute the detector.
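For reference, a minimal sketch of calling the SDK's Python API on Windows (all paths are hypothetical, and the exact Detector constructor signature may differ between mmdeploy versions). Note that since Python 3.8, dependent DLLs are not found via PATH and must be registered with os.add_dll_directory:

```python
# Hedged sketch: register DLL directories, then run one detection.
import os
os.add_dll_directory(r'C:\mmdeploy\build\install\bin')  # SDK and ops DLLs (hypothetical)
os.add_dll_directory(r'C:\TensorRT-8.4.1.5\lib')        # TensorRT DLLs (hypothetical)

import cv2
from mmdeploy_python import Detector

detector = Detector(r'C:\work_dir', 'cuda', 0)  # model dir, device name, device id
img = cv2.imread('demo.jpg')
bboxes, labels, _ = detector(img)
print(bboxes[:5], labels[:5])
```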
I don't know if you set the path, but you can build the sdk with
Describe the bug
When calling the model conversion from code, the execution fails while converting a PyTorch model to a TensorRT model.
The flow is: FastAPI receives the model conversion request and dispatches it to a huey queue; the huey task code is basically the same as the deploy.py code.
When I gave up on TensorRT and converted to ONNX instead, the conversion worked, but after instantiating the Detector the interface layer blocks there and does not proceed.
There is no error message; apart from a few config info logs there is no other feedback.
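For what it's worth, here is a hypothetical sketch of the pipeline described above (the task name, paths, and queue file are all made up): running the conversion in a subprocess inside the huey task keeps a crash during TensorRT engine building from blocking the FastAPI worker.

```python
# Hypothetical sketch of the described pipeline: FastAPI enqueues, huey runs
# the same conversion as tools/deploy.py in a separate process.
import subprocess
from huey import SqliteHuey

huey = SqliteHuey(filename='convert_queue.db')

@huey.task()
def convert_model(deploy_cfg, model_cfg, checkpoint, img, work_dir):
    # Equivalent to running tools/deploy.py on the command line.
    result = subprocess.run(
        ['python', 'tools/deploy.py', deploy_cfg, model_cfg, checkpoint, img,
         '--work-dir', work_dir, '--device', 'cuda:0'],
        capture_output=True, text=True)
    return result.returncode, result.stdout[-2000:], result.stderr[-2000:]
```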
Reproduction
I did not run the conversion command directly; instead I invoked the deploy code from my own code to perform the conversion. In practice this is no different from running the conversion command, so you can try to reproduce with the following command:
Environment
Error traceback
This is the log of the PyTorch-to-TensorRT conversion: