IIfConditionalOutputLayer inputs must have the same shape. #2846
Comments
Currently we only support this mode. @jhalakp-nvidia, do we have a plan to improve it? Also, I didn't see this documented in our API docs; I think we can improve that.
Sorry, we have no plan to support this.
Hi @dreamkwc, I am facing the same issue. Could you figure out which "if else" branch in the detectron2 source code is causing it? Thanks.
@caruofc I just faced the same issue exporting an ONNX model created with anomalib. The squeeze function in PyTorch implies an if statement: remove the dimension if that shape element is equal to 1. I modified the code to index that case as [:, 0, :], which squeezes the dimension unconditionally. Hope it is similar for detectron2. (See the sketch below.)
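A minimal sketch of the workaround described above, assuming the problematic axis is dimension 1 with size 1; the tensor shape here is hypothetical:

```python
import torch

x = torch.randn(2, 1, 256)  # hypothetical activation with a size-1 axis

# torch.squeeze(x, 1) only removes dim 1 if it equals 1, so tracing it to
# ONNX can emit a data-dependent If node. Plain indexing removes the axis
# unconditionally, so no If node is produced:
y = x[:, 0, :]  # shape (2, 256), same result as squeeze(1) when dim 1 == 1
```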
@cbenitez81, thank you so much for your reply. Your solution makes sense. It would be a great help if you could assist me in identifying the squeeze block I need to change in the detectron2 source code. I put breakpoints on every squeeze call in the detectron2 code, but none of them were hit when I tried to create the ONNX model from the .pth weight file, so I am not sure how to find the code segment that needs modification. Please help. Thanks.
@caruofc sorry for the delay. I haven't tested this, but from your message you see the following:
@cbenitez81, I finally managed to solve my issues. This link https://github.com/NVIDIA/TensorRT/tree/release/8.6/samples/python/detectron2 describes how to get around them with a sample Mask R-CNN model. Basically, I had to modify the "create_onnx.py" sample script to create an NMS node for the EfficientNMS_TRT plugin and replace the output, create a PyramidROIAlign_TRT plugin node to replace the ROIAligns, fold the constants, and modify the Reshape nodes of my ONNX model. Once the ONNX model was converted with the modified "create_onnx.py", I could generate the engine file without any issues. Thank you for your help and valuable feedback. (A rough sketch of that kind of graph surgery follows.)
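For reference, a minimal, untested sketch of the kind of graph surgery the sample performs, using onnx-graphsurgeon. The tensor names, shapes, attribute values, and the way the pre-NMS tensors are located are all assumptions; the real logic lives in the sample's create_onnx.py:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

# Output tensors for the TensorRT EfficientNMS_TRT plugin
# (batch size 1 and 100 max boxes are placeholders).
num_dets = gs.Variable("num_detections", dtype=np.int32, shape=[1, 1])
boxes = gs.Variable("detection_boxes", dtype=np.float32, shape=[1, 100, 4])
scores = gs.Variable("detection_scores", dtype=np.float32, shape=[1, 100])
classes = gs.Variable("detection_classes", dtype=np.int32, shape=[1, 100])

# Hypothetical handles to the pre-NMS box/score tensors; in practice you
# must find these in the actual graph rather than taking graph.outputs.
raw_boxes, raw_scores = graph.outputs[0], graph.outputs[1]

# Append a plugin node; TensorRT matches the plugin by the op name.
graph.nodes.append(gs.Node(
    op="EfficientNMS_TRT",
    name="efficient_nms",
    attrs={
        "score_threshold": 0.05, "iou_threshold": 0.5,
        "max_output_boxes": 100, "box_coding": 1,
        "background_class": -1, "score_activation": False,
        "plugin_version": "1",
    },
    inputs=[raw_boxes, raw_scores],
    outputs=[num_dets, boxes, scores, classes],
))

# Make the NMS outputs the graph outputs, drop dead nodes, and save.
graph.outputs = [num_dets, boxes, scores, classes]
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_nms.onnx")
```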
Hey @caruofc, could you please let me know in detail what you did?
Description
I want to export the ViTDet model from detectron2 to TensorRT. I first exported the model using torch.onnx.export and got an error like this:
Then I used onnxsim to simplify the ONNX model (a sketch of that step is below). That error was solved, but another one appeared.
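For context, a minimal sketch of the simplification step; the filenames are placeholders:

```python
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")
model_sim, ok = simplify(model)  # constant-folds and removes redundant nodes
assert ok, "simplified model failed the checker"
onnx.save(model_sim, "model_sim.onnx")
```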
I don't know what I should do. Any suggestions?
Environment
I used the Docker image nvidia/cuda:11.7.1-cudnn8-devel-ubuntu20.04.
For TensorRT, I used the pip install method:
pip install tensorrt
Three other NVIDIA packages were installed alongside it:
nvidia-cublas-cu12: 12.1.0.26
nvidia-cuda-runtime-cu12: 12.1.55
nvidia-cudnn-cu12: 8.8.1.3
TensorRT Version: 8.6.0
NVIDIA GPU: 3080
NVIDIA Driver Version: 525.105.17
CUDA Version: 11.7.1
CUDNN Version: cudnn8
Operating System: ubuntu20.04
Python Version (if applicable): Python 3.8.10
Tensorflow Version (if applicable):
PyTorch Version (if applicable): 1.13.1+cu117
Baremetal or Container (if so, version):
Other packages:
detectron2: 0.6
onnx: 1.13.1
onnxruntime-gpu: 1.14.1
onnxsim: 0.4.17
Steps To Reproduce
Export to ONNX. The model weights can be downloaded from the detectron2 GitHub; I used the COCO Mask R-CNN ViTDet, ViT-B checkpoint. (A sketch of the export step is below.)
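A minimal, untested sketch of what this export step might look like, assuming the detectron2 repo layout for the ViTDet config, a local model_final.pth checkpoint, a 1024x1024 input, and opset 16 (all assumptions, not the exact command used in the report):

```python
import torch
from detectron2.config import LazyConfig, instantiate
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.export import TracingAdapter

# Config path (relative to the detectron2 repo) and checkpoint name are
# assumptions based on the ViTDet project layout.
cfg = LazyConfig.load("projects/ViTDet/configs/COCO/mask_rcnn_vitdet_b_100ep.py")
model = instantiate(cfg.model).eval()
DetectionCheckpointer(model).load("model_final.pth")

# detectron2 models take a list of {"image": CHW tensor} dicts, so wrap the
# model in TracingAdapter to trace it with plain tensor inputs.
image = torch.randn(3, 1024, 1024)
adapter = TracingAdapter(model, inputs=[{"image": image}])

torch.onnx.export(
    adapter,
    adapter.flattened_inputs,  # the tensor inputs the adapter flattened out
    "vitdet.onnx",
    opset_version=16,          # assumed opset
)
```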
Export to engine. (A typical command is sketched below.)
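A typical engine-build command for this step, assuming trtexec from TensorRT 8.6 and the simplified model filename from the sketch above:

```
trtexec --onnx=model_sim.onnx --saveEngine=model.engine
```

Adding --fp16 enables half-precision kernels and is optional.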