Problems converting keypoint RCNN from Detectron2 to TensorRT #2678
Comments
Does it work if you fix the input shape and then do constant folding?
The key point here is how to make K an initializer. I think this can be confirmed by checking the ONNX model.
The input shape is fixed to [1344, 1344] as described in the README. I have tried constant folding, but it reports that 0 nodes were folded.
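For reference, a quick way to confirm whether K is already an initializer, as suggested above, is a minimal sketch with the `onnx` Python package (the model path is a placeholder):

```python
import onnx

model = onnx.load("model.onnx")  # placeholder path
initializers = {init.name for init in model.graph.initializer}

# ONNX TopK takes K as its second input; TensorRT (pre-8.6) requires that
# this input be an initializer, i.e. a constant baked into the graph.
for node in model.graph.node:
    if node.op_type == "TopK":
        k_name = node.input[1]
        print(f"{node.name}: K ({k_name}) is initializer -> {k_name in initializers}")
```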
@nvpohanh do we have plans to support K as a tensor for ITopKLayer?
@rajeevsrao @kevinch-nv to comment on this.
I have the same issue converting detector networks like FCOS and RetinaNet from Detectron2 -> ONNX -> TensorRT. I have converted the ONNX models with different opsets, up to opset 17, but when parsing the ONNX model with TensorRT I get the same error as above:
```
[03/01/2023-12:25:04] [E] [TRT] ModelImporter.cpp:726: While parsing node number 547 [TopK -> "/model/TopK_2_output_0"]:
[03/01/2023-12:25:04] [E] [TRT] ModelImporter.cpp:727: --- Begin node ---
[03/01/2023-12:25:04] [E] [TRT] ModelImporter.cpp:728: input: "/model/GatherND_2_output_0"
input: "/model/Reshape_50_output_0"
output: "/model/TopK_2_output_0"
output: "/model/TopK_2_output_1"
name: "/model/TopK_2"
op_type: "TopK"
attribute {
name: "axis"
i: -1
type: INT
}
attribute {
name: "largest"
i: 1
type: INT
}
attribute {
name: "sorted"
i: 1
type: INT
}
[03/01/2023-12:25:04] [E] [TRT] ModelImporter.cpp:729: --- End node ---
[03/01/2023-12:25:04] [E] [TRT] ModelImporter.cpp:731: ERROR: ModelImporter.cpp:168 In function parseGraph:
[6] Invalid Node - /model/TopK_2
This version of TensorRT only supports input K as an initializer. Try applying constant folding on the model using Polygraphy: https://github.com/NVIDIA/TensorRT/tree/master/tools/Polygraphy/examples/cli/surgeon/02_folding_constants
[03/01/2023-12:25:04] [E] Failed to parse onnx file
[03/01/2023-12:25:04] [I] Finish parsing network model
[03/01/2023-12:25:04] [E] Parsing model failed
[03/01/2023-12:25:04] [E] Failed to create engine from model or file.
[03/01/2023-12:25:04] [E] Engine set up failed
```
I have already applied constant folding iteratively until no more nodes can be simplified. What can I do in this case? Any help is appreciated. Thank you.
TRT 8.6 will have a dynamic K input for TopK; it should be released soon (as an EA).
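A hypothetical sketch of what that would look like, assuming the TensorRT 8.6 Python API accepts K as a runtime tensor via `ITopKLayer.set_input` (all names and shapes here are illustrative, not from this thread):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

scores = network.add_input("scores", trt.float32, (1, 1000))
k = network.add_input("k", trt.int32, ())  # assumed: scalar Int32 tensor for K

# Static k=1 as a placeholder; axes bitmask 1 << 1 selects axis 1.
topk = network.add_topk(scores, trt.TopKOperation.MAX, 1, 1 << 1)
topk.set_input(1, k)  # override the static k with the runtime tensor
```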
Closing since there has been no activity for more than 3 weeks. Please reopen if you still have questions, thanks!
I usually download the corresponding Docker container from NGC, but TRT 8.6 is not available there yet. Will I have to build it myself, or did I miss something? Please advise.
@GEngels have you been able to successfully complete the conversion?
@GEngels @niqbal996 @FilipDrapejkowskiGL Hello, how can I convert this model without using TensorRT 8.6, given that the Jetson Xavier supports TensorRT 8.5 at the latest?
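For context, one workaround sometimes used on pre-8.6 TensorRT (a sketch, not from this thread; it is only valid when K is effectively fixed at runtime, e.g. a fixed pre-NMS top-k) is to overwrite the dynamically computed K with a constant initializer using ONNX GraphSurgeon:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # placeholder path

FIXED_K = 1000  # assumed value; must match what the model would compute at runtime

for node in graph.nodes:
    if node.op == "TopK" and not isinstance(node.inputs[1], gs.Constant):
        # Replace the dynamically computed K with a 1-D int64 constant,
        # which the TensorRT ONNX parser accepts as an initializer.
        node.inputs[1] = gs.Constant(f"{node.name}_K",
                                     values=np.array([FIXED_K], dtype=np.int64))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_fixed_k.onnx")
```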
I have been trying to convert the Keypoint Mask-RCNN architecture from Detectron2 to a .trt file. With the suggestions from this issue (#2546) and the README (https://github.com/NVIDIA/TensorRT/tree/main/samples/python/detectron/README.md) I have been able to successfully convert the instance segmentation version of the network.
I am trying to use a similar approach for the keypoint model, but I am running into a problem.
The current status:
I made some changes to create_onnx.py to make it suitable for the keypoint model. The changes are only in the second part of the roi_heads function, where some names have been changed (mask_pooler -> keypoint_pooler). The final node I am grabbing from is the Resize node, which does the resizing after the upsampling.
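A minimal sketch of the kind of re-wiring described (using ONNX GraphSurgeon directly; the node-matching criterion and output name are assumptions, not the actual create_onnx.py diff):

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("keypoint_rcnn.onnx"))  # placeholder path

# Instead of the mask branch's final node, grab the Resize node in the
# keypoint branch (fed by "keypoint_pooler" rather than "mask_pooler")
# that upsamples the predicted heatmaps.
resize = next(n for n in graph.nodes
              if n.op == "Resize" and "keypoint" in n.name.lower())

# Expose the upsampled heatmaps as a network output.
heatmaps = resize.outputs[0]
heatmaps.name = "keypoint_heatmaps"  # assumed output name
heatmaps.dtype = np.float32          # gs requires a dtype on graph outputs
graph.outputs.append(heatmaps)

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "keypoint_rcnn_trt.onnx")
```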
With this I can create an ONNX file that can be converted to a .trt file, but one component is missing: the actual keypoints. In the ONNX file I get when using export_model.py from Detectron2, there is a node named "ConstantOfShape_2057" that outputs "xy_preds", which are the keypoints that I need. I have tried to output from this node in multiple ways, but it always ends with the same error when converting to .trt, namely:
```
[02/13/2023-16:17:11] [E] [TRT] ModelImporter.cpp:728: input: "/proposal_generator/Flatten_3_output_0"
input: "/proposal_generator/Reshape_44_output_0"
output: "/proposal_generator/TopK_3_output_0"
output: "/proposal_generator/TopK_3_output_1"
name: "/proposal_generator/TopK_3"
op_type: "TopK"
attribute {
name: "axis"
i: 1
type: INT
}
attribute {
name: "largest"
i: 1
type: INT
}
attribute {
name: "sorted"
i: 1
type: INT
}
[02/13/2023-16:17:11] [E] [TRT] ModelImporter.cpp:729: --- End node ---
[02/13/2023-16:17:11] [E] [TRT] ModelImporter.cpp:732: ERROR: ModelImporter.cpp:168 In function parseGraph:
[6] Invalid Node - /proposal_generator/TopK_3
This version of TensorRT only supports input K as an initializer. Try applying constant folding on the model using Polygraphy: https://github.com/NVIDIA/TensorRT/tree/master/tools/Polygraphy/examples/cli/surgeon/02_folding_constants
```
I have tried folding and hard-coding the number of keypoints in the "heatmaps_to_keypoints" function from Detectron2, which seems to be where the problem lies, but without success. I saw that this has received some attention quite recently here: facebookresearch/detectron2#4315.
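When folding reports nothing to fold, it can help to trace where the TopK's K input actually comes from. A minimal sketch with ONNX GraphSurgeon (the node name is taken from the log above; the model path is a placeholder):

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("keypoint_rcnn.onnx"))  # placeholder path
topk = next(n for n in graph.nodes if n.name == "/proposal_generator/TopK_3")

# Walk upstream from the K input; any data-dependent op in this chain
# (e.g. a Shape of a dynamically sized tensor) is what keeps K non-constant.
tensor = topk.inputs[1]
while tensor.inputs:            # tensor.inputs lists the producing node(s)
    producer = tensor.inputs[0]
    print(producer.op, producer.name)
    if not producer.inputs:     # reached a source node (e.g. Constant)
        break
    tensor = producer.inputs[0]  # follow the first input upstream
```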
I would like to add the keypoints to the output somehow, but I am lacking some knowledge to get it to work. I have been using this config (https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml) with the corresponding weights from the model zoo.