
Cannot run inference with a deployed TensorRT engine on Jetson AGX Xavier. #1513

Answered by lvhan028
tung18tht asked this question in Q&A

Please read https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/get_started.md

build/bin/object_detection is implemented on top of the MMDeploy Inference SDK.
The SDK performs model inference using not only the backend engine file but also meta info.

How do you get the meta info?
Use --dump-info when converting an OpenMMLab model with tools/deploy.py.
The generated meta info, together with the backend engine file, is written to the directory specified by --work-dir.
Together, they make up what we call an MMDeploy Model.
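
For example, a conversion command might look like the sketch below. The deploy config, model config, checkpoint, and test image are placeholders for illustration; substitute the ones that match your model (see the get_started doc linked above):

```bash
# Convert a detection model to a TensorRT engine and dump the SDK meta info.
# All paths and configs below are placeholders; --dump-info writes the meta
# files (e.g. deploy.json, pipeline.json) next to the engine in --work-dir.
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    /path/to/mmdet_model_config.py \
    /path/to/checkpoint.pth \
    /path/to/test_image.jpg \
    --work-dir mmdeploy_model/faster-rcnn-trt \
    --device cuda:0 \
    --dump-info
```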

After exporting the MMDeploy Model, pass the path specified by --work-dir to build/bin/object_detection, for example as sketched below.
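
A minimal sketch of the demo invocation, assuming the usual device-name / model-path / image-path argument order of the SDK demo (check the usage message printed by your build):

```bash
# Run the SDK detection demo against the exported MMDeploy Model directory.
# The argument order (device, model directory, image) is assumed here;
# verify it against the usage message of your build of object_detection.
./build/bin/object_detection cuda mmdeploy_model/faster-rcnn-trt /path/to/test_image.jpg
```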

Answer selected by tung18tht