RTDETR Deployment #95

Open
lyuwenyu opened this issue Oct 22, 2023 · 5 comments

Comments

lyuwenyu (Owner) commented Oct 22, 2023

@lyuwenyu lyuwenyu changed the title Deployment RTDETR Deployment Oct 22, 2023
@lyuwenyu lyuwenyu pinned this issue Oct 23, 2023

rydenisbak commented Oct 26, 2023

Hello, thank you for the great work; I'm a real fan of RT-DETR.

RT-DETR can also be deployed via mmdeploy with the TensorRT and onnxruntime backends.
OpenVINO is not supported natively, but I use the OpenVINOExecutionProvider together with ort.GraphOptimizationLevel.ORT_DISABLE_ALL in onnxruntime.
These settings give me a small performance improvement on CPU.


rpatidar commented Nov 1, 2023

Adding an example of running trtinfer.py; the blog post also has detailed code for ONNX: https://zhuanlan.zhihu.com/p/657506252

# Missing imports
import pycuda.driver as cuda
import numpy as np

# New import
import cv2

cuda.init()
device_ctx = cuda.Device(0).make_context()

mpath = "../rtdetr_pytorch/rtdetr_r101vd_6x_coco_from_paddle.trt"
image_file = "../rtdetr_pytorch/demo.jpg"
model = TRTInference(mpath, backend='cuda')

# Preprocess: resize to 640x640, BGR -> RGB, scale to [0, 1], HWC -> NCHW
img = cv2.imread(image_file)
im = cv2.cvtColor(cv2.resize(img, (640, 640)), cv2.COLOR_BGR2RGB)
im = (im.astype(np.float32) / 255.0).transpose(2, 0, 1)[None]

# Target size the model uses to rescale the predicted boxes
size = np.ascontiguousarray(np.array([[640, 640]], dtype=np.int32))

blob = {"images": np.ascontiguousarray(im), "orig_target_sizes": size}

res = model(blob)
print(res)
device_ctx.pop()
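RT-DETR's deployment graph typically emits (labels, boxes, scores) per image; a minimal score-threshold filter over such outputs might look like this (the output layout and the 0.5 threshold are assumptions, and the arrays below are toy stand-ins, not real model output):

```python
import numpy as np

def filter_detections(labels, boxes, scores, thr=0.5):
    """Keep only detections whose confidence exceeds thr."""
    keep = scores > thr
    return labels[keep], boxes[keep], scores[keep]

# Toy arrays standing in for one image's decoded outputs
labels = np.array([0, 2, 7])
boxes = np.array([[10, 10, 50, 50],
                  [5, 5, 20, 20],
                  [0, 0, 640, 640]], dtype=np.float32)
scores = np.array([0.92, 0.31, 0.76], dtype=np.float32)

l, b, s = filter_detections(labels, boxes, scores)
print(l, s)  # only the two detections above the 0.5 threshold remain
```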

@shreejalt

> Hello, thank you for the great work; I'm a real fan of RT-DETR.
>
> RT-DETR can also be deployed via mmdeploy with the TensorRT and onnxruntime backends. OpenVINO is not supported natively, but I use the OpenVINOExecutionProvider together with ort.GraphOptimizationLevel.ORT_DISABLE_ALL in onnxruntime. These settings give me a small performance improvement on CPU.

Can you please share the command, or the changes needed, to deploy with mmdeploy?

This was referenced Feb 22, 2024

myalos commented Mar 18, 2024

Can RT-DETR be deployed on an RK3568?


PrinceP commented Jul 30, 2024

RT-DETR C++ TensorRT implementation for V1 and V2:

https://github.com/PrinceP/tensorrt-cpp-for-onnx?tab=readme-ov-file#rt-detr
