
Can csmobilenetv2 use TensorRT to accelerate inference? #29

Open
Damon0626 opened this issue May 22, 2020 · 1 comment

Comments

@Damon0626

I have trained my own model with csmobilenetv2 (backbone) and yolov3_tiny (head) and obtained the final best weights. Now I want to accelerate inference with TensorRT version 5.1.6.1, but I get errors when converting the weights model to an ONNX model. Using yolov3.weights/yolov3.cfg and yolov3-tiny.weights/yolov3-tiny.cfg works fine.

tensorrt: 5.1.6.1
python: 3.6.9
onnx: 1.4.1
numpy: 1.18.4

Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 840, in <module>
    main()
  File "yolov3_to_onnx.py", line 827, in main
    verbose=True)
  File "yolov3_to_onnx.py", line 447, in build_onnx_graph
    params)
  File "yolov3_to_onnx.py", line 322, in load_conv_weights
    conv_params, 'conv', 'weights')
  File "yolov3_to_onnx.py", line 351, in _create_param_tensors
    conv_params, param_category, suffix)
  File "yolov3_to_onnx.py", line 383, in _load_one_param_type
    buffer=self.weights_file.read(param_size * 4))
TypeError: buffer is too small for requested array
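
For anyone hitting this: the yolov3_to_onnx.py sample derives the number of bytes to read for each convolutional layer from the .cfg, and it only understands the layer options used in the stock yolov3/yolov3-tiny configs. A MobileNetV2-style backbone relies on depthwise convolutions (the groups= option), which the sample does not account for, so it can request filters * prev_channels * size * size floats where the file only stores filters * (prev_channels / groups) * size * size, and the read eventually runs past the end of the .weights file. This is an educated guess, not confirmed against this particular cfg. A quick sanity check is to compare how many float32 values the .weights payload actually holds against the count the converter expects; a minimal sketch follows (the file path is a placeholder):

# Minimal sketch (assumed path): read the darknet .weights header and
# report how many float32 values the payload actually contains. If the
# cfg-derived parameter count exceeds this number, np.ndarray(..., buffer=...)
# fails with "buffer is too small for requested array".
import os
import numpy as np

WEIGHTS_PATH = "csmobilenetv2-yolov3-tiny.weights"  # placeholder, use your file

with open(WEIGHTS_PATH, "rb") as f:
    # Header: major, minor, revision as int32, followed by an
    # "images seen" counter that is 8 bytes from darknet v0.2 onwards.
    major, minor, revision = np.fromfile(f, dtype=np.int32, count=3)
    seen_dtype = np.int64 if major * 10 + minor >= 2 else np.int32
    seen = int(np.fromfile(f, dtype=seen_dtype, count=1)[0])
    header_bytes = f.tell()

payload_floats = (os.path.getsize(WEIGHTS_PATH) - header_bytes) // 4
print(f"weights file v{major}.{minor}.{revision}, images seen: {seen}")
print(f"float32 values available: {payload_floats}")

If the available count is smaller than the total the converter computes from the .cfg, the parser in yolov3_to_onnx.py has to be extended to handle the extra layer types and options (groups in particular) before the ONNX export can succeed.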


ngocneo commented May 11, 2021

I have the same problem. Do you have any solution?
