
Why is the default "stride" of common.DetectMultiBackend set to 64 #7351

Closed · 1 task done
MarineCao opened this issue Apr 9, 2022 · 4 comments · Fixed by #7353 or #7342
Labels
question Further information is requested

Comments

MarineCao commented Apr 9, 2022

Search before asking

Question

Hi, I'm trying to run detect.py with my ONNX model, and I found that the image size is changed from (416, 416) to (448, 448) by the check_img_size function, since model.stride=64. However, my model is a P5 model. The default stride is set to 64 in DetectMultiBackend as follows:
stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
If the input is a .pt model, the stride is then overridden by
stride = max(int(model.stride.max()), 32)  # model stride
but if the input is a .onnx model, the stride is never changed. So when I export the ONNX model with python export.py --device 0 --img 416, I get this error during inference:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
 index: 2 Got: 448 Expected: 416
 index: 3 Got: 448 Expected: 416
 Please fix either the inputs or the model.

Process finished with exit code 1
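
For reference, the 448 comes from check_img_size rounding the requested size up to the nearest multiple of the model stride. A minimal sketch of that rounding, mirroring make_divisible in utils/general.py:

import math

def make_divisible(x, divisor):
    # round x up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor

print(make_divisible(416, 64))  # 448 -> mismatches a static 416 ONNX export
print(make_divisible(416, 32))  # 416 -> correct for a P5 model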

Of course, if I export the .onnx model with --dynamic, detect.py runs without error, but the image size is still changed to (448, 448), the inference speed is lower, and the output anchors are misplaced.
If I modify the default stride to 32, then all the outputs are correct.
So I'm confused about the default stride setting. Is this a bug, or is there a bug in my code?

Environment:
Windows 11 + Anaconda + CUDA 11.0 + Python 3.8 + PyTorch 1.7.1 + YOLOv5 v6.0

Additional

No response

MarineCao added the question label Apr 9, 2022
github-actions bot commented Apr 9, 2022

👋 Hello @MarineCao, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Google Colab and Kaggle notebooks with free GPU
  • Google Cloud Deep Learning VM (see GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see AWS Quickstart Guide)
  • Docker Image (see Docker Quickstart Guide)

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher (Member)

@MarineCao thanks for the question! Some model formats (like PyTorch) can access attached metadata (like stride) to dynamically select the minimum viable stride, while other formats (like ONNX) carry no metadata and thus lack stride information. In that case stride 64 is assumed, since any image size divisible by 64 is also divisible by 32, making it valid for both P5 models (minimum stride 32) and P6 models (minimum stride 64).

The only alternative here is to run an image and compare the input to the output to empirically attempt a stride determination at inference time, which would add delay to DetectMultiBackend init. I'll think this over a bit and see what we can do here.
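
As an illustration, such an empirical check might look like the sketch below, assuming a PyTorch-style model that returns per-scale feature maps (a flattened ONNX output would need extra handling):

import torch

def empirical_stride(model, imgsz=256):
    # run one dummy image through the model and derive the maximum stride
    # from the coarsest (smallest) output feature map
    outputs = model(torch.zeros(1, 3, imgsz, imgsz))
    return int(imgsz / min(o.shape[-1] for o in outputs))

# e.g. a P5 model at 256 input gives 32x32, 16x16 and 8x8 maps -> stride 32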

glenn-jocher linked a pull request Apr 9, 2022 that will close this issue
glenn-jocher (Member) commented Apr 9, 2022

@MarineCao good news 😃! Your original issue may now be fixed ✅ in PR #7353. This PR attaches stride and names as ONNX model metadata which are then read and used during inference.
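
In outline, the fix writes the metadata at export time and reads it back when the ONNX backend loads. A rough sketch of that pattern (key names illustrative; see PR #7353 for the actual code):

import ast
import onnx
import onnxruntime

def attach_metadata(f, stride, names):
    # export side: store stride and names as ONNX metadata_props
    m = onnx.load(f)
    for k, v in {'stride': stride, 'names': names}.items():
        prop = m.metadata_props.add()
        prop.key, prop.value = k, str(v)
    onnx.save(m, f)

def read_metadata(f):
    # inference side: recover stride and names, keeping the old defaults as fallback
    meta = onnxruntime.InferenceSession(f).get_modelmeta().custom_metadata_map
    stride = int(meta.get('stride', 64))
    names = ast.literal_eval(meta['names']) if 'names' in meta else [f'class{i}' for i in range(1000)]
    return stride, names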

To receive this update:

  • Git – run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks (Open in Colab / Open in Kaggle)
  • Docker – run sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@MarineCao (Author)

@glenn-jocher Thanks for your quick reply. It works for me!

glenn-jocher linked a pull request Apr 10, 2022 that will close this issue