Support ONNX runtime in RunInference API #24911
Codecov Report
@@ Coverage Diff @@
## master #24911 +/- ##
==========================================
- Coverage 72.95% 72.84% -0.12%
==========================================
Files 745 748 +3
Lines 99174 99361 +187
==========================================
+ Hits 72356 72378 +22
- Misses 25453 25617 +164
- Partials 1365 1366 +1
One question: do we need GPUs to run the ONNX unit tests? I know very little about ONNX, but I will have a read about it during the review.
In the code I specified the ONNX providers as a priority list, so that on NVIDIA GPUs the tests run the GPU version, while on CPU they run the CPU version. But I ran my tests on CPU only; if we want to make sure this works properly in a GPU environment (and which type of GPU also matters), then I need to test in those environments too.
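For illustration, a minimal sketch of such a provider priority list in ONNX Runtime (the model path here is a placeholder, not the PR's actual test setup):

```python
import onnxruntime as ort

# Providers are tried in order: the CUDA provider is used when an NVIDIA
# GPU is available; otherwise ONNX Runtime falls back to the CPU provider.
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
session = ort.InferenceSession('model.onnx', providers=providers)
```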
Hi, once the PR is ready for another review, comment on this with
PTAL @AnandInguva. [I'm still working on the Gradle onnx test (if that is the right direction to go), but in the meantime could you let me know how to put data at the GCP location for the integration test? I think maybe that should come before the Gradle part.]
Hi, for apache-beam-samples/run_inference I am able to read but not write, getting the error below. However, it seems like the other tests use gs://apache-beam-ml/models/ and gs://apache-beam-ml/datasets, and for those I do not have read access.
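As a hedged sketch of what "putting data at the GCP location" could look like (assuming write access, which is exactly what is missing here; the local filename is a placeholder), Beam's FileSystems abstraction can stage a file to one of those buckets:

```python
from apache_beam.io.filesystems import FileSystems

# Copy a local artifact into the GCS bucket used by the integration tests.
# This raises a permission error unless the caller has write access.
with open('model.onnx', 'rb') as src:
  with FileSystems.create('gs://apache-beam-ml/models/model.onnx') as dst:
    dst.write(src.read())
```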
Most of this looks good to me.
I would clean up the lint and formatting errors so the checks go green, and then we can start finalizing the PR.
Hi, thanks. I fixed the formatting errors, but it seems like the two remaining failures are about files/tests that should not be touched by this PR?
It would be great if you could give some pointers regarding these, thanks!
You can ignore Codecov, and the other error is not relevant to this PR; in general, it could be flaky.
@ziqi-ma Hi, is it ready for the final review? I confirm the tests are running for ONNX. Thanks again for the changes. Also, can you add this feature to the CHANGES.md defined at https://github.com/apache/beam/blob/master/CHANGES.md? Something like Line 57 in 64e40d2.
I would like to get this in before Feb 22nd so that it can be included in the next release, 2.46.0.
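For illustration only, a hypothetical entry in the usual CHANGES.md style (the actual line referenced above is not reproduced here):

```
* Support for ONNX Runtime in the RunInference API was added ([#22972](https://github.com/apache/beam/issues/22972)).
```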
One minor nit, LGTM.
Since you've already added an example, please add a section to the README on how to run it. You can add it in a new PR as well if you'd like.
sdks/python/apache_beam/examples/inference/onnx_sentiment_classification.py
LGTM - we can merge once Ritesh's comments are addressed.
Added
Hi, I have addressed all the comments. I think this is ready to merge.
This is awesome, thank you!
retest this please
(The above comment was for the CI bot.) I'm going to let the precommit checks run to completion to make sure they are passing, then I will merge.
Thanks @ziqi-ma!
It breaks Python PostCommits; see #25443 for details.
Addresses #22972

Adding ONNX support to Beam. ONNX Runtime is a popular framework for accelerating ML inference: https://onnxruntime.ai/. It supports a broad range of model frameworks, including PyTorch, TensorFlow/Keras, TFLite, scikit-learn, etc. This PR lets you call an ONNX model from Beam by passing in a model path; see the usage sketch below.
Specifically:
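For illustration, a minimal usage sketch (not taken from the PR itself), assuming the handler added here is named OnnxModelHandlerNumpy and using a placeholder model path:

```python
import numpy as np
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.onnx_inference import OnnxModelHandlerNumpy

# The handler points RunInference at an ONNX model (local or gs:// path)
# and executes it through ONNX Runtime.
model_handler = OnnxModelHandlerNumpy(model_uri='model.onnx')

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([np.zeros((1, 3), dtype=np.float32)])
      | RunInference(model_handler)
      | beam.Map(print))
```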
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.