Add CI support for benchmarking #1347
Conversation
Force-pushed e1e9ee2 to bc43346
Force-pushed ba0093d to c5d0a88
Codecov Report
@@            Coverage Diff             @@
##             main    #1347      +/-  ##
=========================================
+ Coverage   74.27%   74.35%   +0.07%
=========================================
  Files         175      177       +2
  Lines       48887    49046     +159
  Branches    10375    10379       +4
=========================================
+ Hits        36312    36468     +156
- Misses      10282    10285       +3
  Partials     2293     2293
Should we at least have a benchmark that covers each backend to monitor compilation time?
Yes, definitely. We need to start adding benchmarks for a lot of things. I'm planning on repurposing some of those old Theano tests (i.e. the ones that are actually just examples) into benchmark tests. We'll need to do that across a few follow-up PRs, but, in the meantime, it looks like we'll need to merge this so we can more easily test some of its functionality (e.g. comparisons of saved perf. data and new changes in a PR, saving to a separate perf. stats site, the GitHub comment functionality, etc.)
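The per-backend compilation-time benchmark proposed above could look roughly like the following. This is a generic, pure-Python timing loop, not Aesara's or the benchmarking plugin's actual API; `fake_compile` is a hypothetical stand-in for compiling a graph with a given backend mode (e.g. a call like `aesara.function(...)` under a Numba or JAX mode).

```python
import time
import statistics


def benchmark_callable(fn, rounds=5):
    """Time `fn` over several rounds; return (min, mean) in seconds."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings), statistics.mean(timings)


def fake_compile():
    # Hypothetical stand-in for compiling a graph on one backend;
    # in a real suite this would trigger the backend's compilation path.
    sum(i * i for i in range(10_000))


best, avg = benchmark_callable(fake_compile)
```

A CI job would then compare `best` (or `avg`) against previously saved results and alert on regressions, which is essentially what the action discussed in this PR automates.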
Force-pushed c5d0a88 to 557e43e
I've added some settings that I hope will prevent … We might need to merge this just to see if it works on the push events. After that, we can open a test PR with a failing benchmark test and see if the PR alert/failure works.
Force-pushed 557e43e to 496854d
Force-pushed 496854d to 6d2031d
This PR adds support for automated benchmarking via pytest-benchmarking and github-action-benchmark. pytest-benchmarking worked well enough in my other project (i.e. https://github.com/pythological/unification/), so I figured we should try it here as well.

N.B. This is needed for the new Numba and JAX benchmarks we're in the process of adding.

… pytest-benchmarking fixtures. A couple of old Scan tests were converted to benchmarks, and Numba and JAX benchmarks were added for a logsumexp graph using different input sizes.

Closes #718
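For reference, the logsumexp operation these benchmarks exercise can be sketched in pure Python. This is a hedged stand-in, not the PR's actual benchmark code: the real benchmarks build an Aesara graph and run it through the Numba and JAX backends, while the input sizes below are purely illustrative.

```python
import math


def logsumexp(xs):
    """Numerically stable log(sum(exp(x))) over a sequence of floats."""
    m = max(xs)  # subtracting the max avoids overflow in exp()
    return m + math.log(sum(math.exp(x - m) for x in xs))


# Benchmarks run the same graph at several input sizes to see how
# compilation and runtime scale; these sizes are illustrative only.
results = {n: logsumexp([x / n for x in range(n)]) for n in (10, 100, 1000)}
```

With pytest-benchmarking-style fixtures, each size would typically become a parametrized test case that passes the compiled function to the `benchmark` fixture.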