
Add CI support for benchmarking #1347

Merged

Conversation

brandonwillard (Member) commented Dec 9, 2022

This PR adds support for automated benchmarking via pytest-benchmark and github-action-benchmark. pytest-benchmark worked well enough in my other project (i.e. https://github.com/pythological/unification), so I figured we should try it here as well.

N.B. This is needed for the new Numba and JAX benchmarks we're in the process of adding.

  • Refactor the existing benchmark tests to use the pytest-benchmark fixtures.
    A couple of old Scan tests were converted to benchmarks, and Numba and JAX benchmarks were added for a logsumexp graph using different input sizes (a sketch of the resulting test style follows).
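For illustration, here is a minimal sketch of what a pytest-benchmark test in this style looks like. The `benchmark` fixture comes from the pytest-benchmark plugin; the test name, graph construction, and input sizes are illustrative assumptions, not the PR's actual code:

```python
import numpy as np
import pytest

import aesara
import aesara.tensor as at


@pytest.mark.parametrize("size", [10, 100, 1000])
@pytest.mark.parametrize("mode", ["NUMBA", "JAX"])
def test_logsumexp_benchmark(size, mode, benchmark):
    # Build a logsumexp graph: log(sum(exp(X), axis=-1))
    X = at.matrix("X")
    lse = at.log(at.sum(at.exp(X), axis=-1))
    fn = aesara.function([X], lse, mode=mode)

    X_val = np.random.normal(size=(size, size)).astype(aesara.config.floatX)

    fn(X_val)  # the first call triggers compilation; keep it out of the timing
    benchmark(fn, X_val)  # the fixture times repeated calls of fn(X_val)
```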

Closes #718

brandonwillard added the CI, performance concern, and testing labels Dec 9, 2022
brandonwillard marked this pull request as draft December 9, 2022 22:06
brandonwillard force-pushed the add-pytest-benchmarking branch from e1e9ee2 to bc43346 December 9, 2022 22:10
brandonwillard self-assigned this Dec 9, 2022
brandonwillard force-pushed the add-pytest-benchmarking branch 2 times, most recently from ba0093d to c5d0a88 December 9, 2022 22:24
codecov bot commented Dec 9, 2022

Codecov Report

Merging #1347 (6d2031d) into main (ae182f0) will increase coverage by 0.07%.
The diff coverage is 99.42%.

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1347      +/-   ##
==========================================
+ Coverage   74.27%   74.35%   +0.07%     
==========================================
  Files         175      177       +2     
  Lines       48887    49046     +159     
  Branches    10375    10379       +4     
==========================================
+ Hits        36312    36468     +156     
- Misses      10282    10285       +3     
  Partials     2293     2293              
Impacted Files                               Coverage Δ
aesara/compile/mode.py                       84.47% <ø> (ø)
aesara/tensor/random/rewriting/basic.py      94.16% <ø> (ø)
aesara/link/jax/dispatch/shape.py            88.46% <93.33%> (+1.97%) ⬆️
aesara/link/jax/dispatch/random.py          100.00% <100.00%> (+2.63%) ⬆️
aesara/tensor/random/rewriting/__init__.py   100.00% <100.00%> (ø)
aesara/tensor/random/rewriting/jax.py        100.00% <100.00%> (ø)
aesara/tensor/rewriting/math.py              86.02% <0.00%> (-0.20%) ⬇️

brandonwillard marked this pull request as ready for review December 10, 2022 00:21
rlouf (Member) commented Dec 10, 2022

Should we at least have a benchmark that covers each backend to monitor compilation time?

brandonwillard (Member, Author) replied:

> Should we at least have a benchmark that covers each backend to monitor compilation time?

Yes, definitely. We need to start adding benchmarks for a lot of things. I'm planning on repurposing some of those old Theano tests (i.e. the ones that are actually just examples) into benchmark tests.

We'll need to do that across a few follow-up PRs. In the meantime, it looks like we'll need to merge this so we can more easily test some of its functionality (e.g., comparing saved performance data against new changes in a PR, saving results to a separate performance-stats site, the GitHub comment alerts, etc.).
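For context, pytest-benchmark itself can save runs and compare later runs against them; this is the local analogue of the CI comparison mentioned above. A rough sketch, where the run ID and threshold are illustrative values:

```console
# Save a baseline run (stored under .benchmarks/ with an auto-incremented run ID)
pytest tests/ --benchmark-only --benchmark-autosave

# Compare against saved run 0001 and fail on a >10% regression in the mean
pytest tests/ --benchmark-only --benchmark-compare=0001 --benchmark-compare-fail=mean:10%
```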

rlouf previously approved these changes Dec 10, 2022

brandonwillard (Member, Author) commented Dec 11, 2022

I've added some settings that I hope will prevent benchmark-action from breaking on PRs, while still comparing benchmarks between the main and PR branches and failing the workflow when a regression exceeds the alert threshold.

We might need to merge this just to see if it works on the push events. After that, we can open a test PR with a failing benchmark test and see if the PR alert/failure works.
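For reference, here is a hypothetical sketch of the kind of github-action-benchmark step being described; the option names are documented inputs of that action, but the exact values used in this PR are assumptions:

```yaml
- name: Run benchmarks
  run: pytest tests/ --benchmark-only --benchmark-json=benchmark-results.json

- name: Compare benchmark results
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: "pytest"
    output-file-path: benchmark-results.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # Alert (and fail the workflow) when a benchmark is 2x slower than the baseline
    alert-threshold: "200%"
    comment-on-alert: true
    fail-on-alert: true
    # Only push updated baseline data from main, never from PR branches
    auto-push: ${{ github.ref == 'refs/heads/main' }}
```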

brandonwillard force-pushed the add-pytest-benchmarking branch from 557e43e to 496854d December 11, 2022 21:17
brandonwillard force-pushed the add-pytest-benchmarking branch from 496854d to 6d2031d December 11, 2022 21:17
brandonwillard merged commit 62728bb into aesara-devs:main Dec 12, 2022
brandonwillard deleted the add-pytest-benchmarking branch December 12, 2022 17:36
Labels: CI (involves continuous integration), performance concern, testing
Projects: none yet
Development: successfully merging this pull request may close issue "Add benchmark reporting to CI" (#718)
Participants: 2