🩹 Fix Performance Regressions #740
Conversation
Codecov Report

@@            Coverage Diff            @@
##           staging     #740    +/-   ##
=========================================
  Coverage     80.1%    80.2%
=========================================
  Files           70       70
  Lines         3904     3950    +46
  Branches       676      692    +16
=========================================
+ Hits          3130     3169    +39
- Misses         653      660     +7
  Partials       121      121

Continue to review full report at Codecov.
A note on the spectral model validation: results for paramGUI case 05.

Compare the estimated parameters from pyglotaran with those from paramGUI as recorded in specFitSummary.txt. Also note the similarity in the plotted results: paramGUI vs pyglotaran.

By compatible I mean that the units of the spectral model parameters are the same as the spectral axis of the dataset (in this case both). For now the results are good enough to merge into staging, at least from this perspective.
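To make the parameter comparison concrete, here is a minimal sketch; the parameter labels, values, and tolerance below are illustrative placeholders, not the actual numbers from specFitSummary.txt:

```python
# Hypothetical sketch: compare pyglotaran estimates against paramGUI values.
import numpy as np

# Placeholder values standing in for the fit result and specFitSummary.txt.
pyglotaran_params = {"shape1.location": 22000.0, "shape1.width": 1200.0}
paramgui_params = {"shape1.location": 22001.5, "shape1.width": 1199.0}

for label, expected in paramgui_params.items():
    estimated = pyglotaran_params[label]
    assert np.isclose(estimated, expected, rtol=1e-2), label
```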
Taking note of my comment w.r.t. the spectral model validation, this PR is good enough to merge to staging, pending possibly some more detailed general comments by @s-weigand.
Additional issues I found.

pyglotaran/glotaran/analysis/util.py, line 44 in cf16572:

indices: dict[str, int] | None,

indices should have the type dict[str, int], not dict[str, int] | None, since the builtin megacomplexes have that signature and don't defend against None.
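To illustrate the suggested change, a minimal sketch; the enclosing function name and the other parameter are placeholders, not the actual code in glotaran/analysis/util.py:

```python
# Sketch only: the function name and the other parameter are hypothetical;
# only the `indices` annotation mirrors the review comment above.
from __future__ import annotations


def calculate_matrix(dataset_model, indices: dict[str, int]):
    # was: indices: dict[str, int] | None
    # The builtin megacomplexes expect a plain dict here and do not
    # defend against None, so the annotation should not permit it.
    ...
```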
Right, I will fix that.
Co-authored-by: Sebastian Weigand <[email protected]>
Sourcery Code Quality Report

✅ Merging this PR will increase code quality in the affected files by 0.14%.

Here are some functions in these files that still need a tune-up:

Legend and Explanation

The emojis denote the absolute quality of the code. The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request. Please see our documentation here for details on how these metrics are calculated. We are actively working on this report; lots more documentation and extra metrics to come! Help us improve this quality report!
Kudos, SonarCloud Quality Gate passed!
Last benchmark before the review suggestions got applied.
Benchmark is done. Check out the benchmark result page.

Benchmark diff

Parametrized benchmark signatures: BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)
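For context, an asv-style parametrized benchmark with this signature would look roughly like the sketch below; the parameter grid and the setup body are assumptions based only on the signature shown above:

```python
# Sketch of an asv-style parametrized benchmark matching
# BenchmarkOptimize.time_optimize(index_dependent, grouped, weight).
# The parameter values and setup body are illustrative placeholders.
class BenchmarkOptimize:
    params = ([True, False], [True, False], [True, False])
    param_names = ["index_dependent", "grouped", "weight"]

    def setup(self, index_dependent, grouped, weight):
        # Build the model, parameters, and dataset for this combination.
        ...

    def time_optimize(self, index_dependent, grouped, weight):
        # Run the optimization whose wall-clock time asv records.
        ...
```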
* Added benchmark for Problem class
* Removed print
* ♻️ Refactored by Sourcery
* 🧹 Moved glotaran/analysis/test/test_relations.py to benchmark/pytest/analysis/test_problem.py. Use 'pytest benchmark/pytest/' to run the benchmarks
* Numerous performance tweaks
* Don't weight data with ones if no weight supplied (see the sketch after this list)
* Fix duplicate call to create result_dataset
* Removed dead code
* Switched back to pure numpy in problem
* Fixed example
* Cleanup
* Update glotaran/analysis/problem_ungrouped.py (co-authored-by: Sebastian Weigand <[email protected]>)
* Update glotaran/analysis/problem_ungrouped.py (co-authored-by: Sebastian Weigand <[email protected]>)
* Update glotaran/analysis/util.py (co-authored-by: Sebastian Weigand <[email protected]>)

Co-authored-by: Sourcery AI <>
Co-authored-by: s-weigand <[email protected]>
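One of the tweaks listed above, skipping the weighting step when no weight is supplied, can be sketched as follows; the function and variable names are illustrative, not the actual glotaran code:

```python
# Sketch of the "don't weight data with ones" tweak; names are hypothetical.
from __future__ import annotations

import numpy as np


def apply_weight(data: np.ndarray, weight: np.ndarray | None) -> np.ndarray:
    # Previously the data was multiplied even when no weight was supplied,
    # i.e. an elementwise pass against an array of ones that changes nothing.
    if weight is None:
        return data  # skip the no-op multiplication entirely
    return data * weight
```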
This PR addresses performance regressions introduced in the staging branch.

* Adds a benchmark for the analysis.Problem class (a minimal sketch of such a test follows below)
* Adds numerous performance tweaks
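A minimal pytest-benchmark test of the kind added under benchmark/pytest/ could look like this sketch; the problem-construction body is a placeholder, not the actual test code:

```python
# Sketch of a pytest-benchmark test; run with `pytest benchmark/pytest/`
# as noted in the commit messages above.
def test_problem_creation(benchmark):
    def create_problem():
        # Build the glotaran analysis.Problem instance under test here.
        ...

    benchmark(create_problem)
```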
Checklist
Closes issues
None