
🩹 Add number_of_clps to result and correct degrees_of_freedom calculation #1249

Merged · 7 commits · Feb 20, 2023

Conversation

s-weigand
Member

@s-weigand s-weigand commented Feb 19, 2023

This change adds number_of_clps to the MatrixProvider, OptimizationGroup, and Result classes.
When the degrees of freedom are calculated, number_of_clps is now subtracted as well.

In addition, number_of_data_points was deprecated on Result, since what we actually report is number_of_residuals, which is the more interesting value anyway because it also includes the penalties.
So, following a discussion with @ism200, we won't report the number of data points anymore.
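The corrected calculation can be sketched as follows. This is a minimal illustration only: the attribute names mirror the PR description, but the `ResultStats` class and the exact API are assumptions for the sketch, not pyglotaran's actual classes.

```python
from dataclasses import dataclass


@dataclass
class ResultStats:
    """Illustrative stand-in for the optimization result statistics."""

    number_of_residuals: int        # data points plus penalty terms
    number_of_free_parameters: int  # nonlinear parameters varied by the optimizer
    number_of_clps: int             # conditionally linear parameters

    @property
    def degrees_of_freedom(self) -> int:
        # The CLPs are fitted implicitly in the variable projection step,
        # so they must be subtracted from the degrees of freedom as well.
        return (
            self.number_of_residuals
            - self.number_of_free_parameters
            - self.number_of_clps
        )


stats = ResultStats(
    number_of_residuals=5000,
    number_of_free_parameters=12,
    number_of_clps=128,
)
print(stats.degrees_of_freedom)  # 4860
```

Before this fix, only number_of_free_parameters was subtracted, which overstated the degrees of freedom whenever CLPs were present.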

Change summary

Checklist

  • ✔️ Passing the tests (mandatory for all PRs)
  • 🚧 Added changes to changelog (mandatory for all PRs)
  • 👌 Closes issue (mandatory for ✨ feature and 🩹 bug fix PRs)
  • 🧪 Adds new tests for the feature (mandatory for ✨ feature and 🩹 bug fix PRs)

Closes issues

closes #1086

@sourcery-ai
Contributor

sourcery-ai bot commented Feb 19, 2023

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 0.10%.

| Quality metrics | Before | After | Change |
| --- | --- | --- | --- |
| Complexity | 4.30 ⭐ | 4.27 ⭐ | -0.03 👍 |
| Method Length | 94.67 🙂 | 93.20 🙂 | -1.47 👍 |
| Working memory | 9.12 🙂 | 9.12 🙂 | 0.00 |
| Quality | 61.74% 🙂 | 61.64% 🙂 | -0.10% 👎 |

| Other metrics | Before | After | Change |
| --- | --- | --- | --- |
| Lines | 2046 | 2176 | 130 |

| Changed files | Quality Before | Quality After | Quality Change |
| --- | --- | --- | --- |
| glotaran/builtin/megacomplexes/test/test_spectral_decay_full_model.py | 53.91% 🙂 | 51.00% 🙂 | -2.91% 👎 |
| glotaran/optimization/matrix_provider.py | 62.58% 🙂 | 63.53% 🙂 | 0.95% 👍 |
| glotaran/optimization/optimization_group.py | 59.23% 🙂 | 60.35% 🙂 | 1.12% 👍 |
| glotaran/optimization/optimizer.py | 68.00% 🙂 | 67.64% 🙂 | -0.36% 👎 |
| glotaran/optimization/test/suites.py | 41.92% 😞 | 41.92% 😞 | 0.00% |
| glotaran/optimization/test/test_constraints.py | 65.98% 🙂 | 61.94% 🙂 | -4.04% 👎 |
| glotaran/optimization/test/test_matrix_provider.py | 61.13% 🙂 | 59.86% 🙂 | -1.27% 👎 |
| glotaran/optimization/test/test_penalties.py | 61.47% 🙂 | 58.87% 🙂 | -2.60% 👎 |
| glotaran/optimization/test/test_relations.py | 62.43% 🙂 | 60.45% 🙂 | -1.98% 👎 |
| glotaran/project/result.py | 71.68% 🙂 | 72.47% 🙂 | 0.79% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
| --- | --- | --- | --- | --- | --- | --- |
| glotaran/project/result.py | Result.markdown | 10 🙂 | 256 ⛔ | 14 😞 | 38.66% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| glotaran/optimization/optimization_group.py | OptimizationGroup.create_result_data | 12 🙂 | 276 ⛔ | 12 😞 | 38.95% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| glotaran/optimization/matrix_provider.py | MatrixProvider.combine_megacomplex_matrices | 19 😞 | 203 😞 | 11 😞 | 39.11% 😞 | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| glotaran/optimization/matrix_provider.py | MatrixProvider.apply_relations | 14 🙂 | 181 😞 | 13 😞 | 41.51% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| glotaran/optimization/matrix_provider.py | MatrixProviderLinked.align_matrices | 13 🙂 | 179 😞 | 12 😞 | 44.11% 😞 | Try splitting into smaller methods. Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

@github-actions
Contributor

Binder 👈 Launch a binder notebook on branch s-weigand/pyglotaran/fix-nr-clp

@github-actions
Contributor

github-actions bot commented Feb 19, 2023

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.

Benchmark diff v0.6.0 vs. main

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [6c3c390e]       [be140b83]
     <v0.6.0>                   
!      61.7±0.6ms           failed      n/a  BenchmarkOptimize.time_optimize(False, False, False)
!       67.8±30ms           failed      n/a  BenchmarkOptimize.time_optimize(False, False, True)
!        62.4±2ms           failed      n/a  BenchmarkOptimize.time_optimize(False, True, False)
!        135±40ms           failed      n/a  BenchmarkOptimize.time_optimize(False, True, True)
!        81.9±3ms           failed      n/a  BenchmarkOptimize.time_optimize(True, False, False)
!        117±40ms           failed      n/a  BenchmarkOptimize.time_optimize(True, False, True)
!        81.2±4ms           failed      n/a  BenchmarkOptimize.time_optimize(True, True, False)
!        105±30ms           failed      n/a  BenchmarkOptimize.time_optimize(True, True, True)
             201M             208M     1.03  IntegrationTwoDatasets.peakmem_optimize
-      2.33±0.02s       1.55±0.05s     0.67  IntegrationTwoDatasets.time_optimize

Benchmark diff main vs. PR

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [be140b83]       [086a4977]
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, False, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, False, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, True, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, True, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, False, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, False, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, True, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, True, True)
             208M             208M     1.00  IntegrationTwoDatasets.peakmem_optimize
       1.55±0.05s       1.55±0.06s     1.00  IntegrationTwoDatasets.time_optimize

@codecov

codecov bot commented Feb 19, 2023

Codecov Report

Base: 88.1% // Head: 88.3% // This PR increases project coverage by +0.2% 🎉

Coverage data is based on head (086a497) compared to base (be140b8).
Patch coverage: 100.0% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff           @@
##            main   #1249     +/-   ##
=======================================
+ Coverage   88.1%   88.3%   +0.2%     
=======================================
  Files        104     104             
  Lines       5064    5092     +28     
  Branches     842     847      +5     
=======================================
+ Hits        4462    4499     +37     
+ Misses       484     477      -7     
+ Partials     118     116      -2     
| Impacted Files | Coverage | Δ |
| --- | --- | --- |
| glotaran/optimization/matrix_provider.py | 96.1% <100.0%> | +2.0% ⬆️ |
| glotaran/optimization/optimization_group.py | 95.5% <100.0%> | +0.1% ⬆️ |
| glotaran/optimization/optimizer.py | 81.6% <100.0%> | +0.1% ⬆️ |
| glotaran/project/result.py | 91.4% <100.0%> | +0.4% ⬆️ |
| glotaran/optimization/estimation_provider.py | 91.2% <0.0%> | +0.5% ⬆️ |
| glotaran/model/interval_item.py | 68.4% <0.0%> | +15.7% ⬆️ |


☔ View full report at Codecov.

Member

@joernweissenborn joernweissenborn left a comment


lgtm

jsnel
jsnel previously approved these changes Feb 19, 2023
Member

@jsnel jsnel left a comment


OK from my side. The unit tests I requested have been added, thanks.

Collaborator

@ism200 ism200 left a comment


thanks a lot for this !


@ism200 ism200 closed this Feb 20, 2023
@s-weigand s-weigand reopened this Feb 20, 2023
@sonarqubecloud

Kudos, SonarCloud Quality Gate passed! ✅

  • Bugs: 0 (A)
  • Vulnerabilities: 0 (A)
  • Security Hotspots: 0 (A)
  • Code Smells: 0 (A)
  • Coverage: no coverage information
  • Duplication: 0.0%

Development

Successfully merging this pull request may close these issues.

🐛 small corrections of the Optimization result table
4 participants