
Feature/project/generators (Sourcery refactored) #867

Closed
wants to merge 20 commits into from

Conversation

@sourcery-ai sourcery-ai bot commented Oct 15, 2021

Pull Request #866 refactored by Sourcery.

Since the original Pull Request was opened as a fork in a contributor's
repository, we are unable to create a Pull Request branching from it.

To incorporate these changes, you can either:

  1. Merge this Pull Request instead of the original, or

  2. Ask your contributor to locally incorporate these commits and push them to
     the original Pull Request:

     ```shell
     git fetch https://github.com/glotaran/pyglotaran pull/866/head
     git merge --ff-only FETCH_HEAD
     git push
     ```
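The `--ff-only` merge above only succeeds when the target branch has not diverged from the fetched commits. A minimal, self-contained sketch of that fast-forward behaviour, using a throwaway local repository rather than the pyglotaran remotes (all names here are hypothetical):

```shell
set -e
tmp=$(mktemp -d)

# Throwaway repository standing in for the original pull request branch.
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.email "ci@example.com"
git config user.name "ci"

echo base > file.txt
git add file.txt
git commit -qm "base commit"

# Simulate the refactoring commits on a side branch.
git checkout -qb refactor
echo refactored >> file.txt
git commit -qam "refactoring commit"

# Back on the default branch: fetch the side branch, then fast-forward onto it.
git checkout -q -
git fetch -q . refactor
git merge --ff-only FETCH_HEAD

git log --oneline   # both commits are now on the default branch
```

Had the default branch gained its own commit in the meantime, `git merge --ff-only FETCH_HEAD` would refuse to merge instead of creating a merge commit.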

NOTE: As code is pushed to the original Pull Request, Sourcery will
re-run and update (force-push) this Pull Request with new refactorings as
necessary. If Sourcery finds no refactorings at any point, this Pull Request
will be closed automatically.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Help us improve this pull request!

joernweissenborn and others added 8 commits October 13, 2021 18:26
This is mainly a reminder that we need to properly deprecate the missing attributes.
This bug led to result creation crashing, because of missing labels.

Co-authored-by: Jörn Weißenborn <[email protected]>
It was nice to see how much time and memory the result creation needed compared to the whole optimization, but loading a pickled OptimizeResult wasn't nice from the start.
With the changes in this PR and the overhead of keeping benchmarks working across versions, IMHO the extra information about result creation details isn't worth it.
@github-actions

Binder 👈 Launch a binder notebook on branch glotaran/pyglotaran/sourcery/pull-866

@codecov

codecov bot commented Oct 15, 2021

Codecov Report

Merging #867 (999cf2e) into main (d1e36a9) will increase coverage by 0.3%.
The diff coverage is 92.9%.

❗ Current head 999cf2e differs from pull request most recent head 821bd3b. Consider uploading reports for the commit 821bd3b to get more accurate results
Impacted file tree graph

@@           Coverage Diff           @@
##            main    #867     +/-   ##
=======================================
+ Coverage   84.5%   84.9%   +0.3%     
=======================================
  Files         79      86      +7     
  Lines       4522    4681    +159     
  Branches     826     853     +27     
=======================================
+ Hits        3824    3976    +152     
- Misses       556     560      +4     
- Partials     142     145      +3     
Impacted Files Coverage Δ
...n/builtin/megacomplexes/decay/decay_megacomplex.py 79.3% <69.2%> (-3.3%) ⬇️
glotaran/analysis/optimization_group.py 87.6% <78.2%> (ø)
glotaran/project/scheme.py 88.0% <79.1%> (-2.8%) ⬇️
glotaran/project/generators/generator.py 88.8% <88.8%> (ø)
...ltin/megacomplexes/decay/decay_megacomplex_base.py 92.0% <92.0%> (ø)
glotaran/model/model.py 84.9% <92.3%> (+1.7%) ⬆️
glotaran/analysis/optimize.py 86.3% <92.5%> (+1.2%) ⬆️
...analysis/optimization_group_calculator_unlinked.py 94.3% <96.5%> (ø)
...n/analysis/optimization_group_calculator_linked.py 96.5% <96.7%> (ø)
glotaran/analysis/optimization_group_calculator.py 100.0% <100.0%> (ø)
... and 13 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d1e36a9...821bd3b. Read the comment docs.

@github-actions

github-actions bot commented Oct 15, 2021

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.

Benchmark diff v0.4.1 vs. main

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [21ba272a]       [821bd3b1]
     <v0.4.1>                   
+      49.2±0.9ms       56.7±0.5ms     1.15  BenchmarkOptimize.time_optimize(False, False, False)
-         298±3ms         61.7±7ms     0.21  BenchmarkOptimize.time_optimize(False, False, True)
-        68.6±3ms       57.4±0.8ms     0.84  BenchmarkOptimize.time_optimize(False, True, False)
         73.6±5ms         60.8±2ms    ~0.83  BenchmarkOptimize.time_optimize(False, True, True)
+        49.7±2ms         68.7±1ms     1.38  BenchmarkOptimize.time_optimize(True, False, False)
-         302±5ms         73.1±3ms     0.24  BenchmarkOptimize.time_optimize(True, False, True)
         68.3±1ms         70.1±1ms     1.03  BenchmarkOptimize.time_optimize(True, True, False)
         71.4±2ms        91.5±50ms    ~1.28  BenchmarkOptimize.time_optimize(True, True, True)
             193M             193M     1.00  IntegrationTwoDatasets.peakmem_optimize
-      4.63±0.05s       1.82±0.05s     0.39  IntegrationTwoDatasets.time_optimize

Benchmark diff main vs. PR

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [d1e36a93]       [821bd3b1]
+      33.3±0.6ms       56.7±0.5ms     1.70  BenchmarkOptimize.time_optimize(False, False, False)
+        38.7±1ms         61.7±7ms     1.60  BenchmarkOptimize.time_optimize(False, False, True)
         56.8±2ms       57.4±0.8ms     1.01  BenchmarkOptimize.time_optimize(False, True, False)
         60.3±1ms         60.8±2ms     1.01  BenchmarkOptimize.time_optimize(False, True, True)
+      43.3±0.9ms         68.7±1ms     1.59  BenchmarkOptimize.time_optimize(True, False, False)
         115±60ms         73.1±3ms    ~0.64  BenchmarkOptimize.time_optimize(True, False, True)
       66.6±0.4ms         70.1±1ms     1.05  BenchmarkOptimize.time_optimize(True, True, False)
        75.3±50ms        91.5±50ms    ~1.21  BenchmarkOptimize.time_optimize(True, True, True)
             193M             193M     1.00  IntegrationTwoDatasets.peakmem_optimize
       1.82±0.05s       1.82±0.05s     1.00  IntegrationTwoDatasets.time_optimize

@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from 3430422 to 3b224eb Compare October 16, 2021 11:52
@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from 3b224eb to 6bc743d Compare October 16, 2021 11:58
@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from 6bc743d to 432b31c Compare October 16, 2021 12:28
@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from 432b31c to b15f921 Compare October 16, 2021 12:54
@sourcery-ai sourcery-ai bot requested a review from a team as a code owner October 16, 2021 12:54
@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from b15f921 to 999cf2e Compare October 16, 2021 13:02
@sourcery-ai sourcery-ai bot force-pushed the sourcery/pull-866 branch from 999cf2e to 821bd3b Compare October 16, 2021 13:07

sourcery-ai bot commented Oct 16, 2021

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 1.75%.

Quality metrics Before After Change
Complexity 3.72 ⭐ 3.68 ⭐ -0.04 👍
Method Length 74.03 🙂 80.97 🙂 6.94 👎
Working memory 9.18 🙂 9.34 🙂 0.16 👎
Quality 65.36% 🙂 63.61% 🙂 -1.75% 👎
Other metrics Before After Change
Lines 4984 3295 -1689
Changed files Quality Before Quality After Quality Change
benchmark/benchmarks/integration/ex_two_datasets/benchmark.py 85.49% ⭐ 89.66% ⭐ 4.17% 👍
glotaran/analysis/optimize.py 46.86% 😞 40.69% 😞 -6.17% 👎
glotaran/analysis/test/test_constraints.py 57.15% 🙂 57.48% 🙂 0.33% 👍
glotaran/analysis/test/test_grouping.py 50.01% 🙂 49.67% 😞 -0.34% 👎
glotaran/analysis/test/test_optimization.py 34.60% 😞 34.53% 😞 -0.07% 👎
glotaran/analysis/test/test_penalties.py 64.87% 🙂 67.55% 🙂 2.68% 👍
glotaran/analysis/test/test_relations.py 57.85% 🙂 58.25% 🙂 0.40% 👍
glotaran/builtin/io/yml/test/test_save_model.py 94.31% ⭐ 94.31% ⭐ 0.00%
glotaran/builtin/io/yml/test/test_save_scheme.py 78.73% ⭐ 78.73% ⭐ 0.00%
glotaran/builtin/megacomplexes/decay/__init__.py n/a n/a n/a
glotaran/builtin/megacomplexes/decay/decay_megacomplex.py 70.00% 🙂 86.65% ⭐ 16.65% 👍
glotaran/builtin/megacomplexes/decay/initial_concentration.py 83.93% ⭐ 88.17% ⭐ 4.24% 👍
glotaran/builtin/megacomplexes/decay/k_matrix.py 78.25% ⭐ 79.00% ⭐ 0.75% 👍
glotaran/builtin/megacomplexes/decay/util.py 53.83% 🙂 53.97% 🙂 0.14% 👍
glotaran/builtin/megacomplexes/decay/test/test_decay_megacomplex.py 59.16% 🙂 59.54% 🙂 0.38% 👍
glotaran/builtin/megacomplexes/decay/test/test_k_matrix.py 65.74% 🙂 71.77% 🙂 6.03% 👍
glotaran/deprecation/modules/test/init.py 80.32% ⭐ 75.41% ⭐ -4.91% 👎
glotaran/deprecation/modules/test/test_project_scheme.py 71.23% 🙂 71.69% 🙂 0.46% 👍
glotaran/examples/__init__.py n/a n/a n/a
glotaran/examples/test/test_example.py 100.00% ⭐ 99.17% ⭐ -0.83% 👎
glotaran/model/__init__.py n/a n/a n/a
glotaran/model/model.py 73.26% 🙂 72.40% 🙂 -0.86% 👎
glotaran/model/test/test_model.py 70.89% 🙂 72.38% 🙂 1.49% 👍
glotaran/project/scheme.py 89.88% ⭐ 81.10% ⭐ -8.78% 👎
glotaran/project/test/test_result.py 80.28% ⭐ 89.36% ⭐ 9.08% 👍
glotaran/project/test/test_scheme.py 80.82% ⭐ 78.28% ⭐ -2.54% 👎
glotaran/test/test_spectral_decay.py 51.45% 🙂 53.88% 🙂 2.43% 👍
glotaran/test/test_spectral_decay_full_model.py 55.21% 🙂 55.92% 🙂 0.71% 👍
glotaran/test/test_spectral_penalties.py 46.60% 😞 45.01% 😞 -1.59% 👎

Here are some functions in these files that still need a tune-up:

File Function Complexity Length Working Memory Quality Recommendation
glotaran/analysis/optimize.py _create_result 16 🙂 280 ⛔ 26 ⛔ 24.30% ⛔ Try splitting into smaller methods. Extract out complex expressions
glotaran/analysis/test/test_optimization.py test_optimization 19 😞 514 ⛔ 15 😞 25.11% 😞 Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions
glotaran/test/test_spectral_penalties.py test_equal_area_penalties 5 ⭐ 783 ⛔ 17 ⛔ 34.81% 😞 Try splitting into smaller methods. Extract out complex expressions
glotaran/builtin/megacomplexes/decay/util.py calculate_decay_matrix_gaussian_irf 11 🙂 183 😞 18 ⛔ 38.30% 😞 Try splitting into smaller methods. Extract out complex expressions
glotaran/builtin/megacomplexes/decay/util.py retrieve_decay_associated_data 1 ⭐ 255 ⛔ 17 ⛔ 44.04% 😞 Try splitting into smaller methods. Extract out complex expressions

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

@sonarqubecloud

SonarCloud Quality Gate failed.

Bug A 0 Bugs
Vulnerability A 0 Vulnerabilities
Security Hotspot A 0 Security Hotspots
Code Smell A 4 Code Smells

No Coverage information
6.2% Duplication

@sourcery-ai sourcery-ai bot closed this Dec 2, 2021
@sourcery-ai sourcery-ai bot deleted the sourcery/pull-866 branch December 2, 2021 13:57