
🩹 Show validation problem if parameters are missing values (default: NaN) #1076

Merged
8 commits merged into glotaran:main from s-weigand:better-validation on May 20, 2022

Conversation

@s-weigand (Member) commented on May 20, 2022:

This change fixes false-positive reporting: a Model with parameters, or a Scheme, was reported as valid even when parameters were missing values and thus defaulted to np.nan; a minimal sketch of the idea follows below.
One use case where a user deliberately leaves parameter values missing is a teaching exercise in which students have to fill in the missing values.
The other use case is a user who started editing the parameters file but didn't finish and forgot some values.

As a demo you can use this notebook.
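For illustration, here is a minimal, self-contained sketch of the validation idea. The container, the `missing_value_problems` helper, and the parameter labels are hypothetical stand-ins, not pyglotaran's actual API; the point is only that a parameter whose value was never set stays NaN and should be reported as a problem instead of passing validation silently.

```python
import math

# Hypothetical stand-in for a parameter set: values the user never
# filled in default to NaN (mirroring the np.nan default in this PR).
parameters = {
    "kinetic.1": 0.5,
    "kinetic.2": float("nan"),  # left blank by the user
    "irf.center": 0.3,
}


def missing_value_problems(params: dict[str, float]) -> list[str]:
    """Return one validation problem per parameter whose value is NaN."""
    return [
        f"Parameter '{label}' has no value (value is NaN)."
        for label, value in params.items()
        if math.isnan(value)
    ]


for problem in missing_value_problems(parameters):
    print(problem)
# Prints: Parameter 'kinetic.2' has no value (value is NaN).
```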

Change summary

Checklist

  • ✔️ Passing the tests (mandatory for all PRs)
  • 🚧 Added changes to the changelog (mandatory for all PRs)
  • 🧪 Added new tests for the feature (mandatory for ✨ feature and 🩹 bug-fix PRs)

@github-actions (Contributor) commented:

Binder 👈 Launch a binder notebook on branch s-weigand/pyglotaran/better-validation

@sourcery-ai bot (Contributor) commented on May 20, 2022:

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 0.06%.

Quality metrics:

| Metric | Before | After | Change |
|---|---|---|---|
| Complexity | 3.48 ⭐ | 3.45 ⭐ | -0.03 👍 |
| Method Length | 47.55 ⭐ | 47.86 ⭐ | 0.31 👎 |
| Working memory | 6.65 🙂 | 6.68 🙂 | 0.03 👎 |
| Quality | 75.06% 🙂 | 75.00% 🙂 | -0.06% 👎 |

Other metrics:

| Metric | Before | After | Change |
|---|---|---|---|
| Lines | 2851 | 2948 | 97 |

Changed files:

| File | Quality Before | Quality After | Quality Change |
|---|---|---|---|
| `glotaran/model/model.py` | 68.87% 🙂 | 67.94% 🙂 | -0.93% 👎 |
| `glotaran/model/test/test_model.py` | 72.80% 🙂 | 73.09% 🙂 | 0.29% 👍 |
| `glotaran/parameter/parameter.py` | 84.11% ⭐ | 84.10% ⭐ | -0.01% 👎 |
| `glotaran/parameter/parameter_group.py` | 75.52% ⭐ | 75.88% ⭐ | 0.36% 👍 |
| `glotaran/parameter/test/test_parameter_group.py` | 74.07% 🙂 | 74.31% 🙂 | 0.24% 👍 |
| `glotaran/project/scheme.py` | 72.08% 🙂 | 71.96% 🙂 | -0.12% 👎 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
|---|---|---|---|---|---|---|
| `glotaran/parameter/parameter.py` | `Parameter.markdown` | 13 🙂 | 161 😞 | 13 😞 | 44.22% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| `glotaran/model/model.py` | `Model.markdown` | 13 🙂 | 168 😞 | 10 😞 | 48.74% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| `glotaran/project/scheme.py` | `Scheme.__post_init__` | 16 🙂 | 127 😞 | 11 😞 | 48.75% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| `glotaran/model/model.py` | `Model.from_dict` | 12 🙂 | 159 😞 | 10 😞 | 50.49% 🙂 | Try splitting into smaller methods. Extract out complex expressions |
| `glotaran/model/model.py` | `Model.generate_parameters` | 25 😞 | 106 🙂 | 8 🙂 | 51.29% 🙂 | Refactor to reduce nesting |
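As a side note on the table above: Sourcery's "Refactor to reduce nesting" advice for `Model.generate_parameters` usually boils down to replacing nested conditionals with early exits. A generic before/after sketch of that pattern (illustrative only, not the actual code from glotaran/model/model.py):

```python
# Before: nested conditionals push the happy path two levels deep.
def process_nested(items):
    results = []
    for item in items:
        if item is not None:
            if item > 0:
                results.append(item * 2)
    return results


# After: guard clauses with `continue` keep the happy path un-indented.
def process_flat(items):
    results = []
    for item in items:
        if item is None or item <= 0:
            continue  # early exit on items we don't care about
        results.append(item * 2)
    return results


assert process_nested([1, None, -2, 3]) == process_flat([1, None, -2, 3]) == [2, 6]
```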

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

@sonarqubecloud commented:
Kudos, SonarCloud Quality Gate passed!

  • Bugs: 0 (rating A)
  • Vulnerabilities: 0 (rating A)
  • Security Hotspots: 0 (rating A)
  • Code Smells: 0 (rating A)
  • Coverage: no coverage information
  • Duplication: 0.7%

@github-actions bot (Contributor) commented on May 20, 2022:

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.

Benchmark diff v0.5.1 vs. main

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [96b42630]       [f9c55a55]
     <v0.5.1>                   
         63.8±1ms         67.5±4ms     1.06  BenchmarkOptimize.time_optimize(False, False, False)
         145±20ms         149±20ms     1.03  BenchmarkOptimize.time_optimize(False, False, True)
         62.2±2ms         66.8±6ms     1.07  BenchmarkOptimize.time_optimize(False, True, False)
          149±5ms         133±20ms    ~0.89  BenchmarkOptimize.time_optimize(False, True, True)
         81.1±3ms         86.1±6ms     1.06  BenchmarkOptimize.time_optimize(True, False, False)
       82.0±0.5ms        86.3±40ms     1.05  BenchmarkOptimize.time_optimize(True, False, True)
         77.9±2ms       75.8±0.9ms     0.97  BenchmarkOptimize.time_optimize(True, True, False)
        87.2±40ms         121±50ms    ~1.38  BenchmarkOptimize.time_optimize(True, True, True)
             202M             211M     1.04  IntegrationTwoDatasets.peakmem_optimize
       2.05±0.07s       1.95±0.04s     0.95  IntegrationTwoDatasets.time_optimize

Benchmark diff main vs. PR

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [0da383ee]       [f9c55a55]
         63.3±2ms         67.5±4ms     1.07  BenchmarkOptimize.time_optimize(False, False, False)
         124±20ms         149±20ms    ~1.21  BenchmarkOptimize.time_optimize(False, False, True)
         63.0±1ms         66.8±6ms     1.06  BenchmarkOptimize.time_optimize(False, True, False)
          149±6ms         133±20ms    ~0.89  BenchmarkOptimize.time_optimize(False, True, True)
         78.3±1ms         86.1±6ms     1.10  BenchmarkOptimize.time_optimize(True, False, False)
         86.7±5ms        86.3±40ms     1.00  BenchmarkOptimize.time_optimize(True, False, True)
         81.3±2ms       75.8±0.9ms     0.93  BenchmarkOptimize.time_optimize(True, True, False)
         105±50ms         121±50ms    ~1.15  BenchmarkOptimize.time_optimize(True, True, True)
             206M             211M     1.02  IntegrationTwoDatasets.peakmem_optimize
       1.92±0.08s       1.95±0.04s     1.01  IntegrationTwoDatasets.time_optimize

@codecov bot commented on May 20, 2022:

Codecov Report

Attention: Patch coverage is 83.33333% with 4 lines in your changes missing coverage. Please review.

Project coverage is 87.3%. Comparing base (0da383e) to head (f9c55a5).
Report is 209 commits behind head on main.

| File | Patch % | Lines missing coverage |
|---|---|---|
| `glotaran/model/model.py` | 77.7% | 2 Missing ⚠️ |
| `glotaran/parameter/parameter_group.py` | 83.3% | 2 Missing ⚠️ |
Additional details and impacted files
@@           Coverage Diff           @@
##            main   #1076     +/-   ##
=======================================
+ Coverage   87.1%   87.3%   +0.1%     
=======================================
  Files        102     102             
  Lines       5381    5388      +7     
  Branches     995     997      +2     
=======================================
+ Hits        4691    4704     +13     
+ Misses       536     530      -6     
  Partials     154     154             

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@jsnel (Member) left a comment:


Nice user experience improvement!
Good tests and thanks for beautifying the code here and there.

@jsnel merged commit 114da78 into glotaran:main on May 20, 2022.
@s-weigand deleted the better-validation branch on May 21, 2022 at 13:35.