Refactor steps in blending code #453
base: master
Conversation
…4 for the refactoring
Co-authored-by: mats-knmi <[email protected]>
….velocity_perturbations = [] in __initialize_random_generators
…xed seed assignments
* Use seed for all rng to make a test run completely deterministic (see the seeding sketch after this commit list)
* Fix probmatching test and some copy-paste oversights
* Add test for vel_pert_method
* Change the test so that it actually runs the lines that need to be covered
…workers at the same time
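For illustration, a minimal sketch of the seeding idea from the commit above; the function name, seed value, and array shape are hypothetical and not the actual pysteps test code:

```python
import numpy as np

def draw_noise(seed, shape=(8, 8)):
    # Every random generator gets an explicit seed, so repeated runs
    # produce bit-identical output.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

# Same seed -> same draws, which is what makes a test run fully deterministic.
assert np.array_equal(draw_noise(42), draw_noise(42))
```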
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #453 +/- ##
==========================================
+ Coverage 84.26% 84.31% +0.05%
==========================================
Files 160 160
Lines 13067 13250 +183
==========================================
+ Hits 11011 11172 +161
- Misses 2056 2078 +22
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Both visual tests and assert statements give the same conclusion: the output is exactly the same!
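For context, a minimal sketch of the kind of equivalence check mentioned above; the file names are illustrative, not the actual test code:

```python
import numpy as np

# Hypothetical files holding the nowcasts produced by the old and the
# refactored code for identical inputs and seeds.
forecast_old = np.load("forecast_old.npy")
forecast_new = np.load("forecast_new.npy")

# Exact element-wise equality; NaNs in matching positions are treated as equal.
np.testing.assert_array_equal(forecast_old, forecast_new)
```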
@dnerini The codecov check is failing, but the only things it is actually failing on are the deepcopys I have added in the code and the timing, which does not seem worth writing tests for since it is a very basic subtraction. Would it be possible to disable the check for these things, or how does this work? I know you were able to do it for the refactoring of the steps nowcasting.
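For reference, one possible mechanism, assuming the Codecov check is driven by coverage.py line coverage; the function below is purely illustrative and not part of the PR:

```python
import copy
import time

def _blending_step(state):  # hypothetical function, for illustration only
    # coverage.py can exclude individual lines from the report with this pragma,
    # which is one way such utility lines could be kept out of the patch coverage.
    previous_state = copy.deepcopy(state)  # pragma: no cover
    start_time = time.time()  # pragma: no cover
    return previous_state, start_time
```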
Hi @sidekock and @mats-knmi,
Thanks a lot for your work here! I'm sorry I was not around the past days to answer some questions and think along. However, this looks great! With the large number of changes, it is becoming difficult to check everything thoroughly, so at least I'm happy to see that it gives exactly the same results. I think a to-do for a new PR could be updating the documentation, but that is probably easier and cleaner once this PR is merged.
To further reduce memory usage, both this array and the ``velocity_models`` array
can be given as float32. They will then be converted to float64 before computations
to minimize loss in precision.
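As a rough illustration of the memory/precision trade-off described in this docstring; the array names and shapes are assumptions for the sketch, not the actual function signature:

```python
import numpy as np

# Hypothetical shapes, for illustration only: (n_models, n_leadtimes, m, n) for
# the NWP precipitation fields and (n_models, 2, m, n) for the velocity fields.
precip_models = np.random.rand(2, 12, 256, 256).astype(np.float32)
velocity_models = np.random.rand(2, 2, 256, 256).astype(np.float32)
print(precip_models.nbytes / 1e6, "MB held as float32")

# The arrays would then be upcast to float64 before the computations,
# so the arithmetic itself still runs in double precision.
precip_models = precip_models.astype(np.float64)
velocity_models = velocity_models.astype(np.float64)
print(precip_models.nbytes / 1e6, "MB after the upcast")
```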
# TODO: compare old and new version of the code, run a benchmark to compare the two |
Is this TODO still necessary? I can imagine we'd like to have this done prior to merging this PR.
@sidekock we can simply ignore the failing check and merge whenever you feel it's ready
See #443