Changes to get FUND working with class-based branch. #40
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master      #40      +/-   ##
==========================================
- Coverage   78.75%   78.66%   -0.09%
==========================================
  Files          39       39
  Lines         880      881       +1
==========================================
  Hits          693      693
- Misses        187      188       +1
Continue to review full report at Codecov.
filename = joinpath(@__DIR__, "../contrib/validation_data_v040/$c-$v.csv")
results = m[c, v]
# load data for comparison
orig_name = c.comp_id.comp_name
I almost feel it would be cleaner to just rename the csv files to match the new comp name, rather than using this, which seems to go very much into the internals of things?
Have we run a test that makes sure this doesn't change our SCC estimate?
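For concreteness, a rough sketch of the two options as I read the suggestion; it assumes the checked-in csv files are currently keyed by the pre-refactor component names (which is why orig_name is needed at all), and the paths are illustrative rather than taken from this PR:

# current approach: dig into the component internals to recover the old name,
# because the validation csv files still use the pre-refactor component names
orig_name = c.comp_id.comp_name
filename = joinpath(@__DIR__, "../contrib/validation_data_v040/$orig_name-$v.csv")

# suggested alternative: rename the csv files to match the new component names,
# so the lookup only needs the component itself
filename = joinpath(@__DIR__, "../contrib/validation_data_v040/$c-$v.csv")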
test/scc_validation.jl
Outdated
validation_results = load(joinpath(datadir, "deterministic_sc_values.csv")) |> DataFrame
@test all(isapprox.(results[!, :SC], validation_results[!, :SC], atol = 1e-11))
@davidanthoff What do you think of these validation tests? It might be a bit of overkill to test all these values, and it takes about ten minutes to run. Also, of the 768 values I tested, 68 of them only pass at a 1e-11 tolerance (the other 700 pass with plain ==). Do you think that means something is actually different in the calculation since I changed the code?
The travis build failed because the tests took too long: "No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself." Not sure what to do about that. Should I change the test to not test so many values? I'd like to test different values for all the different keyword arguments, but it doesn't have to be every permutation...
Tests that don't complete in time are not helpful, so better to test a small subset of the values. We can also have more extensive tests that are run manually, leaving the faster test for travis, so we can uncover obvious failures.
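A minimal sketch of what a faster subset test could look like, reusing results and validation_results from the snippet above; the subset size, the fixed seed, and the row sampling are assumptions, not decisions made in this PR:

using DataFrames, Random, Test

# compare a fixed, reproducible subset of rows instead of all 768 values,
# so the travis run finishes well inside the 10-minute output limit
n_subset = 50
idx = randperm(MersenneTwister(1), nrow(validation_results))[1:n_subset]
@test all(isapprox.(results[idx, :SC], validation_results[idx, :SC], atol = 1e-11))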
The current setup for validation testing is this: @rjplevin @davidanthoff is it acceptable that I've just saved data files with the values we are testing against (from version 3.11.7), instead of something fancier, like actually setting up an environment that re-runs the old version of MimiFUND to produce the old values each time?
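For reference, a rough sketch of what that fancier alternative might involve, i.e. regenerating the reference values in a throwaway environment pinned to the released version; the Pkg calls are an assumption about how this could be set up, and the actual re-run of the old model is left as a comment because it depends on the 3.11.7 API:

using Pkg
Pkg.activate(temp = true)                       # throwaway environment for the old release
Pkg.add(name = "MimiFUND", version = "3.11.7")  # pin the pre-refactor version
using MimiFUND
# ... re-run the deterministic SC calculations here and write out
# deterministic_sc_values.csv for the new tests to compare against ...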
842e0ee to 9994a08
Yes, I think that is fine, we just want to catch a situation where we accidentally change something!
These changes should work on the current Mimi master as well as class-based.