
Changes to get FUND working with class-based branch. #40

Merged: 7 commits into master, Oct 4, 2019

Conversation

rjplevin
Collaborator

These changes should work on the current Mimi master as well as class-based.

@codecov-io

codecov-io commented Sep 30, 2019

Codecov Report

Merging #40 into master will decrease coverage by 0.08%.
The diff coverage is 97.36%.


@@            Coverage Diff             @@
##           master      #40      +/-   ##
==========================================
- Coverage   78.75%   78.66%   -0.09%     
==========================================
  Files          39       39              
  Lines         880      881       +1     
==========================================
  Hits          693      693              
- Misses        187      188       +1
Impacted Files              Coverage Δ
src/MimiFUND.jl             100% <100%> (ø) ⬆️
src/new_marginaldamages.jl  92.1% <93.33%> (-1.23%) ⬇️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1a4bf35...06be575.

filename = joinpath(@__DIR__, "../contrib/validation_data_v040/$c-$v.csv")
results = m[c, v]
# load data for comparison
orig_name = c.comp_id.comp_name
Member

I almost feel it would be cleaner to just rename the csv files to match the new comp name, rather than using this, which seems to go very much into the internals of things?

@davidanthoff
Member

Have we run a test that makes sure this doesn't change our SCC estimate?


validation_results = load(joinpath(datadir, "deterministic_sc_values.csv")) |> DataFrame
@test all(isapprox.(results[!, :SC], validation_results[!, :SC], atol = 1e-11))

@corakingdon
Collaborator

corakingdon commented Oct 3, 2019

@davidanthoff What do you think of these validation tests? It might be a bit of overkill to test all these values, and it takes about ten minutes to run. Also, of the 768 values I tested, 68 only pass at a 1e-11 tolerance (the other 700 pass with exact ==). Do you think that means something is actually different in the calculation since I changed the code?
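The exact-vs-tolerance split described above can be counted directly; a minimal sketch with illustrative stand-in data, not the actual 768 validation values:

```julia
# Count how many values match exactly vs. only within a tolerance.
# The two vectors are illustrative stand-ins, not MimiFUND output.
results    = [1.0, 2.0, 3.0 + 5e-12]
validation = [1.0, 2.0, 3.0]

n_exact  = count(results .== validation)                        # exact == matches
n_approx = count(isapprox.(results, validation, atol = 1e-11))  # matches within tolerance

println("exact matches: $n_exact, within 1e-11: $n_approx")
```

Here the third value differs by 5e-12, so it fails `==` but passes `isapprox` at the 1e-11 tolerance, mirroring the 700-vs-68 split in the comment.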

@corakingdon
Collaborator

The Travis build failed because the tests took too long:

"No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
The build has been terminated"

Not sure what to do about that. Should I change the test so it doesn't check so many values? I'd like to test different values for all the keyword arguments, but it doesn't have to be every permutation...

@rjplevin
Collaborator Author

rjplevin commented Oct 3, 2019

Tests that don't complete in time are not helpful, so it's better to test a small subset of the values. We can also have more extensive tests that are run manually, leaving the faster tests for Travis, so we can still uncover obvious failures.
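One way to wire up this fast/manual split is to gate the slow suite behind an environment variable; a sketch, where the variable name `MIMIFUND_FULL_TESTS` is hypothetical and not from this PR:

```julia
using Test

# Run the full ~10-minute validation suite only when explicitly requested;
# CI (Travis) runs just the fast subset by default.
run_full = get(ENV, "MIMIFUND_FULL_TESTS", "false") == "true"

@testset "fast checks" begin
    # placeholder smoke test standing in for the quick CI-friendly checks
    @test isapprox(1.0 + 5e-12, 1.0, atol = 1e-11)
end

if run_full
    @testset "full SC validation" begin
        # the exhaustive comparison against pre-saved values would run here
    end
end
```

Running `MIMIFUND_FULL_TESTS=true julia test/runtests.jl` locally would then exercise both suites, while CI stays within the Travis output timeout.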

@corakingdon
Collaborator

The current setup for validation testing is:

  • there is a new directory "test/SC validation data/" that has two files for deterministic and MCS validation. These values were saved from running the configuration in "test/scc_validation_full.jl" on MimiFUND v3.11.7.
  • "test/scc_validation_full.jl" is not deployed by Travis, but can be run manually to test all pre-saved values
  • I added lines to "test/runtests.jl" to do validation tests of four possible configurations, and to validate a small (n=25) MCS run with the same seed.

@rjplevin @davidanthoff Is it acceptable that I've just saved datafiles with the values that we are testing against (from version 3.11.7), instead of something fancier like actually setting up an environment that makes the old version of MimiFUND re-run to produce the old values each time?
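The seeded small-MCS check described above can be sketched as follows; the numbers are illustrative stand-ins, not MimiFUND values:

```julia
using Random

# Fixing the RNG seed makes a small Monte Carlo run reproducible, so its
# output can be compared against values saved from a reference version.
Random.seed!(350)
new_values = randn(25)

Random.seed!(350)
saved_values = randn(25)   # stands in for values loaded from the saved CSV

# Exact equality is expected for an identical code path; the tight tolerance
# allows for benign floating-point drift after refactoring.
same = all(isapprox.(new_values, saved_values, atol = 1e-11))
println("MCS validation passed: $same")
```

Saving a reference CSV once (from v3.11.7) and replaying the same seed avoids having to set up an environment that re-runs the old package version on every test run.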

@davidanthoff
Member

"Is it acceptable that I've just saved datafiles with the values that we are testing against (from version 3.11.7)"

Yes, I think that is fine, we just want to catch a situation where we accidentally change something!

@davidanthoff davidanthoff merged commit d8e54f5 into master Oct 4, 2019
@davidanthoff davidanthoff deleted the composite-fixes branch October 4, 2019 17:04