
Validation of results using tests #52

Closed · ckaldemeyer opened this issue Jan 13, 2016 · 17 comments

Comments

ckaldemeyer (Member)

During the development phase of renpass-gis, results have to be validated. Since this is a general procedure for most models, the idea is to implement the validation using tests. If suitable, these tests can then be moved into oemof_base afterwards.

Here are some quick ideas to start with (any further suggestions are welcome; a code sketch of some of these checks follows the list):

  • Bus balances: check if they are zero for all timesteps (within a small tolerance "epsilon")
  • Transformers
    • Check if the outputs divided by their efficiency equal the input for all timesteps
    • Check if full load hours are between 0 and 8760
  • Storages
    • Check the storage balance over time (sum(inflow) > sum(outflow), since losses occur)
    • Check if SOC(t=0) equals SOC(t=t_max)
  • Merit-order checks
    • Check if sorting by marginal prices yields the same (inverse) order as sorting by full load hours
    • Merit-order plots (price vs. quantity): check that all transformers to the right of the required quantity (demand) have zero output
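
A minimal sketch of what some of these checks could look like, assuming flow results are available as pandas objects indexed by timestep; all function names, the data layout and the epsilon tolerance are illustrative assumptions, not existing oemof API:

```python
import pandas as pd

EPSILON = 1e-6  # tolerated numerical deviation (assumed value)


def bus_is_balanced(inflows, outflows, epsilon=EPSILON):
    """True if the bus balance is zero for all timesteps.

    inflows/outflows: pd.DataFrame, one column per flow,
    indexed by timestep (hypothetical layout).
    """
    residual = inflows.sum(axis=1) - outflows.sum(axis=1)
    return bool((residual.abs() <= epsilon).all())


def transformer_is_consistent(inflow, outflow, efficiency, epsilon=EPSILON):
    """True if outflow / efficiency equals inflow for all timesteps."""
    residual = outflow / efficiency - inflow
    return bool((residual.abs() <= epsilon).all())


def full_load_hours_plausible(outflow, nominal_capacity):
    """True if the full load hours lie within [0, 8760]."""
    return 0 <= outflow.sum() / nominal_capacity <= 8760


def merit_order_consistent(marginal_prices, full_load_hours):
    """True if ordering by rising marginal price matches ordering
    by falling full load hours (both given as label -> value dicts).
    """
    by_price = sorted(marginal_prices, key=marginal_prices.get)
    by_flh = sorted(full_load_hours, key=full_load_hours.get, reverse=True)
    return by_price == by_flh
```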

@oemof/oemof-main : Do you have any suggestions on where to implement this? Using nose directly in the test folder?

@simonhilpert : Did I forget something?
@uvchik : I remember you telling me that you already had testing procedures at the RLI. Feel free to contribute!

uvchik (Member) commented Jan 15, 2016

I would distinguish between programming tests and system validation tests. For me, there are three kinds of tests:

1.) The normal nose tests should be very fast because you execute them quite often to check whether the program itself works. The optimisation itself should not be part of these tests.

2.) Validation tests to check whether an optimisation still produces the same results after the code has changed. These tests should be run before a merge, or at least before a release. There should be a handful of representative models.

3.) Plausibility tests to check the results of a new optimisation. They are optional but a great help when models become more and more complicated.

I think your suggestions are of the third kind.
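
For illustration, kinds 1 and 2 could look like this as nose-collectable test functions; `solve_reference_model` and the stored reference file are hypothetical stand-ins, not existing oemof code:

```python
import json


def solve_reference_model():
    """Stand-in for solving a small representative model; a real
    implementation would build and optimise the model (hypothetical)."""
    return {"objective": 1234.56}


# Kind 1: fast programming test -- pure logic, no solver call.
def test_parameter_sanity():
    efficiency = 0.4  # example parameter
    assert 0 < efficiency <= 1


# Kind 2: validation test -- re-solve a representative model and
# compare against results stored from a trusted earlier run.
def test_reference_results_unchanged():
    result = solve_reference_model()
    with open("reference_results.json") as f:  # stored trusted run
        reference = json.load(f)
    assert abs(result["objective"] - reference["objective"]) < 1e-3
```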

uvchik (Member) commented Mar 3, 2016

Maybe it is too much for the restructuring meeting, but we should adopt a roadmap there. Shall we change the milestone to that meeting?

cswh (Contributor) commented Mar 3, 2016

Perhaps it would be better suited for the oemof developer meeting in May. What do you think?

cswh modified the milestones: March 2016 Release, February 2016 release (Mar 3, 2016)
uvchik (Member) commented Mar 4, 2016

It is an important part of quality management, and people have already broken other people's applications by pushing untested code. But it is a matter of time, so let's see.

uvchik (Member) commented Mar 24, 2016

I would shift this topic to the refactoring meeting.

uvchik (Member) commented May 12, 2016

@ckaldemeyer Do you think this issue is covered by PRs #160 and #154? Otherwise, please specify the remaining topics.

uvchik (Member) commented Jul 11, 2016

@ckaldemeyer: Sorry, I misunderstood the main point. I guess the main point is to implement plausibility tests. Do you think it should be part of solph or part of the outputlib?

uvchik closed this as completed Jul 11, 2016
uvchik (Member) commented Jul 11, 2016

Sorry, just hit the wrong button 😄

uvchik reopened this Jul 11, 2016
ckaldemeyer (Member, Author)

> @ckaldemeyer: Sorry, I misunderstood the main point. I guess the main point is to implement plausibility tests. Do you think it should be part of solph or part of the outputlib?

Another idea: would a method in the EnergySystem class make sense?

If not, I would tend to do it in solph!

uvchik (Member) commented Jul 11, 2016

I think it depends on your approach, but if it is part of the EnergySystem it should work for oemof in general. So I think implementing it within solph will be easier.
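
For illustration, a solph-side entry point could be as simple as the following sketch; the function name, the results layout and the message format are assumptions, not the actual oemof API:

```python
def run_plausibility_checks(results, epsilon=1e-6):
    """Collect human-readable findings from a solved model's results.

    results: hypothetical mapping of bus label ->
             {"in": [values per timestep], "out": [...]}.
    """
    findings = []
    for label, flows in results.items():
        residual = sum(flows.get("in", [])) - sum(flows.get("out", []))
        if abs(residual) > epsilon:
            findings.append(
                "bus '{0}' is unbalanced by {1:g}".format(label, residual)
            )
    return findings


# Usage sketch: warn but do not fail, since plausibility checks
# are optional (kind 3 above).
for finding in run_plausibility_checks({"electricity": {"in": [1.0], "out": [1.0]}}):
    print("WARNING:", finding)
```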

ckaldemeyer (Member, Author)

> I think it depends on your approach, but if it is part of the EnergySystem it should work for oemof in general. So I think implementing it within solph will be easier.

👍

ckaldemeyer (Member, Author)

Maybe we can schedule this for the release after the refactoring release, as I won't be able to commit myself before September.

ckaldemeyer (Member, Author)

I guess this can be closed, or not?

ckaldemeyer (Member, Author)

Or it could at least be split into two kinds of tests.

ckaldemeyer (Member, Author)

I guess this can be closed, or not?

uvchik (Member) commented Mar 7, 2017

We do have code tests and example tests.

We do not have plausibility tests, and as I understand it, this was your main concern in your first comment.

The ideas in your first comment are quite interesting. Maybe we should add a wiki page to save these ideas while making it possible to close ancient issues.

https://github.com/oemof/oemof/wiki/ideas

uvchik (Member) commented Mar 10, 2017

@oemof/oemof-developer-group Please keep in mind that there is a wiki like this.

uvchik closed this as completed Mar 10, 2017