Return Infinity if simulation terminates early #99
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##           develop      #99      +/-  ##
===========================================
+ Coverage    93.69%   93.73%   +0.03%
===========================================
  Files           28       28
  Lines          952      973      +21
===========================================
+ Hits           892      912      +20
- Misses          60       61       +1

☔ View full report in Codecov by Sentry.
Nice one @NicolaCourtier! I've added a few comments.
Have you thought about edge cases for this? In certain design-optimisation scenarios, it might be beneficial to return a smaller array (fast charging comes to mind). There are other issues to solve before this becomes the hold-up, but it's probably worth thinking about.
This looks really good @NicolaCourtier. I think it actually solves #102 as well! I'd like to add a couple of things on #102 to help the CMAES assertion, but they no longer appear to be as needed. For testing this functionality, it would be good to add tests that push the optimiser outside of a stable solution. For reference, I've found that setting the negative active material fraction below ~0.4 in the SPM/e default parameter set can reliably crash the solver. See here: PyBOP/tests/unit/test_parameterisations.py Line 157 in 2c7e744
Happy for this to be merged once the coverage is passing 👍
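A minimal sketch of the kind of test suggested above, assuming PyBOP's Parameter and Gaussian prior API; the parameter name follows the PyBaMM SPM default set, and the prior, bounds, and stability threshold are illustrative:

```python
import numpy as np
import pybop

# Hypothetical parameter definition that lets the optimiser wander below the
# ~0.4 active material fraction where the SPM/e solver can fail, so that the
# infinite-cost path is exercised. Prior and bounds are illustrative only.
unstable_parameter = pybop.Parameter(
    "Negative electrode active material volume fraction",
    prior=pybop.Gaussian(0.55, 0.05),
    bounds=[0.35, 0.75],  # lower bound below the ~0.4 stability threshold
)

# After building a problem and cost with this parameter (omitted here), the
# expectation would be that unstable candidates return an infinite cost:
# assert np.isinf(cost([0.35]))
```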
I agree with your point above: there will be many edge cases where this is not the appropriate fix, but for the two costs that we have now (RootMeanSquaredError and SumSquaredError), where there is a target to match, I think it gives predictable behaviour. The downside is that any dataset which spends time at either voltage limit could be very hard for the optimiser to fit, because parameter sets that marginally break the limits will return an infinite cost even though they could be quite close to the optimal values. The first suggestion (extending the output array; there are several choices we could make here) would give the optimiser more information in such scenarios and therefore possibly better performance in these edge cases.
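For readers following along, a minimal sketch (not PyBOP's actual implementation) of the behaviour under discussion: when the solver terminates early, the prediction is shorter than the target data and the cost returns infinity rather than raising an error.

```python
import numpy as np

def root_mean_squared_error(prediction, target):
    # If the simulation terminated early, the prediction does not cover the
    # full target series, so return an infinite cost instead of failing.
    if len(prediction) < len(target):
        return np.inf
    residual = np.asarray(prediction) - np.asarray(target)
    return np.sqrt(np.mean(residual**2))
```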
tests/unit/test_cost.py
cost = pybop.SumSquaredError(problem)
cost([0.5])
# Test type of returned value
assert type(rmse_cost([0.5])) == np.float64 or np.isinf(rmse_cost([0.5]))
Does this work?
- assert type(rmse_cost([0.5])) == np.float64 or np.isinf(rmse_cost([0.5]))
+ list = [np.float64, np.inf]
+ assert type(cost([0.5])) in list
This is what I tried first, but I receive:
FAILED tests/unit/test_cost.py::TestCosts::test_costs[3.777] - AssertionError: assert <class 'float'> in [<class 'numpy.float64'>, inf]
Any idea?
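For what it's worth, a possible explanation and fix, sketched here with an illustrative cost call: np.inf is a plain Python float value rather than a type, so the membership check against a list of types fails whenever the cost returns infinity. Checking instance types instead covers both the finite np.float64 result and the infinite float result.

```python
import numpy as np

value = cost([0.5])  # illustrative; `cost` is the pybop cost under test
# np.float64 is a subclass of np.floating and float("inf") is a plain float,
# so this assertion passes for both finite and early-termination results.
assert isinstance(value, (float, np.floating))
```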
Co-authored-by: Brady Planden <[email protected]>
A resolution to issue #98.