Add PyNumeroEvaluationError
#2901
Conversation
I think your proposed (long-term) plan where the low-level functions return error information and the NLP classes raise the exception is a good one.
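That long-term plan (low-level functions return error information, the NLP classes raise) could be sketched roughly as follows. Note that EvaluationResult, the class names, and the wiring here are all hypothetical illustrations of the idea, not Pyomo's actual API:

```python
# Sketch of the proposed design: the low-level interface reports failures
# via a result object, and the NLP layer decides whether (and how) to raise.
from dataclasses import dataclass
from typing import Optional


class PyNumeroEvaluationError(ArithmeticError):
    """Stand-in for the exception added by this PR."""


@dataclass
class EvaluationResult:
    """Hypothetical status object returned by low-level evaluations."""
    success: bool
    message: Optional[str] = None


class LowLevelInterface:
    """Hypothetical low-level evaluator that returns status instead of raising."""

    def eval_objective(self, x):
        if x < 0:
            return EvaluationResult(False, "objective undefined for x < 0")
        return EvaluationResult(True)


class NLP:
    """Hypothetical NLP wrapper that converts failed results into exceptions.

    Because this layer has access to model context (Pyomo variables and
    constraints), it could attach useful debugging detail to the message.
    """

    def __init__(self, interface):
        self._interface = interface

    def evaluate_objective(self, x):
        result = self._interface.eval_objective(x)
        if not result.success:
            raise PyNumeroEvaluationError(result.message)


nlp = NLP(LowLevelInterface())
nlp.evaluate_objective(1.0)  # returns None: evaluation succeeded
```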
Codecov Report — patch coverage:

@@           Coverage Diff           @@
##             main    #2901   +/-  ##
=======================================
  Coverage   87.45%   87.45%
=======================================
  Files         770      771       +1
  Lines       89609    89617       +8
=======================================
+ Hits        78371    78378       +7
- Misses     11238    11239       +1
I think this is ready (pending any other reviews). I'm not adding a return object from the low-level evaluation functions at this point, but the tests I've added are high-level enough that they should not need to change if/when we do.
I think this looks good and I like the wrapping. I have one suggestion that may or may not be in your wheelhouse, so feel free to ignore.
Fixes #2899
Partially.
Summary/Motivation:
Currently, to handle evaluation errors in PyNumero solver interfaces, we would have to catch AssertionErrors, which seems dangerous. This is a WIP for now, as I haven't written any tests.
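The motivation above can be illustrated with a short sketch. The exception class below is a stand-in definition (the PR places it in a pyomo.contrib.pynumero module), and the solver-loop logic is hypothetical:

```python
# A dedicated evaluation error lets a solver interface recover from failed
# function evaluations specifically, without also swallowing unrelated
# AssertionErrors raised by internal sanity checks.

class PyNumeroEvaluationError(ArithmeticError):
    """Raised when a function or derivative evaluation fails."""


def eval_constraint(x):
    # Hypothetical low-level evaluation that fails at a singular point.
    if x == 0.0:
        raise PyNumeroEvaluationError("cannot evaluate constraint at x = 0")
    return 1.0 / x


def try_step(x):
    # A solver can reject the trial point and continue, rather than crash.
    try:
        return eval_constraint(x)
    except PyNumeroEvaluationError:
        return None
```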
Changes proposed in this PR:
Add PyNumeroEvaluationError in a new pyomo.contrib.pynumero.exceptions module. This error is currently raised in the AmplInterface functions, at the same locations where the old asserts happened. We could imagine a different API where the low-level evaluation functions return some EvaluationResult object containing error information, and it is then the NLP's responsibility to raise the exception (currently the AmplInterface evaluation methods don't return anything). This would make it easier to return useful debugging information to the NLP, which has access to the Pyomo variables and constraints needed to display it. I've punted on trying to do anything like this for now, but I don't think the current design precludes doing it in the future.

Legal Acknowledgement
By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution: