Refactor single stage #317
Conversation
Codecov Report
Patch coverage:
Additional details and impacted files

@@               Coverage Diff                @@
##        rj/single_stage_PR     #317      +/- ##
=================================================
- Coverage            90.78%   90.37%    -0.41%
=================================================
  Files                   63       63
  Lines                10719    10724        +5
=================================================
- Hits                  9731     9692       -39
- Misses                 988     1032       +44
This looks much cleaner than the previous approach. Aside from the leftover commented-out lines, this looks very good to me.
#from simsopt.util.mpi import log
#log()
Should these comments stay?
@mbkumar are you ok with merging this into rj/single_stage_PR and then master?
This implementation is tidy and looks good.
@@ -1639,7 +1639,6 @@ class TempOptimizable(Optimizable):
    def __init__(self, func, *args, dof_indicators=None, **kwargs):

        self.func = func
        args = np.ravel(args)  # The user may pass a tuple or list
Why is this deleted?
I think that line was present only in the rj/single_stage_PR branch, and not in the master branch. Therefore, its removal makes rj/single_stage_PR even simpler.
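For context, here is a small illustration (not code from this PR) of what the removed line did: `np.ravel` flattens whatever the caller passed through `*args`, whether individual objects or a list/tuple of them, into one flat array, which matches the original comment about the user passing a tuple or list. The names `obj_a`/`obj_b` are placeholders.

```python
import numpy as np

# Illustration only: np.ravel flattens the variable-length *args,
# so callers may pass objects individually or wrapped in a list/tuple.
args_as_tuple = ("obj_a", "obj_b")
args_as_nested = (["obj_a", "obj_b"],)

print(np.ravel(args_as_tuple))   # ['obj_a' 'obj_b']
print(np.ravel(args_as_nested))  # ['obj_a' 'obj_b']
```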
In #301, we've discussed how it is awkward that `finite_difference.py` had to be modified to manipulate the `non_dofs` attribute, which most Optimizable objects do not have. This PR shows one way things could be refactored to avoid this, so `non_dofs` is never mentioned. Here the changes to `finite_difference.py` and `optimizable.py` in #301 are reverted to master, and instead the `MPIFiniteDifference` class is extended to allow extra data besides `.x` to be broadcast among the worker groups. The example `single_stage_optimization_finite_beta.py` is refactored correspondingly. This example gives identical values for all quantities at the end of the optimization and uses fewer calls to `VirtualCasing.from_vmec()` in this branch compared to #301.

I'm not wedded to the approach here, but wanted to put it on the table. @rogeriojorge @mbkumar what do you think?
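To make the broadcasting idea concrete, here is a minimal mpi4py sketch of sending extra data alongside the state vector from a group leader to its workers. This is an illustration of the general pattern, not the actual `MPIFiniteDifference` implementation; the `payload` dict, the `"extra_data"` key, and the use of `MPI.COMM_WORLD` as a stand-in for a worker-group communicator are all assumptions made for the example.

```python
from mpi4py import MPI

# Minimal sketch, assuming rank 0 of the communicator is the group leader.
# In a real setup this would be a per-worker-group communicator, not COMM_WORLD.
group_comm = MPI.COMM_WORLD

if group_comm.rank == 0:
    # Leader packs the state vector x together with any extra data the
    # workers also need (hypothetical keys, for illustration only).
    payload = {"x": [0.1, 0.2, 0.3], "extra_data": {"aux_quantity": 42.0}}
else:
    payload = None

# A single pickle-based broadcast delivers both x and the extra data
# to every process in the group.
payload = group_comm.bcast(payload, root=0)
x = payload["x"]
extra = payload["extra_data"]
```

The design point being illustrated is that the extra data travels with `x` in one broadcast, so individual objective classes never need a special attribute like `non_dofs` for the finite-difference machinery to inspect.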