Develop an interface for Bayesian library (BUMPS) to call GSAS-II for forward model calculation via scripting #76
I took a look at the docs. I can see there is a lot there. There are at least two ways that I could imagine interfacing BUMPS and GSAS-II.

One way would be to have BUMPS supply a structure and a pattern to GSAS-II, which would provide a computed pattern back, possibly after optimizing some experimental parameters (such as the scale factor). This would be done by having BUMPS execute a very short set of Python commands. This all exists now; see the [simulation example](https://gsas-ii-scripting.readthedocs.io/en/latest/GSASIIscriptable.html#pattern-simulation) or the [fitting example](https://gsas-ii-scripting.readthedocs.io/en/latest/GSASIIscriptable.html#simple-refinement). However, I'd want to revisit the coding in GSAS-II to have it suspend itself between optimization cycles and wait for revised input, rather than what is done now, where a new Python session is started and the diffraction data plus the full structure are read on each optimization cycle.

The other way would be for GSAS-II to use BUMPS as a minimization engine. This would be done by having BUMPS expose a routine that GSAS-II would call. There is a routine in GSAS-II, [GSASIIstrMain.RefineCore()](https://gsas-ii.readthedocs.io/en/latest/GSASIIstruc.html#GSASIIstrMain.RefineCore) ([source code](https://gsas-ii.readthedocs.io/en/latest/_modules/GSASIIstrMain.html#RefineCore)), that calls different minimizers; the choice of minimizer is supplied in a separate part of the GUI. GSAS-II provides the selected minimizer with a set of values to optimize and a reference to the function used to compute the cost function (the quality of the diffraction fit). Optionally, a function that computes either the Jacobian or the Hessian can be supplied. Examples of routines that are called in this manner are scipy.optimize.leastsq, GSASIIstrMath.HessianLSQ, and GSASIIstrMath.HessianSVD. The call signature probably requires a bit of explanation, and there are some statistical results that ideally would be returned as well.

There are advantages to either approach. The former allows GSAS-II to become an optional component in BUMPS, while the latter does the reverse and would allow our users to select BUMPS as one of several optimizers that are offered. (At present there is very little reason for anyone to choose anything other than the default, but I do have thoughts about someday offering other optimizer options.) The latter option would make life very simple for GSAS-II users. Seeing that BUMPS has been parallelized, there might be some implications for how that would affect things, but it is not clear to me exactly how that would work out.

If neither of these approaches seems appealing, perhaps we should plan another discussion.
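As a concrete illustration of the second approach: the minimizers that RefineCore() dispatches to follow roughly the scipy.optimize.leastsq convention, i.e. they receive starting parameter values, a residual (cost) function, and optionally a Jacobian, and return optimized values. A minimal sketch of that calling pattern is below; the Gaussian "pattern" is purely illustrative toy data, not GSAS-II code, and a BUMPS-exposed routine would need to accept essentially these same arguments:

```python
import numpy as np
from scipy.optimize import leastsq

# Toy "diffraction" data: a single Gaussian peak with known parameters
# plus a little noise (stands in for an observed powder pattern).
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 201)
true = (2.0, 0.5, 1.2)  # amplitude, center, width
y_obs = true[0] * np.exp(-0.5 * ((x - true[1]) / true[2]) ** 2)
y_obs += rng.normal(scale=0.01, size=x.size)

def residuals(p, x, y):
    """Cost function in the form these minimizers expect: given a
    vector of parameter values, return the vector of residuals."""
    amp, cen, wid = p
    y_calc = amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)
    return y_calc - y

# Any minimizer exposed to GSAS-II would be invoked with essentially
# this information: starting values, the residual function, and its
# arguments (optionally also a Jacobian via Dfun=...).
p0 = (1.0, 0.0, 1.0)
p_fit, ier = leastsq(residuals, p0, args=(x, y_obs))
print(p_fit)
```

A routine on the BUMPS side with a compatible signature (or a thin adapter that translates this signature into a BUMPS FitProblem) is what the second approach would require.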
I think that both of these approaches would work, and that the GSAS-II community would benefit from being able to incorporate BUMPS directly. This is possible, and I can loop in Brian Maranville and Paul Kienzle if we want to do this. As a first step, the parallelization capabilities of BUMPS could be limited to multiple cores on the user's machine; a more advanced implementation could allow it to run on a cluster. While programmatically calling BUMPS is not so challenging, the user interface will take some thought, especially to take advantage of global optimization.

For machine learning purposes, option 1 would be the solution, but it wouldn't necessarily be BUMPS making the call; rather, it could be any other library or service that needs to call GSAS-II via Python. I believe this would likely involve exposing a bit more of the GSAS-II interface compared to what is currently available in scriptable.

Best,
William
…On Fri, Dec 27, 2024 at 1:02 PM Brian Toby ***@***.***> wrote: (quoting the comment above)
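The "suspend itself between optimization cycles" behavior described above, where GSAS-II stays resident with the data and structure loaded rather than restarting a Python session each cycle, can be prototyped with an ordinary Python generator. The forward model below is a hypothetical stand-in (a toy Gaussian profile), not GSAS-II code; the point is the control flow: load once, then repeatedly receive revised parameters and yield the computed pattern.

```python
import numpy as np

def forward_model_server(x):
    """Resident forward model: set up once, then repeatedly suspend,
    waiting for revised parameters, and yield the computed pattern.
    (Stand-in for a persistent GSAS-II process; the Gaussian profile
    is purely illustrative.)"""
    params = yield None  # suspend until the first parameter set arrives
    while True:
        amp, cen, wid = params
        y_calc = amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)
        # Suspend again until the optimizer sends revised parameters.
        params = yield y_calc

x = np.linspace(-5, 5, 201)
model = forward_model_server(x)
next(model)  # prime the generator; "loads" data/structure exactly once

# Each optimization cycle just sends new parameters -- no reload.
y1 = model.send((2.0, 0.5, 1.2))
y2 = model.send((2.1, 0.5, 1.2))
```

In a real implementation the same idea could live behind a subprocess or socket boundary so that BUMPS (or any other caller, for machine-learning use) drives a long-lived GSAS-II worker.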
The goal would be to use BUMPS (https://github.com/bumps/bumps) as an optimizer for models in GSAS-II. While we could wrap the whole library, what we would want to do is expose the parameters that go into the calculation of a powder profile. We would also want to be able to read in a pattern using GSAS-II and have it available as numpy arrays (x, y, esd) (or another well-defined object). Given the calculated pattern(s) and actual pattern(s), BUMPS can then perform the actual fitting/optimization.

One question is the best way to construct the objects necessary for pattern calculation from parameters, and to expose those parameters in an extensible way. One could picture doing this for ad hoc cases, such as a given set of profile functions, a unit cell, and the unit cell contents, but if we eventually want to allow for magnetism or future developments, then we might want something more generalizable.
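One generalizable shape for this is a flat name-to-value parameter mapping plus a registry of model components, so the optimizer-facing interface never changes when a new contribution (a magnetism term, say) is added. Everything below is a hypothetical sketch with invented names, not GSAS-II or BUMPS code:

```python
import numpy as np

class ParameterizedPattern:
    """Hypothetical extensible parameter registry: each registered
    component contributes named parameters and one additive term of
    the computed pattern."""
    def __init__(self, x):
        self.x = x
        self.params = {}      # flat "component:name" -> value map the optimizer sees
        self.components = []  # (name, func) where func(x, subparams) -> array

    def add_component(self, name, defaults, func):
        for key, val in defaults.items():
            self.params[f"{name}:{key}"] = val
        self.components.append((name, func))

    def compute(self):
        y = np.zeros_like(self.x)
        for name, func in self.components:
            # Slice out this component's parameters from the flat map.
            sub = {k.split(":", 1)[1]: v for k, v in self.params.items()
                   if k.startswith(name + ":")}
            y += func(self.x, sub)
        return y

x = np.linspace(0, 10, 101)
pat = ParameterizedPattern(x)
pat.add_component("bkg", {"const": 0.1},
                  lambda x, p: np.full_like(x, p["const"]))
pat.add_component("peak0", {"amp": 2.0, "cen": 5.0, "wid": 0.3},
                  lambda x, p: p["amp"] * np.exp(-0.5 * ((x - p["cen"]) / p["wid"]) ** 2))

# The optimizer only ever touches the flat dict; a magnetism term
# would be just another add_component call.
pat.params["peak0:amp"] = 2.5
y = pat.compute()
```

The flat dict maps naturally onto how BUMPS exposes fit parameters, and the component registry is the extensibility hook the paragraph above asks for.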