# Feature Request: MPI support for sequential tasks #11
Hello! Thanks for your robust and nicely documented package!

In variational Monte Carlo tasks one needs to perform the Monte Carlo calculations in sequence, because the variational parameters of each subsequent task depend on the MC results of the previous one. To achieve this, I hacked the job file as follows (see the sketch after this comment):

It runs all well with `SingleScheduler`, but with `MPIScheduler` it throws errors like `running in parallel run mode but measure(::MC, ::MCContext, ::MPI.Comm) not implemented`, followed by a stacktrace. I wonder whether MPI in this job script is simply not supported, or whether it can be enabled with the right configuration of the `MPI.Comm`? I still use the `mpirun -n 96 julia ./job.jl` command line to run this job.

To take this further: would it be more elegant to add this feature to the `JobTools` module, e.g. by letting `tasks = make_tasks(tm)` have different `parallel` and `sequential` modes? I'm not quite sure how easy that would be to do.
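A minimal sketch of such a sequential job script, assuming Carlo.jl's documented `TaskMaker`/`JobInfo` interface. The original snippet is not reproduced in this thread: `MC`, the parameter values, and the `next_parameter` helper are hypothetical stand-ins, and the programmatic `start(Carlo.SingleScheduler, job)` call is an assumption (the usual CLI entry point is `start(job, ARGS)`):

```julia
using Carlo
using Carlo.JobTools

# Hedged sketch: each optimization step is its own single-task job, and
# the next task's variational parameter is derived from the previous
# job's results. `MC` and `next_parameter` are hypothetical.
alpha = 0.3                              # initial variational parameter
for step in 1:10
    tm = TaskMaker()
    tm.sweeps = 10_000
    tm.thermalization = 1_000
    tm.binsize = 100
    task(tm; alpha)

    job = JobInfo("vmc_step_$step", MC;
        tasks = make_tasks(tm),
        checkpoint_time = "30:00",
        run_time = "24:00:00")
    start(Carlo.SingleScheduler, job)    # single-process run; MPIScheduler hits parallel run mode

    global alpha = next_parameter(job)   # hypothetical: read the results, update the parameter
end
```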
---

Hi! Basically this should work. But what you have run into with that error is, for some reason, parallel run mode. Usually that should not happen if you use […]. But parallel run mode would enable you to do something similar: you can MPI-parallelize your own section of the code. That is why it asks you to implement this version of `sweep!`:

```julia
function Carlo.sweep!(mc::MC, ctx::MCContext, comm::MPI.Comm)
    sample_gradients_in_parallel!(mc, ctx, comm)
    if time_to_update_parameters()
        update_parameters!(mc, comm)
    end
end
```

The downside is that you have to write some MPI code yourself, but the result will be faster because you don't have to write everything to disk every time.
---

Thanks for your prompt reply! I'll check the parallel run mode out further.

One more question here: do you mean that I should wrap the updateConfiguration (the old `sweep!(mc, ctx)` function) and the `measure!` functions into the new `sweep!(mc, ctx, comm)` function? And the new `sweep!(mc, ctx, comm)` will then also update the parameters within a sweep?
---

The new sweep function has to do everything the old one does, plus some manual communication to exchange the gradient data before updating the parameters and syncing them across the workers. (Probably you can get away with an `MPI.gather` and an `MPI.bcast`.)

Instead of using the Carlo measurements for accumulating the gradients, you would have to average them manually.
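A minimal sketch of that combined sweep, assuming a plain gradient-descent update. Only the `Carlo.sweep!(mc, ctx, comm)` signature and the `MPI.gather`/`MPI.Bcast!` calls come from this thread and MPI.jl; every `mc` field and helper (`local_sweep!`, `local_gradient_estimate`, `time_to_update_parameters`, `grad_sum`, `grad_count`, `params`, `learning_rate`) is a hypothetical stand-in for the model's own code:

```julia
using Carlo
using MPI

# Hedged sketch: everything below except the sweep! signature and the
# MPI calls is a hypothetical placeholder for the user's VMC code.
function Carlo.sweep!(mc::MC, ctx::MCContext, comm::MPI.Comm)
    local_sweep!(mc, ctx)                        # the old rank-local configuration update
    mc.grad_sum .+= local_gradient_estimate(mc)  # accumulate gradients manually...
    mc.grad_count += 1                           # ...instead of via Carlo measurements

    if time_to_update_parameters(mc)
        root = 0
        local_grad = mc.grad_sum ./ mc.grad_count
        grads = MPI.gather(local_grad, comm; root)   # Vector of per-rank gradients on root
        if MPI.Comm_rank(comm) == root
            avg = sum(grads) ./ length(grads)        # manual averaging over the workers
            mc.params .-= mc.learning_rate .* avg    # e.g. a plain gradient-descent step
        end
        MPI.Bcast!(mc.params, root, comm)            # keep all workers' parameters in sync
        mc.grad_sum .= 0
        mc.grad_count = 0
    end
    return nothing
end
```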
---

Got it, thanks for your help! And I think this issue can be closed.