How to control numThreads in CPU platform? #242
That sounds reasonable -- the default is the maximum, but you can pass a number and OpenMM will use min(cores_requested, total_detected). Thanks, Vijay
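The clamping rule Vijay describes can be sketched in a few lines of Python. This is an illustration of the min(cores_requested, total_detected) behavior, not OpenMM's actual implementation:

```python
import os

def effective_threads(requested=None):
    """Clamp a requested thread count to the cores actually detected,
    mirroring the min(cores_requested, total_detected) rule described above."""
    detected = os.cpu_count() or 1
    if requested is None:
        return detected              # default: use every detected core
    return min(requested, detected)  # never exceed what the hardware has
```

So asking for more threads than the machine has silently falls back to the detected core count.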
PS: let me know if you've already determined that your threading code gets perfect linear scaling, in which case this request would be less useful. One other possible use case for capping the number of threads would be maintaining system responsiveness on multitasking systems.
Would a platform property be more appropriate than an environment variable? An env variable would make the API similar to OpenMP, but I think a platform property might fit better with the rest of OpenMM.
Think so too!
Either works for me.
+1 -- Jason M. Swails
I like the idea of a platform property, since it can more easily be changed via the API. However, the CPU platform could set the default for this property from the environment variable if it exists. That may be the best of both worlds.
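The "best of both worlds" lookup order proposed here could be sketched as follows. (Released OpenMM ended up with a "Threads" platform property and an OPENMM_CPU_THREADS environment variable, but verify both names against your version; this function is an illustration, not OpenMM's code.)

```python
import os

def resolve_thread_count(property_value=None):
    """Resolve the CPU thread count: an explicit platform property wins,
    then the environment variable, then every detected core."""
    if property_value is not None:
        return int(property_value)
    env_value = os.environ.get("OPENMM_CPU_THREADS")  # assumed variable name
    if env_value is not None:
        return int(env_value)
    return os.cpu_count() or 1
```

With this order, scripts that set the property explicitly are unaffected by the environment, while unmodified scripts can still be capped externally.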
In light of #323 I would say being able to control the number of threads externally would be useful in trying to diagnose a possible race condition.
Also in light of #323, I cast my support in favor of this idea. Without an environment variable, one has to either change the test program itself to set the number-of-threads property or change the actual code to specify the number of threads to use. It would simplify debugging this latest issue if we could simply issue a command like
to disable multithreading and see if the segfaults go away. I may be going about this debugging all wrong, but such a feature would be helpful for the way I (know how to) debug. OpenMP uses this approach, and probably for good reason...
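A small harness for that kind of debugging run could look like this. It assumes the environment variable is named OPENMM_CPU_THREADS (the name the thread later converges on; verify for your OpenMM version) and uses a Python child process as a stand-in for the failing test binary:

```python
import os
import subprocess
import sys

def run_single_threaded(cmd):
    """Run cmd with the OpenMM thread count pinned to 1, leaving the
    caller's own environment untouched."""
    env = dict(os.environ)
    env["OPENMM_CPU_THREADS"] = "1"  # assumed variable name
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Demonstration: a Python child that just echoes the variable. In practice
# cmd would be the failing test, e.g. ["./TestCpuNonbondedForce"] (hypothetical).
result = run_single_threaded(
    [sys.executable, "-c", "import os; print(os.environ['OPENMM_CPU_THREADS'])"]
)
print(result.stdout.strip())
```

If the segfault disappears with the cap in place, that points toward a race in the threaded code path.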
I too support the idea of allowing the user to control the number of threads. In my case, it would be useful for running multiple simulations simultaneously on the same machine, in which case the number of threads each simulation is allocated should be specifiable. How about creating a CPU platform property that controls this, with the default being the number of threads on the system unless an environment variable is set, in which case that value is used as the default instead?
Just wanted to bump this feature request to allow the user to control the number of threads. Created SimTK feature request:
I think this is already solved, right? (https://simtk.org/tracker/index.php?func=detail&aid=1991&group_id=161&atid=436)
Correct. There's a platform property, and also an environment variable.
For the benefit of people arriving here from Google, the environment variable for controlling the number of threads started per CPU is: Is there a reason why the more traditional
So from what I've looked at, the CPU platform always uses the maximum number of threads, as determined by getNumProcessors in hardware.h. Is this correct? If so, would it be possible to let an environment variable override this behavior?

My use case for this would be in Yank, where we run replica exchange with multiple simulations on a single CPU. I suspect that I would achieve better performance by running n independent simulations than by running 1 simulation on n threads. Thoughts?
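The n-independent-simulations pattern described above could be sketched like this. The OPENMM_CPU_THREADS name is an assumption to verify for your OpenMM version, and the simulation body is elided; in practice each replica would run in its own process (multiprocessing, or one job per replica under a scheduler) so the caps are truly independent:

```python
import os

def run_replica(replica_id, n_threads=1):
    """One independent simulation, capped at n_threads OpenMM threads.
    Set the cap before OpenMM creates its thread pool."""
    os.environ["OPENMM_CPU_THREADS"] = str(n_threads)  # assumed variable name
    # ... build the System/Context and step the simulation here ...
    return replica_id, os.environ["OPENMM_CPU_THREADS"]

# Sequential stand-in for n worker processes, one replica each:
print([run_replica(i) for i in range(4)])
```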