scope without Send #562
I expect it's possible, but I'm not sure that we should. One reason that we enter the threadpool for
With this change, generators monitor how quickly clients are draining queued jobs, and stop issuing jobs when they detect that clients have enough queued work to last for the remaining duration of the experiment. This is mostly a work-around for rayon-rs/rayon#544. Note that the load generator now runs *in* the thread pool, so the `threads` argument should now be set to the total number of cores rather than #core - #generators. This is due to rayon-rs/rayon#562. It's a little unfortunate because it means that *all* job distribution requires stealing (the generator will put all jobs on its local queue). Note also that (because of the same linked rayon issue) the creation of `id_rng` is now in a closure. This is so that the argument can be `Send` so we can get it into the thread pool in the first place.
I just bumped into this. I'm using scoped_threadpool as well as rayon and noticed the latter ostensibly contains the former's functionality, enabling me to unify on one threadpool. Unification would be convenient as I want a thread per core, and ensuring that over two independent threadpools is nontrivial (given I wouldn't want cores to be idle if one threadpool was full but the other empty).
I bumped into this too. https://github.com/reem/rust-scoped-pool is unmaintained so I'm trying to migrate to rayon's ThreadPool and in some places that doesn't work because the scope closure is not Send (and making it Send requires significant API changes in callers).
What if we used Alternatively, I'm considering implementing a new
It's worth trying!
FWIW, #615 is changing the global queue to a
I've run into a need for this in #676.
+1 for this, the Send requirement is very restrictive in some cases. I work primarily with WebAssembly, and none of the web APIs are capable of working across different threads (!Send), which severely limits rayon usage. Scope without Send would allow me to do useful work on the worker thread with !Send stuff (inside the scope closure), while at the same time running rayon jobs in the scope. Currently I have to choose: either I make some progress with !Send stuff on the main thread, or I run rayon scope/join.
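A minimal sketch of the usage pattern this comment describes, written against the `in_place_scope` entry point that PR #844 later added; the `Rc` is just a stand-in for any !Send resource (e.g. a web-sys handle), and the names are illustrative:

```rust
use std::cell::Cell;
use std::rc::Rc;

fn main() {
    // Stand-in for some !Send state that must stay on the current thread.
    let local_state = Rc::new(Cell::new(0u32));

    // The closure passed to in_place_scope runs on the *calling* thread,
    // so it is free to capture !Send data.
    rayon::in_place_scope(|s| {
        // Spawned jobs still run on rayon's worker threads and must be Send.
        for i in 0..4 {
            s.spawn(move |_| {
                println!("pool job {} running on a worker thread", i);
            });
        }

        // Meanwhile, keep making progress with the !Send state right here.
        local_state.set(local_state.get() + 1);
    });
    // in_place_scope only returns once every spawned job has completed.

    assert_eq!(local_state.get(), 1);
}
```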
I hacked together a version of this here: https://github.com/rocallahan/rayon/commits/downstream
Apart from eliminating the Send bound on the op closure, it's a bit more efficient because you don't need to send the op across threads when you're not already on a worker thread. Also, the op can safely wait for spawned closures to complete; with regular
Should I clean it up and submit it?
@rocallahan I would be interested in that, but I hope we don't have to fork. I'm still wary of changing the current
Yes, but then everything you
This is only true if you're sure that you're not already on a worker thread.
We can, but then there will be slightly higher overhead for
Yes, we can't break that; we need a new entry point.
Right.
Right. In our case we have a dedicated thread pool and restrict access to it so only specific code can run in it.
I suspect a few branches won't be noticeable compared to the existing synchronization, but let's see what it looks like.
The tricky bit here is the latch. Currently
One thing I don't understand which may be relevant: Why is
It might be simplest to allow
(I considered trying to reuse
You've jogged my memory enough to remember that I had tinkered with this but never finished. I've now pushed that branch so I can share it here, in case that helps to compare notes and think through some of the issues: One possibility your prototype didn't cover is calling this from one thread pool into another. It's not great to use a
I believe this is primarily so we can share the logic in the
Size shouldn't be of much concern, because the
Ok.
Yes, this had crossed my mind and then I forgot about it. Stealing your
Using
Here's what I have:
Also, is
Naming is hard --
Submitted PR #844
844: Implement `in_place_scope` r=cuviper a=rocallahan
As discussed in #562.
Co-authored-by: Robert O'Callahan <[email protected]>
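For reference, a short sketch of how the new entry point reads from the caller's side, assuming the `ThreadPool::in_place_scope` method shape that rayon ended up shipping; the pool setup and names are illustrative:

```rust
use std::rc::Rc;

fn main() {
    // A dedicated pool, as in the restricted-access setup mentioned earlier
    // in this thread.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        .build()
        .expect("failed to build thread pool");

    // !Send data that must not leave the calling thread.
    let not_send = Rc::new(String::from("caller-local"));

    // Unlike ThreadPool::scope, the closure is not shipped into the pool,
    // so it does not need to be Send; only the spawned jobs do.
    pool.in_place_scope(|s| {
        s.spawn(|_| println!("this job runs on a pool worker"));
        println!("this runs on the calling thread: {}", not_send);
    });
}
```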
This ties somewhat into the discussion over in #522 (comment).
Currently, `ThreadPool::scope` requires the passed closure to be `Send` because the closure itself is executed on the thread pool. However, if `ThreadPool` is used as a more generic thread pool (rather than explicitly for data-dependent computation), it is not unreasonable for some existing thread to wish to spin off a number of jobs with access to its stack, and then wait for them all to complete (essentially as a pool of scoped threads). With the `Send` bound in place, that thread is pretty restricted in what it can use to generate jobs (e.g., anything with `Rc` is a no-go). It'd be good if there was an alternate version of `scope` that did not require `Send` for its closure, and which instead executed the closure on the current thread (but still waited for any spawned jobs to complete).
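A small, hypothetical illustration of the restriction described above (not code from the issue): with the existing `rayon::scope`, a closure capturing an `Rc` is rejected because the closure must be `Send`, which is exactly what the requested variant would relax.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Rc is the canonical !Send capture mentioned above.
    let results = Rc::new(RefCell::new(Vec::new()));

    // With the current API this does not compile: rayon::scope bounds its
    // closure by Send because the closure may be executed on a pool thread.
    //
    // rayon::scope(|s| {
    //     s.spawn(|_| { /* parallel work, itself Send */ });
    //     results.borrow_mut().push(1); // error: `Rc<...>` cannot be sent
    // });                               // between threads safely
    //
    // The alternative requested here would run this closure on the current
    // thread and only wait for the spawned jobs to finish, so a capture
    // like `results` would be fine.

    results.borrow_mut().push(0);
    assert_eq!(results.borrow().len(), 1);
}
```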