Add a CUDA stream pool #659
Conversation
cmake / conda lgtm
```cpp
 */
class cuda_stream_pool {
 public:
  static constexpr std::size_t default_size{16};  ///< Default stream pool size
```
Should this be private?
It's a default parameter to a public constructor, so I assumed it must be public...
The constructor has access to private fields. There is no reason for any caller to change this default, right?
From what I remember, Cython doesn't really work well with default args, so they usually need access to the definition of what is being used as the default.
This is constexpr, so a caller cannot change it. I made it public because I thought it might be useful for users to be able to access the default size at run time.
Co-authored-by: Jake Hemstad <[email protected]>
```cpp
 *
 * Provides efficient access to a collection of CUDA stream objects.
 *
 * Successive calls may return a `cuda_stream_view` of identical streams. For example, a possible
```
This makes this pool unusable for DALI - unwittingly submitting new (and potentially urgent) work on a stream that already has hundreds of milliseconds worth of work scheduled is not an option.
You do know that there are hardware limits to concurrency, right? See table 15 in the CUDA programming guide. The current high end GPUs have at most 128 concurrent execution contexts, so using more than 128 streams is redundant -- you will get false dependencies even with more streams.
The size of the pool is configurable. Set it to 128 if you want 128 streams.
We can also discuss what the best default size is before we merge this. I chose 16 because that is the smallest maximum concurrency among the GPUs that RAPIDS supports today. I didn't want to create hundreds of streams if the user doesn't need them.
We can also consider a pool that can grow, with a parameterized maximum size.
As we make the pool more complex, we can also have a notion of normal streams and high priority streams, e.g., `get_stream` and `get_priority_stream`.
I guess that we have a very different idea of a stream pool then.
My idea (which I'm going to pursue in DALI) was to have a pool where you take streams from the pool and explicitly return them. That's more or less equivalent to create/destroy, but without the ugly handle reuse.
The pool would have two lists:
- ready list
- free list
When a client requests a stream, it's taken from the ready list. When it's returned to the pool, it's placed in the free list. Then there are several options:
- Add a callback to the stream, so that it's returned to the ready list when all work is complete.
- Call cudaStreamQuery periodically in a worker thread and collect ready streams.
- Call cudaStreamQuery on the streams in the free list when there's no ready stream and return the ready ones to the ready list (this is the least preferred option, as it introduces a potentially large and unpredictable overhead).
In any case, if the ready list is empty and no free streams are ready, a new stream is created and returned.
I did consider a design with get_stream and return_stream, but I think this is early optimization. The present design is extremely simple, and will likely provide good performance for many use cases. If and when we find that the performance is insufficient, we can add complexity as needed. Certainly, a design that requires worker threads is much more complex, and I will not add that complexity to RMM without a real-world benchmark to drive it.
Closes #613. Adds a simple pool class which contains a circular buffer of `cuda_stream` objects with a `get_stream` method to get a view of one of the streams. Also adds tests and a simple benchmark.

Benchmark results: the first line is getting a stream from the stream pool; the second is using the RAII `rmm::cuda_stream` class, which calls `cudaStreamCreate()` and `cudaStreamDestroy()`.

TODO in a follow up: `rmm::cuda_stream()`.