Fixed an issue when switching streams on device_buffer and host_buffer #358
Conversation
…MLCommon::host_buffer
@jirikraus Not necessarily related to this PR, but I still wanted to get this confirmed. Why is there a need to pass in and store a stream at all?
There are two reasons to store the stream: we need to insert an inter-stream dependency in case the stream is changed, to avoid confusing behavior, and we need a stream in the destructor. Also, there is no overhead in passing the stream in. Does that make sense to you?
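For illustration, a minimal sketch of how such an inter-stream dependency can be inserted when the stream is switched, using a CUDA event; set_stream and the member names are assumptions for this sketch, not necessarily the actual MLCommon implementation:

void set_stream( cudaStream_t new_stream )
{
    if ( _stream != new_stream )
    {
        cudaEvent_t event;
        CUDA_CHECK( cudaEventCreateWithFlags( &event, cudaEventDisableTiming ) );
        // Mark the point up to which work has been enqueued on the old stream...
        CUDA_CHECK( cudaEventRecord( event, _stream ) );
        // ...and make the new stream wait for that point. This is asynchronous
        // with respect to the CPU; nothing blocks on the host.
        CUDA_CHECK( cudaStreamWaitEvent( new_stream, event, 0 ) );
        CUDA_CHECK( cudaEventDestroy( event ) );
        _stream = new_stream;
    }
}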
I get the reason why we need a stream argument in the constructor. I'm trying to see why we'd need to change the original stream for this object after it is created, because changing the stream, as this PR rightfully addresses, involves a stream-sync, which is a synchronous operation (for that stream, of course).
I agree that changing the stream could potentially introduce false inter-stream dependencies. However, it will not be a synchronous operation with respect to the CPU. Also, in performance-critical code my expectation is that the stream used is always the same, i.e. there is likely no performance impact, but it still makes the API more flexible for the cases where we need this. Makes sense?
Ah, just noticed that you are using get_stream(). TBH, I'd expect users to NOT switch the stream during the lifetime of a given buffer.
- _data = static_cast<value_type*>(_allocator->allocate( _capacity*sizeof(value_type), _stream ));
- CUDA_CHECK( cudaStreamSynchronize( _stream ) );
+ _data = static_cast<value_type*>(_allocator->allocate( _capacity*sizeof(value_type), get_stream() ));
+ CUDA_CHECK( cudaStreamSynchronize( get_stream() ) );
@jirikraus Sorry, I realize that this question is not related to this PR! Do we need this synchronize step? I suppose, as long as the caller adheres to the stream semantics, we should be fine without it, no?
IIRC we already touched on this in a discussion on the PR which introduced this data structure. From my point of view, the expectation of a user of host_buffer or device_buffer is that it is usable after construction, thus the synchronization to avoid surprises. However, I agree that given we pass in a stream, surprising some users is something we can probably live with. Btw. if you want a fully asynchronous construction, you can do it with:

device_buffer tmp( allocator, stream, 0 );
tmp.resize( n, stream );

I am undecided what the better option is. Any other thoughts on this?
host_buffer can also potentially be directly accessed from CPU code, so it does make sense to put a sync call there to avoid surprises. However, after a device_buffer is constructed, the only way it can be touched is through a CUDA API call or a CUDA kernel, both of which adhere to stream semantics. Thus, as long as the underlying allocator adheres to stream semantics, we could live without a sync here. What say?
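To make that argument concrete, here is a hedged sketch (my_kernel, blocks, and threads are placeholders): because operations enqueued on a single stream execute in order, a kernel enqueued after a stream-ordered allocation on the same stream cannot run before the allocation has completed, so no synchronize is needed in between:

// Assuming the allocator is stream-ordered, both operations below are
// enqueued on the same stream and therefore execute in order.
device_buffer<float> buf( allocator, stream, n );              // allocation ordered on `stream`
my_kernel<<< blocks, threads, 0, stream >>>( buf.data(), n );  // guaranteed to run after the allocation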
Yes, that is a good argument. The only concern I have with that is that it would be confusing if host_buffer and device_buffer have different semantics.
Agree that this can be confusing to some folks. Maybe we could document this difference somewhere so that our devs are aware?
@teju85 that's not what I meant. As I wrote: "I like the guarantee that the memory is immediately valid and available to all streams. I think it probably leads to fewer bugs."
@harrism I was referring to your following statement about RMM: "In fact, even though RMM takes a stream in rmmAlloc(), the returned memory is immediately available for use on any stream!" I was wishing that for any custom device allocators people might want to use with device_buffer, we could have this as a guarantee, at least for the specific stream in the ctor.
You could have a parameter that you set with the allocator. Call cudaStreamSynchronize() (or cudaDeviceSynchronize()) if the allocator is asynchronous, and don't call it if not.
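A hedged sketch of that idea; is_asynchronous() is a hypothetical allocator method used for illustration, not an existing MLCommon API:

_data = static_cast<value_type*>(_allocator->allocate( _capacity*sizeof(value_type), get_stream() ));
// Only pay for the sync when the allocation is stream-ordered; a synchronous
// allocator's memory is already valid on every stream.
if ( _allocator->is_asynchronous() )
{
    CUDA_CHECK( cudaStreamSynchronize( get_stream() ) );
}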
Thanks for all the inputs. I don't think we have come to a conclusion yet. @teju85, what do you think about moving the discussion of whether the constructor of device_buffer should synchronize or not to an issue, and moving on with this pull request (which, as you stated initially, is unrelated to the open discussion) as it is?
Agreed. It doesn't make sense to block this PR for this discussion. I have filed issue #425; let's migrate this discussion over there.
Changes LGTM. @dantegd once the conflicts have been resolved and CI issues have been fixed, this can be merged.