
Fixed an issue when switching streams on device_buffer and host_buffer #358

Merged

5 commits merged into rapidsai:branch-0.7 on Apr 4, 2019

Conversation

jirikraus
Contributor

No description provided.

ml-prims/src/common/buffer_base.hpp (outdated review thread, resolved)
@teju85
Member

teju85 commented Mar 19, 2019

@jirikraus Not necessarily related to this PR, but I still wanted to get this confirmed. Why is there a need to pass a stream argument to almost every method of device_buffer, for example? Can't we cache the stream argument passed during buffer construction and use it everywhere? Do you see a scenario where we need to change the stream for a given buffer itself?

@jirikraus
Contributor Author

@jirikraus Not necessarily related to this PR, but I still wanted to get this confirmed. Why is there a need to pass a stream argument to almost every method of device_buffer, for example? Can't we cache the stream argument passed during buffer construction and use it everywhere? Do you see a scenario where we need to change the stream for a given buffer itself?

There are two reasons:

  1. I like being explicit with this kind of thing, as an implicitly defined stream is something one easily forgets about.
  2. Passing in a stream makes it obvious that the operations follow stream semantics and might execute asynchronously. We can certainly argue about this, but I prefer it over adding an async prefix to every function.

Also, there is no overhead in passing the stream in. The reason to store the stream is that we need to insert an inter-stream dependency in case the stream is changed (to avoid confusing behavior), and that we need a stream in the destructor. Does that make sense to you?

@teju85
Member

teju85 commented Mar 20, 2019

I get the reason for why we need a stream argument in the constructor of device_buffer and completely agree with your reasons for that.

I'm trying to see why we'd need to change the original stream for this object after it is created. Because changing the stream, as this PR rightfully addresses, involves a stream sync, which is a synchronous operation (for that stream, of course).

@jirikraus
Contributor Author

I'm trying to see why we'd need to change the original stream for this object after it is created. Because changing the stream, as this PR rightfully addresses, involves a stream sync, which is a synchronous operation (for that stream, of course).

I agree that changing the stream could potentially introduce false inter-stream dependencies. However, it will not be a synchronous operation with respect to the CPU. Also, in performance-critical code my expectation is that the stream used is always the same, i.e. there is likely no performance impact, but it still makes the API more flexible for the cases where we need this. Does that make sense?

@teju85
Member

teju85 commented Mar 21, 2019

Ah, just noticed that you are using cudaStreamWaitEvent if we switch streams inside device_buffer.

TBH, I'd expect users to NOT switch the stream during the lifetime of a given device_buffer object. But given that the switch is not a costly one and it keeps the interface generic, I guess we can keep it this way. Thanks Jiri.
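
A minimal sketch of that event-based stream switch (hypothetical class and member names, not the exact code from this PR; error checking omitted):

#include <cuda_runtime.h>

// When the buffer's stream changes, record an event on the old stream and make
// the new stream wait on it, so work already ordered on the old stream finishes
// before anything issued on the new stream touches the allocation. Neither call
// blocks the CPU.
class stream_switch_sketch {
 public:
  void set_stream(cudaStream_t new_stream) {
    if (_stream != new_stream) {
      cudaEvent_t event;
      cudaEventCreateWithFlags(&event, cudaEventDisableTiming);
      cudaEventRecord(event, _stream);            // marks the tail of the old stream
      cudaStreamWaitEvent(new_stream, event, 0);  // new stream waits for that point
      cudaEventDestroy(event);  // safe: the wait has already been enqueued
      _stream = new_stream;
    }
  }

 private:
  cudaStream_t _stream = nullptr;  // stream the buffer was constructed with
};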

-  _data = static_cast<value_type*>(_allocator->allocate( _capacity*sizeof(value_type), _stream ));
-  CUDA_CHECK( cudaStreamSynchronize( _stream ) );
+  _data = static_cast<value_type*>(_allocator->allocate( _capacity*sizeof(value_type), get_stream() ));
+  CUDA_CHECK( cudaStreamSynchronize( get_stream() ) );
Member

@jirikraus Sorry, I realize that this question is not related to this PR! Do we need this synchronize step? I suppose, as long as the caller adheres to the stream semantics, we should be fine without it, no?

Contributor Author

IIRC we already touched on this in a discussion on the PR which introduced this data structure. From my point of view, the expectation of a user of host_buffer or device_buffer is that it is usable right after construction, hence the synchronization to avoid surprises. However, I agree that, given we pass in a stream, surprising some users is something we can probably live with. Btw. if you want a fully asynchronous construction you can do it with:

device_buffer tmp( allocator, stream, 0 );
tmp.resize( n, stream );

I am undecided what the better option is. Any other thoughts on this?

Member

host_buffer can also potentially be directly accessed from CPU code. So, it does make sense to put a sync call there to avoid surprises.

However, after a device_buffer is constructed, the only way it can be touched is by using a CUDA API or a CUDA kernel, both of which adhere to stream semantics. Thus, as long as the underlying allocator adheres to the stream semantics, we could live without a sync here. What do you say?
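
As an illustration of that stream-ordering argument (hypothetical kernel and names; this assumes the allocator enqueued the allocation on the same stream):

// Because the allocation for `d_data` was enqueued on `stream`, a kernel
// launched on that same stream is ordered after the allocation, so no explicit
// sync is needed in between.
__global__ void fill_zero(float* d_data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) d_data[i] = 0.f;
}

void use_buffer(float* d_data, int n, cudaStream_t stream) {
  fill_zero<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
}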

Contributor Author

Yes, that is a good argument. The only concern I have with that is that it would be confusing if host_buffer and device_buffer had different semantics.

Member

Agree that this can be confusing to some folks. Maybe we could document this difference somewhere so that our devs are aware?

Member

@teju85 that's not what I meant. As I wrote: "I like the guarantee that the memory is immediately valid and available to all streams. I think it probably leads to fewer bugs."

@teju85
Member

teju85 commented Apr 1, 2019

@harrism I was referring to your following statement about RMM: “In fact, even though RMM takes a stream in rmmAlloc(), the returned memory is immediately available for use on any stream!”

I was hoping that, for any custom device allocators people might want to use with device_buffer, we could have this as a guarantee, at least for the specific stream in the ctor.

Member

You could have a parameter that you set with the allocator. Call cudaStreamSynchronize() (or cudaDeviceSynchronize()) if the allocator is asynchronous, and don't call it if not.
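
A hedged sketch of that suggestion, with a hypothetical is_asynchronous() flag on the allocator interface (this method is not part of the existing allocator API):

#include <cuda_runtime.h>
#include <cstddef>

// Let the allocator advertise whether its allocations are stream-ordered
// (asynchronous); the buffer then synchronizes only when they are.
struct device_allocator_sketch {
  virtual void* allocate(std::size_t n, cudaStream_t stream) = 0;
  virtual void deallocate(void* p, std::size_t n, cudaStream_t stream) = 0;
  virtual bool is_asynchronous() const { return true; }  // hypothetical flag
  virtual ~device_allocator_sketch() = default;
};

inline void* allocate_and_maybe_sync(device_allocator_sketch& alloc,
                                     std::size_t n, cudaStream_t stream) {
  void* p = alloc.allocate(n, stream);
  if (alloc.is_asynchronous()) {
    cudaStreamSynchronize(stream);  // pay for the sync only when needed
  }
  return p;
}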

Contributor Author

Thanks for all the inputs. I don't think we have come to a conclusion yet. @teju85 what do you think about moving the discussion of whether the constructor of device_buffer should synchronize or not to an issue, and moving on with this pull request (which, as you stated initially, is unrelated to the open discussion) as it is?

Member

Agreed. It doesn't make sense to block this PR for this discussion. I have filed issue #425; let's migrate this discussion over there.

@teju85
Member

teju85 left a comment

Changes LGTM. @dantegd once the conflicts have been resolved and CI issues have been fixed, this can be merged.

@jirikraus
Contributor Author

Changes LGTM. @dantegd once the conflicts have been resolved and CI issues have been fixed, this can be merged.

I resolved the conflict on CHANGELOG.md. @dantegd is there anything else I need to do?

@dantegd dantegd merged commit 42a5024 into rapidsai:branch-0.7 Apr 4, 2019