[WIP] Added pinned memory resource #141
Conversation
Can one of the admins verify this patch?
ok to test
We may need to hold off on merging this. @harrism I realized that using How do we want to handle this? Work around the test? Introduce a
I think introducing
Do we need to distinguish host vs. device at the base class level at all? Can they just all be
If we follow that route, do we have any added benefits? |
I suspect we'd have a The larger question is, do we want RMM to become a host memory manager as well? This is currently in the README:
That statement from the README is true until we make it false. It was never intended to be a philosophical statement of value. :) I think if our users need a suballocator for pinned host memory for use in CUDA, then it makes sense to support it.
Agreed!
That's a wiser approach, as the common elements won't have to be repeated; moreover, it makes sense to branch this way.
Sure, I just want to make sure we've thought through all the implications. For example:
It's not as simple as just adding a
Let's put this PR on hold for now and open a discussion on the mechanism and design for this change. We can plan it out properly with a timeline and roll it out layer by layer. Sound good?
To update this: in a meeting, Jake and I discussed that we could provide host memory_resource types that are not necessarily exposed through the rmm::alloc API but are available to users. Until we have demand for it, though, we should wait.
Alright. What do we do with this PR?
Leave it open.
For the cudf-IO readers/writers, we would like a pinned host memory resource. We currently call cudaMallocHost/cudaFreeHost manually for allocations used to pack parsing/decoding/encoding metadata that we transfer between CPU and GPU.
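As a rough illustration of the cudaMallocHost/cudaFreeHost wrapping discussed above, here is a hypothetical sketch of a pinned-host memory resource. It uses the standard std::pmr::memory_resource interface rather than RMM's actual resource interface, and aligned operator new/delete stand in for the CUDA pinned-allocation calls so the example compiles without a CUDA toolkit; the class name pinned_memory_resource_sketch is invented for this example.

```cpp
#include <cstddef>
#include <memory_resource>
#include <new>

// Hypothetical sketch: a pinned-host memory resource following the
// std::pmr::memory_resource customization points. In a real RMM-style
// implementation, do_allocate would call cudaMallocHost and
// do_deallocate would call cudaFreeHost (checking the returned
// cudaError_t); aligned operator new/delete stand in here so the
// example is self-contained.
class pinned_memory_resource_sketch : public std::pmr::memory_resource {
  void* do_allocate(std::size_t bytes, std::size_t alignment) override {
    // Real version (assumption): cudaMallocHost(&p, bytes) and check status.
    return ::operator new(bytes, std::align_val_t{alignment});
  }

  void do_deallocate(void* p, std::size_t bytes, std::size_t alignment) override {
    // Real version (assumption): cudaFreeHost(p).
    ::operator delete(p, bytes, std::align_val_t{alignment});
  }

  bool do_is_equal(const std::pmr::memory_resource& other) const noexcept override {
    // Stateless resource: equal only to itself.
    return this == &other;
  }
};
```

With the real CUDA calls in place, a resource like this could back host-side staging buffers while reusing whatever pooling or logging layers are built on top of the common resource interface, which is the layering question the thread is debating.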
How would you like to be able to use it, though? As a resource for a
Probably the latter
FWIW, we could make good use of this on the Python side as well: we know when a user is explicitly copying data from device to host, we're always responsible for allocating that memory, and we have some control over that allocation.
Is this still needed now that PR #272 is in?
Closing this as it's been made obsolete by #272.
This PR aims to add pinned_memory_resource, following the specs posted to this issue.

Fixes #136