[FEA] cuDF Parquet reader should use pinned memory to copy data from sysmem to GPU #6376
Comments
This issue has been marked rotten due to no recent activity in the past 90d. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.
We are waiting on having a pool of pinned memory in RMM.
Once we switch to kvikIO for device reads, data from the input file will be transferred via pinned buffers in kvikIO's pool. I assume this will address the issue, since that is the bulk of the data copied onto the GPU.
Is your feature request related to a problem? Please describe.
NVTabular is building an asynchronous dataloader to accelerate tabular data loading for DL frameworks like PyTorch and TensorFlow. The primary function of this async dataloader is to prepare training data by reading Parquet input files and building input tensors for training. Currently, the Parquet reader in cuDF uses pageable memory to copy Parquet input to the GPU for decompression. A host-to-device memcpy from pageable memory takes a lock in the CUDA context, which blocks other CUDA API calls submitted from the framework's training thread. The Nsight profiler image at the end illustrates the problem.
Describe the solution you'd like
Memory-mapped Parquet input files can first be staged in a pinned system memory buffer before issuing the H2D memcpy. This adds the overhead of an extra CPU memcpy, but it would be worth implementing to measure the performance improvements (or regressions) from this approach.
Additional context
@jperez999 @benfred @EvenOldridge to help further define NVT priority for this.