
[BUG] JNI side is not setting the proper alignment for the limiting_resource_adaptor with ASYNC #10384

Closed
abellina opened this issue Mar 2, 2022 · 3 comments · Fixed by #10395
Labels
bug Something isn't working

Comments

abellina (Contributor) commented Mar 2, 2022

In RmmJni, our use of `limiting_resource_adaptor` doesn't appear to set the proper alignment when wrapping the ASYNC allocator.

https://github.com/rapidsai/cudf/blob/branch-22.04/java/src/main/native/src/RmmJni.cpp#L360

The constructor of this adaptor takes an argument to set the alignment, and it defaults to 256 bytes. From experiments, the ASYNC allocator appears to align to 512 bytes, so we will be undercounting if we try to track the amount of memory available in the pool.
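
For illustration only, here is a minimal sketch of passing an explicit alignment when wrapping the async resource. This is not the RmmJni.cpp code; the `(upstream, allocation_limit, alignment)` constructor shape and the 512-byte value are assumptions based on the rmm headers and the experiments above.

```cpp
#include <rmm/mr/device/cuda_async_memory_resource.hpp>
#include <rmm/mr/device/limiting_resource_adaptor.hpp>

#include <cstddef>

// Hypothetical sketch, not the actual RmmJni.cpp code: wrap the async resource
// with a limiting adaptor whose bookkeeping uses the pool's observed alignment.
void make_limited_async(std::size_t pool_limit_bytes)
{
  static rmm::mr::cuda_async_memory_resource async_mr{};

  // Assumption: the adaptor's constructor is (upstream, allocation_limit, alignment)
  // and the alignment defaults to 256 bytes when omitted.
  constexpr std::size_t ASYNC_ALIGNMENT = 512;  // observed experimentally, not guaranteed
  static rmm::mr::limiting_resource_adaptor<rmm::mr::cuda_async_memory_resource> limited{
      &async_mr, pool_limit_bytes, ASYNC_ALIGNMENT};
}
```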

abellina added the bug (Something isn't working) and Needs Triage (Need team to review and classify) labels on Mar 2, 2022
abellina (Contributor, Author) commented Mar 2, 2022

Our tracking wrapper should also be updated to match 512 for ASYNC.
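
As a rough illustration of the undercount, a rounding helper like the one below shows how the same request is counted under each alignment. The `align_up` helper is hypothetical, not the actual tracking code in RmmJni.cpp.

```cpp
#include <cstddef>

// Illustrative helper only: round a requested size up to the allocator's
// alignment so the tracked total matches what the pool actually consumes.
constexpr std::size_t align_up(std::size_t size, std::size_t alignment)
{
  return (size + alignment - 1) & ~(alignment - 1);  // alignment must be a power of two
}

static_assert(align_up(700, 256) == 768);   // what a 256-byte adaptor counts
static_assert(align_up(700, 512) == 1024);  // what the async pool actually consumes
```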

rongou (Contributor) commented Mar 2, 2022

The CUDA programming guide only specifies:

> Any address of a variable residing in global memory or returned by one of the memory allocation routines from the driver or runtime API is always aligned to at least 256 bytes.

The actual alignment size seems to be an internal decision of the driver/runtime, not a public API.

abellina (Contributor, Author) commented Mar 2, 2022

> The actual alignment size seems to be an internal decision of the driver/runtime, not a public API.

Thanks @rongou. In that case we shouldn't rely on the alignment we have observed in experiments. It also seems we can't really limit the pool size as intended with `limiting_resource_adaptor`, since the API makes no guarantee about the alignment.

rapids-bot (bot) pushed a commit that referenced this issue on Mar 8, 2022
We use the `limiting_resource_adaptor` in front of the async allocator to track the total size of allocations and trigger spills. By default the adaptor aligns to 256 bytes, but the async allocator internally aligns to 512 bytes. This causes the limiting adaptor to undercount allocation sizes, leading to OOMs in the async allocator. This PR changes the limiting adaptor to also use 512. This seems to help with a customer query that was previously hitting OOMs.

Note that CUDA is planning to switch back to 256 in the future so this may need to be revisited.

Fixes #10384

Authors:
  - Rong Ou (https://github.com/rongou)

Approvers:
  - Jason Lowe (https://github.com/jlowe)

URL: #10395
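
Since CUDA may change the granularity again, one rough way to re-check it is to compare the spacing of two tiny allocations from the async pool. This is a sketch only; adjacent placement is an assumption the driver does not guarantee, so treat the result as a hint rather than the official alignment.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Rough, unofficial probe: the gap between two tiny cudaMallocAsync allocations
// hints at the pool's granularity. Adjacent placement is an assumption, not a guarantee.
int main()
{
  cudaStream_t stream;
  cudaStreamCreate(&stream);

  void *a = nullptr, *b = nullptr;
  cudaMallocAsync(&a, 1, stream);
  cudaMallocAsync(&b, 1, stream);
  cudaStreamSynchronize(stream);

  std::printf("spacing between allocations: %lld bytes\n",
              static_cast<long long>(static_cast<char*>(b) - static_cast<char*>(a)));

  cudaFreeAsync(a, stream);
  cudaFreeAsync(b, stream);
  cudaStreamSynchronize(stream);
  cudaStreamDestroy(stream);
  return 0;
}
```
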
bdice removed the Needs Triage (Need team to review and classify) label on Mar 4, 2024