
Add missing atomic operators, refactor atomic operators, move atomic operators to detail namespace. #14962

Merged
13 commits merged into rapidsai:branch-24.04 on Mar 12, 2024

Conversation

@bdice (Contributor) commented Feb 3, 2024

Description

This PR does a thorough refactoring of device_atomics.cuh.

  • I moved all atomic-related functions to cudf::detail:: (making this an API-breaking change, but most likely a low-impact break)
  • I added all missing operators for natively supported types to atomicAdd, atomicMin, atomicMax, etc., as discussed in [BUG] Sum and multiply aggregations promote unsigned input types to a signed output #10149 and Revert sum/product aggregation to always produce int64_t type #14907 (see the first sketch after this list).
  • I kept atomicAdd rather than cudf::detail::atomic_add in locations where a native CUDA overload exists, and the same for min/max/CAS operations. Aggregations are the only place where we use the special overloads. We were previously calling the native CUDA function rather than our special overloads in many cases, so I retained the previous behavior. This avoids including the additional headers that implement an unnecessary level of wrapping for natively supported overloads.
  • I enabled native 2-byte CAS operations (on unsigned short int), which eliminate the do-while loop and the extra alignment-checking logic (see the second sketch after this list).
    • The CUDA docs don't state this, but some forum posts claim 2-byte CAS is only supported on compute capability 7.0+. We now have 7.0 as a lower bound for RAPIDS, so I'm not concerned about this as long as builds/tests pass.
  • I improved and cleaned up the documentation and moved some code around so that the operators are in a logical order.
  • I assessed the existing tests, and it looks like all the types are covered. I'm not sure whether there is a good way to enforce that certain types (like uint64_t) pass through native atomicAdd calls.
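
To make the wrapper pattern from the list above concrete, here is a minimal sketch, assuming a cudf::detail::atomic_add that forwards to the native CUDA overload when one exists and emulates the operation with a CAS loop otherwise. This is an illustration only, not the actual cudf implementation.

```cuda
// Sketch only, not cudf's actual implementation. Compiles as device code with
// nvcc; atomicAdd/atomicCAS are CUDA built-ins and need no extra headers.
namespace cudf::detail {

// Natively supported type: forward straight to the CUDA intrinsic.
__device__ inline unsigned long long atomic_add(unsigned long long* address,
                                                unsigned long long value)
{
  return atomicAdd(address, value);
}

// Type without a native atomicAdd overload: emulate with a compare-and-swap
// loop on a same-sized unsigned word (two's-complement addition wraps the
// same way, so the unsigned add below yields the correct signed result).
__device__ inline long long atomic_add(long long* address, long long value)
{
  auto* target           = reinterpret_cast<unsigned long long*>(address);
  unsigned long long old = *target;
  unsigned long long assumed;
  do {
    assumed = old;
    old = atomicCAS(target, assumed, assumed + static_cast<unsigned long long>(value));
  } while (assumed != old);
  return static_cast<long long>(old);
}

}  // namespace cudf::detail
```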

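And a sketch of the 2-byte CAS bullet: with the native unsigned short int overload of atomicCAS (compute capability 7.0+, per the forum posts cited above), a 16-bit compare-and-swap no longer needs the word-aligned do-while emulation. The helper name here is hypothetical.

```cuda
// Hypothetical helper, for illustration: a 16-bit CAS using the native
// unsigned short int overload of atomicCAS. Previously this required loading
// the containing 32-bit word, splicing in the 16-bit value, and retrying in a
// do-while loop with alignment checks.
__device__ inline unsigned short int atomic_cas_u16(unsigned short int* address,
                                                    unsigned short int compare,
                                                    unsigned short int value)
{
  return atomicCAS(address, compare, value);  // requires sm_70 or newer
}
```
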
Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@github-actions bot added the libcudf (Affects libcudf (C++/CUDA) code.) label on Feb 3, 2024
@bdice bdice changed the title from "Refactor atomic operators, move to detail namespace." to "Add missing atomic operators, and refactor/move atomic operators to detail namespace." on Feb 3, 2024
@karthikeyann (Contributor) commented:

> I changed all uses of atomicAdd to use cudf::detail::atomic_add. I'd like to retain the previous behavior.

I support retaining the previous behaviour. Otherwise, it introduces another templated header into the include dependencies of other files. The templated atomicAdd exists to support types that CUDA does not natively support; it is not required for these other files.
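
To illustrate the include-dependency point, here is a hypothetical kernel (not from this PR): code that only needs a natively supported overload can call atomicAdd directly and never pull the templated detail header into its translation unit.

```cuda
// Hypothetical example: counting set flags only needs the native int overload
// of atomicAdd, so no cudf::detail atomics header has to be included here.
__global__ void count_set_flags(bool const* flags, int n, int* counter)
{
  auto const idx = static_cast<int>(blockIdx.x * blockDim.x + threadIdx.x);
  if (idx < n && flags[idx]) {
    atomicAdd(counter, 1);  // native CUDA overload; no wrapper needed
  }
}
```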

@bdice bdice changed the title from "Add missing atomic operators, and refactor/move atomic operators to detail namespace." to "Add missing atomic operators, refactor atomic operators, move atomic operators to detail namespace." on Feb 4, 2024
@bdice bdice self-assigned this Feb 4, 2024
@bdice bdice added the improvement (Improvement / enhancement to an existing function) and breaking (Breaking change) labels on Feb 4, 2024
@GregoryKimball (Contributor) commented:

@SurajAralihalli FYI

@bdice bdice marked this pull request as ready for review March 9, 2024 01:27
@bdice bdice requested a review from a team as a code owner March 9, 2024 01:27
@bdice bdice requested review from davidwendt and nvdbaranec March 9, 2024 01:27
@GregoryKimball (Contributor) commented:

@SurajAralihalli Would you like to share a review on this change?

@bdice (Contributor, Author) commented Mar 12, 2024:

/merge

@rapids-bot rapids-bot bot merged commit 155405b into rapidsai:branch-24.04 Mar 12, 2024
73 checks passed
Labels
breaking: Breaking change
improvement: Improvement / enhancement to an existing function
libcudf: Affects libcudf (C++/CUDA) code.