
Change binops for-each kernel to thrust::for_each_n #17419

Merged
merged 4 commits into rapidsai:branch-25.02 on Nov 26, 2024

Conversation

davidwendt
Contributor

Description

Replaces the custom for_each_kernel in binary_ops.cuh with thrust::for_each_n.
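The general shape of such a replacement, sketched with illustrative names and signatures (the actual code in binary_ops.cuh may differ):

```cuda
// Before (sketch): a hand-written grid-stride kernel plus an explicit launch.
template <typename Functor>
__global__ void for_each_kernel(cudf::size_type size, Functor f)
{
  auto const start  = threadIdx.x + blockIdx.x * blockDim.x;
  auto const stride = blockDim.x * gridDim.x;
  for (auto idx = start; idx < size; idx += stride) { f(idx); }
}

// After (sketch): thrust::for_each_n over a counting iterator.
// Thrust chooses the launch configuration internally.
#include <thrust/for_each.h>
#include <thrust/iterator/counting_iterator.h>

template <typename Functor>
void for_each(rmm::cuda_stream_view stream, cudf::size_type size, Functor f)
{
  thrust::for_each_n(rmm::exec_policy(stream),
                     thrust::counting_iterator<cudf::size_type>(0),
                     size,
                     f);
}
```

Beyond removing custom kernel code, this delegates block/grid sizing to Thrust, which can tune it per architecture.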

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@davidwendt davidwendt added 2 - In Progress Currently a work in progress libcudf Affects libcudf (C++/CUDA) code. improvement Improvement / enhancement to an existing function non-breaking Non-breaking change labels Nov 22, 2024
@davidwendt davidwendt self-assigned this Nov 22, 2024
@davidwendt
Contributor Author

davidwendt commented Nov 22, 2024

Binary-ops benchmark runs comparison for RTX A6000
binops-benchmarks.txt

There are some interesting regressions but also some significant speedups.

@davidwendt
Contributor Author

davidwendt commented Nov 22, 2024

Using exec_policy_nosync shows much better results overall.
binops-benchmarks2.txt
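For reference, the nosync variant differs only in the execution policy passed to the algorithm. A sketch, assuming the same illustrative wrapper as above: thrust::cuda::par (used by rmm::exec_policy) may synchronize the stream after each algorithm, while the par_nosync-based rmm::exec_policy_nosync skips that implicit synchronization, which is safe when the caller manages stream ordering itself.

```cuda
#include <thrust/for_each.h>
#include <thrust/iterator/counting_iterator.h>

template <typename Functor>
void for_each(rmm::cuda_stream_view stream, cudf::size_type size, Functor f)
{
  // exec_policy_nosync avoids the implicit stream synchronization that the
  // default policy can perform after the algorithm completes.
  thrust::for_each_n(rmm::exec_policy_nosync(stream),
                     thrust::counting_iterator<cudf::size_type>(0),
                     size,
                     f);
}
```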

@davidwendt davidwendt added 3 - Ready for Review Ready for review by team and removed 2 - In Progress Currently a work in progress labels Nov 22, 2024
@davidwendt davidwendt marked this pull request as ready for review November 22, 2024 21:22
@davidwendt davidwendt requested a review from a team as a code owner November 22, 2024 21:22
@davidwendt
Contributor Author

/merge

@rapids-bot rapids-bot bot merged commit 4e3afd2 into rapidsai:branch-25.02 Nov 26, 2024
105 checks passed
@davidwendt davidwendt deleted the binops-for-each branch November 26, 2024 01:00
@karthikeyann
Contributor

karthikeyann commented Nov 26, 2024

Interesting!
PR NVIDIA/cccl#1302 introduced cub::DeviceFor::ForEachN along with further improvements to thrust::for_each. It would be worth considering replacing all other custom for_each-like kernels with thrust::for_each.
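A minimal sketch of the CUB entry point mentioned above, per recent CCCL releases; the function name and the device lambda are illustrative:

```cuda
#include <cub/device/device_for.cuh>
#include <thrust/iterator/counting_iterator.h>

// Apply a device lambda to indices [0, size) using cub::DeviceFor::ForEachN.
void apply_for_each(cudaStream_t stream, int size)
{
  auto op = [] __device__(int idx) { /* per-element work */ };
  cub::DeviceFor::ForEachN(thrust::counting_iterator<int>{0}, size, op, stream);
}
```

Unlike most CUB device-scope algorithms, DeviceFor::ForEachN needs no temporary storage, so there is no two-phase (size query, then run) call pattern.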

Labels
3 - Ready for Review · improvement · libcudf · non-breaking
4 participants