[14.1.X] Generalize warp size in pixel clustering kernel #46444
Conversation
A new Pull Request was created by @AuroraPerego for CMSSW_14_1_X. It involves the following packages:
@cmsbuild, @jfernan2, @mandrenguyen can you please review it and eventually sign? Thanks. cms-bot commands are listed here
cms-bot internal usage
please test
+1 Size: This PR adds an extra 12KB to the repository. Comparison Summary:
+1
This pull request is fully signed and it will be integrated in one of the next CMSSW_14_1_X IBs (tests are also fine) once validation in the development release cycle CMSSW_14_2_X is complete. This pull request will now be reviewed by the release team before it's merged. @rappoccio, @sextonkennedy, @mandrenguyen, @antoniovilela (and backports should be raised in the release meeting by the corresponding L2)
backport of #46426
What's the motivation to backport this PR to 14_1_X?
We need it in 14.0.x to test and measure the performance of the HLT on AMD GPUs. Usually, when making a backport for a given release (e.g. 14.0), we were asked to also make it for the more recent release (so 14.1). It might also be useful if we later decide to test the Heavy Ions HLT on AMD GPUs.
+1 |
PR description:
In the `FindClus` kernel there was an assumption that the warp size is equal to 32, which is true for NVIDIA GPUs but not always for AMD GPUs. Now the warp size is taken from the `__AMDGCN_WAVEFRONT_SIZE` macro for HIP and set to 32 for CUDA. Note that `alpaka::warp::getSize(acc)` cannot be used because it is evaluated at runtime.

If this PR is a backport please specify the original PR and why you need to backport that PR. If this PR will be backported please specify to which release cycle the backport is meant for:
backport of #46426
FYI @fwyzard
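
For illustration, a minimal standalone sketch of the compile-time warp-size selection described in the PR description. This is not the actual CMSSW patch: the constant name `kWarpSize` is hypothetical, and the only assumption used is that the HIP compiler defines `__AMDGCN_WAVEFRONT_SIZE` when targeting AMD GPUs.

```cpp
// Minimal sketch (not the actual CMSSW code): pick the warp/wavefront size at
// compile time. On AMD GPU builds the HIP compiler defines
// __AMDGCN_WAVEFRONT_SIZE (32 or 64, depending on the architecture), while
// NVIDIA GPUs always use a warp size of 32.
#if defined(__AMDGCN_WAVEFRONT_SIZE)
inline constexpr int kWarpSize = __AMDGCN_WAVEFRONT_SIZE;  // HIP / AMD GPU build
#else
inline constexpr int kWarpSize = 32;                       // CUDA / NVIDIA GPU build
#endif

// Because kWarpSize is a compile-time constant, it can size shared-memory
// arrays and drive constexpr logic inside the kernel; a runtime query such as
// alpaka::warp::getSize(acc) could not be used in those contexts.
```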