Update cuDF merge benchmark #867
Conversation
Since this benchmark is broken due to changes in CUDA context handling, it would be good if this still made 0.27 (22.08), despite the ongoing burndown. As this is only a benchmark, it should not impact the release in any way, since it is not executed anywhere as part of that process.
Can you explain what you mean by this?
Minor nitpicks, otherwise looks good.
cuDF creates a CUDA context at import time, which wasn't the case in the past, and is the reason we have rapidsai/dask-cuda#379 in Dask-CUDA, if we don't set …
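The ordering constraint described above can be sketched as follows. This is a hypothetical illustration, not code from the PR: because cuDF creates a CUDA context at import time, any device restriction via `CUDA_VISIBLE_DEVICES` has to be in the environment before the import runs.

```python
import os

# The environment variable must be set *before* `import cudf`, because
# the CUDA context is created as a side effect of the import itself.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import cudf  # the context would now be created only on device 0
```

Setting the variable after the import would have no effect on the already-created context.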
Change the clock to `time.monotonic()` to prevent issues with the clock going backwards on some systems.
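A minimal sketch of the clock change (the variable names are illustrative, not from the benchmark): `time.monotonic()` is guaranteed never to go backwards, unlike `time.time()`, which can jump if the system clock is adjusted (e.g. by NTP), producing negative or wildly wrong durations.

```python
import time

# Measure an interval with a monotonic clock; the result can never be
# negative, regardless of system clock adjustments during the interval.
start = time.monotonic()
# ... timed section of the benchmark would run here ...
elapsed = time.monotonic() - start
```

With `time.time()` the same subtraction could yield a negative `elapsed` if the wall clock stepped backwards mid-run.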
FWIW, the only niggle I have left here is that the …
The RMM pool being a problem is expected with the default value; that's when the user will have to adjust …
I had naively thought that we would get … Since this isn't a substantive change from the current behaviour, let's not pollute this PR with too many additional pieces (sorry, this is my fault for noticing!) and leave it as-is for now.
Co-authored-by: Lawrence Mitchell <[email protected]>
This reverts commit 2159d1d.
I think all changes are in now; could you check one last time/approve the PR @wence- ?
@gpucibot merge
I am happy to merge this, though does it need further approval to go to 0.27 (rather than 0.28)?
UCX-Py doesn't follow exactly the same conditions as the rest of RAPIDS, and given there are no other required reviewers, we should be good to merge. Also, gpucibot has no power here. :)
OK, let's wait for tests.
For the very small benchmark in CI, the assertion failed: the maximum allowed error was 30, but the actual value was 31. Since this is not critical, I have increased the tolerance to 2%.
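A hedged sketch of what a relative-tolerance check like the one described above might look like (the function name and signature are hypothetical, not the benchmark's actual code): instead of a fixed absolute bound, the deviation is allowed to be up to 2% of the expected value.

```python
def within_tolerance(actual, expected, rel_tol=0.02):
    """Return True if `actual` deviates from `expected` by at most
    `rel_tol` (relative tolerance, 2% by default)."""
    return abs(actual - expected) <= rel_tol * expected
```

For a small CI run an off-by-one result (e.g. 31 where the bound was 30) would pass under such a relative check as long as the expected size is large enough.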
And it seems I was wrong. We have to ask for approval. |
LGTM
This adds several bugfixes and improvements:
- … `CUDA_VISIBLE_DEVICES` …;
- … `"key"` column and asserts expected result size;
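The merge-and-assert step listed above can be sketched as follows. This uses pandas as a runnable stand-in for cuDF (the two share a deliberately similar DataFrame API); the frame contents and sizes are illustrative, not the benchmark's actual data.

```python
import pandas as pd  # stand-in for cudf in this sketch

# Build two small frames sharing a "key" column, merge on it, and
# assert the merged result has the expected number of rows.
left = pd.DataFrame({"key": range(10), "a": range(10)})
right = pd.DataFrame({"key": range(10), "b": range(10)})

merged = left.merge(right, on="key")
assert len(merged) == 10  # one-to-one keys: result size equals input size
```

In the real benchmark the equivalent merge would run on GPU DataFrames, and the size assertion would use the tolerance discussed earlier in the thread rather than strict equality.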