Avoid "p2p" shuffle as a default when dask_cudf
is imported
#15469
Conversation
reason="Machine does not have more than three GPUs", | ||
) | ||
def test_unique(): | ||
with dask_cuda.LocalCUDACluster(n_workers=3) as cluster: |
Needed >2 workers to reproduce the error locally. Not sure what the problem is yet, but the to-pyarrow dispatch is failing to register on one or more workers.
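For context, a hypothetical sketch of the kind of multi-worker reproduction described here; the data, partition count, and assertion below are illustrative stand-ins, not the actual test:

```python
# Hypothetical reproduction sketch: the failure only showed up with more than
# two workers on a LocalCUDACluster. Values below are illustrative only.
import cudf
import dask_cuda
import dask_cudf
from distributed import Client


def test_unique():
    with dask_cuda.LocalCUDACluster(n_workers=3) as cluster:
        with Client(cluster):
            df = cudf.DataFrame({"a": [1, 2, 3] * 10})
            ddf = dask_cudf.from_cudf(df, npartitions=6)
            # unique() can shuffle data between workers, which exercises the
            # to-pyarrow dispatch registration on every worker
            assert set(ddf["a"].unique().compute().to_pandas()) == {1, 2, 3}
```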
So that means the error is not reproducible when the P2P shuffle is disabled, and this test confirms that, right?
This PR does something pretty simple, but the background is slightly confusing:

- Using the latest release of dask, this test would fail without the global "tasks" config that is set in this PR.
- After dask/dask#11040 (Add lazy "cudf" registration for p2p-related dispatch functions), this test will also pass when the "p2p" shuffle is used.
- Even though "p2p" works for dask:main, I still think it makes sense to use the "tasks" default (at least for now). Although "p2p" should theoretically be more stable, I have not found this to be the case in practice for GPUs. Also, "tasks" is definitely faster.
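For reference, a minimal sketch of pinning the shuffle method on the user side, assuming the "dataframe.shuffle.method" config key used by recent dask releases (the PR applies an equivalent default when dask_cudf is imported):

```python
import dask

# Prefer the task-based shuffle over "p2p" for all dataframe shuffles
dask.config.set({"dataframe.shuffle.method": "tasks"})

# The same key can be scoped to a single block of work instead:
with dask.config.set({"dataframe.shuffle.method": "tasks"}):
    pass  # run shuffle-heavy operations (merge, sort_values, ...) here
```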
Mostly semantic suggestions, which are not really blockers; otherwise LGTM.
@@ -54,7 +54,7 @@ def test_merge():
 @pytest.mark.skipif(
-    not more_than_two_gpus(), reason="Machine does not have more than two GPUs"
+    not more_than_n_gpus(2), reason="Machine does not have more than two GPUs"
Suggested change:
-    not more_than_n_gpus(2), reason="Machine does not have more than two GPUs"
+    not more_than_n_gpus(2), reason="Machine does not have at least two GPUs"
Just like the message was misleading, I believe the function name is too, and I would suggest renaming it to at_least_n_gpus or something more accurate.
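For illustration, one possible shape for such a helper (hypothetical, not the utility used by these tests), counting visible devices via NVML:

```python
import pynvml


def at_least_n_gpus(n: int) -> bool:
    """Return True if the machine exposes at least ``n`` NVIDIA GPUs."""
    try:
        pynvml.nvmlInit()
        return pynvml.nvmlDeviceGetCount() >= n
    except pynvml.NVMLError:
        # No driver / no devices visible
        return False
```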
 @pytest.mark.skipif(
     not more_than_n_gpus(3),
     reason="Machine does not have more than three GPUs",
reason="Machine does not have more than three GPUs", | |
reason="Machine does not have at least three GPUs", |
reason="Machine does not have more than three GPUs", | ||
) | ||
def test_unique(): | ||
with dask_cuda.LocalCUDACluster(n_workers=3) as cluster: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
So that means the error is not reproducible by disabling P2P shuffle and this test confirms that, right?
/merge
Description
I was looking through some dask-related test failures in rapidsai/cuml#5819 and noticed that the "p2p" shuffle is causing some problems when query-planning is enabled. This PR sets the global default to "tasks". It may make sense to roll back this change once we fix the underlying problem(s), but I doubt it.
Checklist