[BUG] Importing cuml causes all Dask partitions to associate with GPU 0 #5206
Labels: bug, Dask / cuml.dask
**Describe the bug**

On a `LocalCUDACluster` with multiple GPUs, I am observing that all Dask partitions are allocated to GPU 0, causing XGBoost to error out. Oddly, removing `import cuml` fixes the problem.

**Steps/Code to reproduce bug**
Run this Python script:
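(The original script is not reproduced here; the following is a minimal sketch of what such a reproducer could look like, with illustrative names and sizes, assuming a multi-GPU machine with dask-cuda, cupy, and xgboost installed.)

```python
# Sketch of a reproducer matching the description; not the exact script
# from the report.
import cuml  # with this line commented out, the script succeeds

import cupy as cp
import dask.array as da
import xgboost as xgb
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

if __name__ == "__main__":
    # One worker per GPU; each partition should live on its worker's GPU.
    with LocalCUDACluster() as cluster, Client(cluster) as client:
        X = da.random.random((100_000, 20), chunks=(25_000, 20))
        y = da.random.random(100_000, chunks=25_000)
        # Move each partition onto the GPU of the worker that holds it.
        X = X.map_blocks(cp.asarray)
        y = y.map_blocks(cp.asarray)
        dtrain = xgb.dask.DaskDMatrix(client, X, y)
        # With `import cuml` present, every partition instead lands on
        # GPU 0 and GPU training errors out.
        xgb.dask.train(
            client,
            {"tree_method": "gpu_hist"},
            dtrain,
            num_boost_round=10,
        )
```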
With `import cuml` commented out, the Python program runs successfully. If `import cuml` is un-commented, we get an error. This is because all the Dask partitions were allocated to GPU 0; see the output from `nvidia-smi`.
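(As a programmatic stand-in for `nvidia-smi`, an illustrative snippet that is not part of the original report: pynvml can report per-GPU memory use, and in the failing case only GPU 0 shows significant usage.)

```python
# Print memory in use on every physical GPU, mirroring `nvidia-smi`.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {mem.used / 2**20:.0f} MiB in use")
pynvml.nvmlShutdown()
```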
**Expected behavior**
Importing cuML should not affect the device placement of Dask array partitions.
**Environment details (please complete the following information):**