[FEA] BFS #902
Comments
I'm worried that this work may largely duplicate PR #838. If a distributed CSR is passed to it and the RAFT communication library is ready to use, turning it into an OPG implementation is not a big jump. The key parts to be addressed are building the distributed graph data structure (in a way that enables further optimization) and the RAFT handle & communicator.
Does PR #838 come with multi-node multi-GPU support? I don't see comms or an OPG Python API there. This issue is about a one-process-per-GPU version that leverages #838 locally, as listed in the description.
This does not include the Python binding; that PR is about the C++ part. It has a RAFT handle placeholder to be replaced with a real RAFT handle carrying comms: https://github.com/rapidsai/cugraph/pull/838/files. To extend this implementation to OPG, several places currently marked CUGRAPH_FAIL("unimplemented."); need to be filled in, e.g. cpp/include/detail/patterns/expand_and_transform_if_e.cuh line 326.
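For readers unfamiliar with the shape of the C++ part being discussed, here is a minimal host-side sketch of BFS over a local CSR. This is plain C++ with hypothetical names (`csr_view`, `bfs`), not the actual cugraph API; the RAFT handle/comms argument mentioned above is omitted since this sketch runs serially on one process.

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical local CSR view: offsets has num_vertices + 1 entries,
// indices holds the concatenated adjacency lists.
struct csr_view {
  std::vector<int32_t> offsets;
  std::vector<int32_t> indices;
};

// Serial BFS from `source`, returning hop distances (-1 = unreachable).
// In the real API a RAFT handle (and, for OPG, comms) would also be passed.
std::vector<int32_t> bfs(csr_view const& csr, int32_t source)
{
  auto n = static_cast<int32_t>(csr.offsets.size()) - 1;
  std::vector<int32_t> dist(n, -1);
  std::queue<int32_t> q;
  dist[source] = 0;
  q.push(source);
  while (!q.empty()) {
    int32_t u = q.front();
    q.pop();
    for (int32_t e = csr.offsets[u]; e < csr.offsets[u + 1]; ++e) {
      int32_t v = csr.indices[e];
      if (dist[v] == -1) {  // first discovery fixes the BFS distance
        dist[v] = dist[u] + 1;
        q.push(v);
      }
    }
  }
  return dist;
}
```

The single-GPU version replaces the queue with a device frontier; the OPG extension would additionally exchange frontier vertices between processes.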
Start with a 1D distribution consistent with the #485 design.
The C++ API accepts a local CSR and a RAFT handle/comms; ETL is handled at the Python level through #812 and #813.
At a high level, the first simple BFS would consist of iteratively doing:
Analyze and decide between optimizing the 1D path further or starting to enable the 2D stack and ETL pipeline.
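As an illustration of the 1D-distributed flow sketched above, the following is a single-process simulation of level-synchronous BFS over a 1D vertex partition. The "ranks", the `partitioned_graph` type, and the frontier merge standing in for a comms allgather are all hypothetical; this is a sketch of the iteration pattern, not the cugraph implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Single-process simulation of a 1D vertex partition: vertices are split
// into contiguous ranges, one per hypothetical "rank".
struct partitioned_graph {
  std::vector<int32_t> offsets;       // global CSR row offsets
  std::vector<int32_t> indices;       // global CSR column indices
  std::vector<int32_t> range_starts;  // rank r owns [range_starts[r], range_starts[r+1])
};

std::vector<int32_t> bfs_1d(partitioned_graph const& g, int32_t source)
{
  auto n         = static_cast<int32_t>(g.offsets.size()) - 1;
  auto num_ranks = static_cast<int32_t>(g.range_starts.size()) - 1;
  std::vector<int32_t> dist(n, -1);
  std::vector<int32_t> frontier{source};
  dist[source]  = 0;
  int32_t level = 0;
  while (!frontier.empty()) {
    std::vector<int32_t> next;  // merged next frontier across all "ranks"
    for (int32_t r = 0; r < num_ranks; ++r) {
      // Each rank expands only the frontier vertices it owns.
      for (int32_t u : frontier) {
        if (u < g.range_starts[r] || u >= g.range_starts[r + 1]) { continue; }
        for (int32_t e = g.offsets[u]; e < g.offsets[u + 1]; ++e) {
          int32_t v = g.indices[e];
          if (dist[v] == -1) {
            dist[v] = level + 1;
            next.push_back(v);
          }
        }
      }
    }
    // In a real OPG run this merge would be an allgather of per-rank
    // frontiers over the RAFT communicator.
    frontier = std::move(next);
    ++level;
  }
  return dist;
}
```

Each iteration does exactly the two things an OPG BFS level must do: local frontier expansion over owned vertices, then a global exchange of newly discovered vertices before the next level.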