MNMG Neighborhood Sampling #2073

Merged: 73 commits from fea_ext_mnmg_mfg into branch-22.04, Mar 17, 2022

Commits
acf15bc
Merge pull request #37 from rapidsai/branch-0.17
aschaffer Nov 30, 2020
a584e0b
Merge pull request #38 from rapidsai/branch-0.18
aschaffer Dec 9, 2020
338b2d4
Merge pull request #39 from rapidsai/branch-0.18
aschaffer Dec 30, 2020
efd7e9a
Merge pull request #40 from rapidsai/branch-0.18
aschaffer Jan 12, 2021
a23ce0d
Merge pull request #41 from rapidsai/branch-0.19
aschaffer Feb 25, 2021
22b32d8
Merge pull request #42 from rapidsai/branch-0.19
aschaffer Mar 1, 2021
e5042fc
Merge pull request #43 from rapidsai/branch-0.19
aschaffer Mar 20, 2021
ca02ee4
Merge pull request #44 from rapidsai/branch-0.19
aschaffer Apr 7, 2021
79409e7
Merge pull request #45 from rapidsai/branch-0.20
aschaffer Apr 13, 2021
360c293
Merge pull request #46 from rapidsai/branch-0.20
aschaffer Apr 28, 2021
e57f261
Merge pull request #47 from rapidsai/branch-0.20
aschaffer May 6, 2021
a63cdea
Merge pull request #48 from rapidsai/branch-21.08
aschaffer Jun 9, 2021
7e6bf4c
Merge branch 'rapidsai:branch-21.08' into branch-21.08
aschaffer Jun 15, 2021
6f3b342
Merge branch 'rapidsai:branch-21.08' into branch-21.08
aschaffer Jul 19, 2021
fbf5f36
Merge branch 'rapidsai:branch-21.10' into branch-21.10
aschaffer Aug 5, 2021
ffacad0
Merge branch 'rapidsai:branch-21.10' into branch-21.10
aschaffer Aug 13, 2021
8db2e74
Merge branch 'rapidsai:branch-21.10' into branch-21.10
aschaffer Aug 17, 2021
e560fd8
Merge branch 'rapidsai:branch-21.12' into branch-21.12
aschaffer Oct 11, 2021
a897416
Merge branch 'rapidsai:branch-22.02' into branch-22.02
aschaffer Dec 6, 2021
6a8000c
Merge branch 'rapidsai:branch-22.02' into fea_ext_mnmg_mfg
aschaffer Jan 26, 2022
8b363bb
Merge branch 'rapidsai:branch-22.04' into fea_ext_mnmg_mfg
aschaffer Feb 1, 2022
a8b3f84
Added NBR Sampling header and API.
aschaffer Feb 1, 2022
1a7660c
update on uniform_nbr_sample() to reflect latest requirements.
aschaffer Feb 9, 2022
72a90d0
project() version with zip iterators.
aschaffer Feb 10, 2022
a76a743
Added degree stub calls.
aschaffer Feb 11, 2022
1ad5e71
Merge branch 'branch-22.04' of github.com:rapidsai/cugraph into fea_e…
aschaffer Feb 11, 2022
fd0a383
Added steps up-to gather.
aschaffer Feb 11, 2022
7434d33
Union step.
aschaffer Feb 11, 2022
3a0953b
Minor fixes and updates.
aschaffer Feb 14, 2022
acd2b52
Added _impl/API. And shuffle scaffolding.
aschaffer Feb 14, 2022
e22200b
Shuffle step: shuffle_to_gpus().
aschaffer Feb 16, 2022
f2df103
Shuffle step: project to new input.
aschaffer Feb 16, 2022
c5c50ff
Merge branch 'branch-22.04' of github.com:rapidsai/cugraph into fea_e…
aschaffer Feb 16, 2022
225277f
Clean-up.
aschaffer Feb 17, 2022
b3b0304
Fixed some calls that had their API changed and refactored seeder_t.
aschaffer Feb 17, 2022
6c00250
TU for nbr_sampling, but need to fix linker errors.
aschaffer Feb 18, 2022
d284931
Fixed some of the linker errors.
aschaffer Feb 18, 2022
ecee30a
Merge branch 'branch-22.04' of github.com:rapidsai/cugraph into fea_e…
aschaffer Feb 22, 2022
199e18b
Fixed Rng call.
aschaffer Feb 23, 2022
91ad3dd
Fixed groupby_gpu_id_and_shuffle_values() call.
aschaffer Feb 23, 2022
8d310e4
Refcatored the shuffling-to-gpu step.
aschaffer Feb 23, 2022
ba9fc30
Removed some unnecessary copies.
aschaffer Feb 24, 2022
1e70cd0
Handle the removal of corresponding invalid entries from indices.
aschaffer Feb 24, 2022
4c703c7
Added shuffling to target GPUs.
aschaffer Feb 25, 2022
8550020
Merge branch 'branch-22.04' of github.com:rapidsai/cugraph into fea_e…
aschaffer Mar 1, 2022
d88c80e
Minor updates.
aschaffer Mar 1, 2022
79de6c3
Expose API.
aschaffer Mar 2, 2022
a1c06d0
Added nbr sampling test. Preliminiary.
aschaffer Mar 2, 2022
08ad3da
More test checking features and refactoring.
aschaffer Mar 4, 2022
c8ff67d
Added input gather for preliminary checking validity of tests.
aschaffer Mar 7, 2022
bd3e069
Checking results by coloring and rank affinity.
aschaffer Mar 7, 2022
5a89958
Added some safeguards and fixes.
aschaffer Mar 7, 2022
d75e64f
Debug NBR Sampling test.
aschaffer Mar 8, 2022
97dc281
Debug helpers (temporary).
aschaffer Mar 10, 2022
c6ffb50
More debugging helpers and sentry points.
aschaffer Mar 10, 2022
5d67c25
Fix by kaatish in the nbr sampling.
aschaffer Mar 11, 2022
488b352
Hide debug functionality behind ifdef fence.
aschaffer Mar 11, 2022
ca1a8a8
Enabled more tests.
aschaffer Mar 11, 2022
73fbe37
Enabled rmat tests.
aschaffer Mar 11, 2022
08cd16d
Enabled more rmat tests.
aschaffer Mar 11, 2022
1137531
Enabled all rmat tests.
aschaffer Mar 11, 2022
620dbb1
Added flag_replacement=false tests.
aschaffer Mar 11, 2022
cb70d7d
Cleaned up debug code.
aschaffer Mar 14, 2022
c47f0a7
Merge branch 'branch-22.04' of github.com:rapidsai/cugraph into fea_e…
aschaffer Mar 14, 2022
774b1b3
Re-activated a few gather utils tests. Re-run clang-format manually.
aschaffer Mar 14, 2022
e0a4200
Addressed code review comments.
aschaffer Mar 15, 2022
1fd0e50
Addressed more code review comments (rename src, test).
aschaffer Mar 15, 2022
481a4f6
Addressed more code review comments (rename impl.cuh).
aschaffer Mar 15, 2022
6f55fcf
Addressed more code review comments (re-activated disabled tests).
aschaffer Mar 15, 2022
626dbf8
Addressed more code review comments (gather_local_edges() index argum…
aschaffer Mar 15, 2022
cbedc59
CI patch.
aschaffer Mar 15, 2022
596277f
Reverted CI patch, as it did not help.
aschaffer Mar 16, 2022
ca4dcdb
Merge branch 'branch-22.04' into fea_ext_mnmg_mfg
aschaffer Mar 16, 2022
1 change: 1 addition & 0 deletions cpp/CMakeLists.txt
@@ -184,6 +184,7 @@ add_library(cugraph SHARED
src/sampling/neighborhood.cu
src/sampling/random_walks.cu
src/sampling/detail/gather_utils_impl.cu
src/sampling/nbr_sampling_mg.cu
src/cores/legacy/core_number.cu
src/cores/core_number_sg.cu
src/cores/core_number_mg.cu
35 changes: 35 additions & 0 deletions cpp/include/cugraph/algorithms.hpp
@@ -1538,4 +1538,39 @@ void core_number(raft::handle_t const& handle,
size_t k_last = std::numeric_limits<size_t>::max(),
bool do_expensive_check = false);

/**
* @brief Multi-GPU Uniform Neighborhood Sampling.
*
* @tparam graph_view_t Type of graph view.
* @tparam gpu_t Type of rank (GPU) indices;
* @tparam index_t Type used for indexing; typically edge_t
* @param handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator, and
* handles to various CUDA libraries) to run graph algorithms.
* @param graph_view Graph View object to generate NBR Sampling on.
* @param ptr_d_starting_vertices Device array of starting vertex IDs for the NBR Sampling.
* @param ptr_d_ranks Device array of rank IDs (GPU IDs) for the NBR Sampling.
* @param num_starting_vertices Size of the starting vertex set.
* @param h_fan_out Host vector of fan-out (branching) degrees per source vertex for each level;
* also used for obtaining local out-degree information.
* @param with_replacement Boolean flag specifying whether random sampling is done with
* replacement (true) or without replacement (false); default = true.
* @return tuple of tuple of device vectors and counts:
* ((vertex_t source_vertex, vertex_t destination_vertex, int rank, edge_t index), rx_counts)
*/
template <typename graph_view_t,
typename gpu_t,
typename index_t = typename graph_view_t::edge_type>
std::tuple<std::tuple<rmm::device_uvector<typename graph_view_t::vertex_type>,
rmm::device_uvector<typename graph_view_t::vertex_type>,
rmm::device_uvector<gpu_t>,
rmm::device_uvector<index_t>>,
std::vector<size_t>>
uniform_nbr_sample(raft::handle_t const& handle,
graph_view_t const& graph_view,
typename graph_view_t::vertex_type const* ptr_d_starting_vertices,
gpu_t const* ptr_d_ranks,
size_t num_starting_vertices,
std::vector<int> const& h_fan_out,
bool with_replacement = true);

} // namespace cugraph

Review discussion on the ptr_d_ranks parameter (rank vs. label):

Collaborator: A thought to consider. The name and template type name for this parameter seem to imply a particular implementation for the calling program. The notion of a rank is really an MPI notion that we use in our comms work. If a program calling this isn't using MPI or our comms, then it's not necessarily clear what they would do here. It seems to me that what we need is some sort of label that identifies where we want to direct the results of the sampling for the corresponding seed. Perhaps label_t and ptr_d_labels might be a bit more generic.

aschaffer (Collaborator, author) — Mar 15, 2022: This is targeting a very specific implementation with MPI (NCCL) using MPI/NCCL prims. For this very reason alone we should actually suggest the affiliation with ranks. In addition, this is what the user (the GNN consumer) wants to call them.

Contributor: FWIW, I also found the word rank confusing in this context at first, since it wasn't clear if it was intended to be a GPU rank for use by the sampling algo itself, or a rank used by the caller (i.e., a client may have their own notion of rank independent of the MG rank used by this algo). I think a more general-purpose term like label or id not only works in this case, but can also be used as a more generic mechanism for associating results with a caller's seeds. Lastly, the GNN client will likely not be calling this API directly anyway (they'll be using Python), so we can make something more application-specific for them in Python if necessary.

aschaffer (Collaborator, author): If users won't be calling this API directly, then they don't have to worry about what we call this internally. The word rank came up in GNN meetings and emerged as part of what that community of developers seemed comfortable with. And again, if rank suggests MPI, that's because it's supposed to. This implementation is very much tied to MPI (via NCCL) and is not generic enough to justify other names, like label. Other approaches, say NVSHMEM, won't have ranks, but then they wouldn't need the labels either. These so-called labels have one meaning only: ranks.
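For orientation, a minimal caller-side sketch of the new API (illustrative only: it assumes an initialized multi-GPU raft::handle_t and an existing MG graph view; the names run_uniform_sampling, mg_view, d_start, and d_ranks are made up for this example):

#include <cugraph/algorithms.hpp>

#include <raft/handle.hpp>
#include <rmm/device_uvector.hpp>

#include <vector>

void run_uniform_sampling(
  raft::handle_t const& handle,
  cugraph::graph_view_t<int32_t, int32_t, float, false, true> const& mg_view,
  rmm::device_uvector<int32_t> const& d_start,  // starting vertex IDs
  rmm::device_uvector<int32_t> const& d_ranks)  // rank (GPU) ID per starting vertex
{
  // Two-level sampling: 10 neighbors per seed at level 0, 25 at level 1.
  std::vector<int> h_fan_out{10, 25};

  auto [quads, rx_counts] = cugraph::uniform_nbr_sample(handle,
                                                        mg_view,
                                                        d_start.data(),
                                                        d_ranks.data(),
                                                        d_start.size(),
                                                        h_fan_out,
                                                        true /* with_replacement */);

  // quads holds (sources, destinations, rank IDs, edge indices) as device vectors;
  // rx_counts gives the per-rank receive counts from the final shuffle.
  auto& [d_src, d_dst, d_gpu_ids, d_edge_indices] = quads;
}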
28 changes: 15 additions & 13 deletions cpp/include/cugraph/detail/graph_functions.cuh
@@ -158,32 +158,34 @@ partition_information(raft::handle_t const& handle, GraphViewType const& graph_v
* Collect all the edges that are present in the adjacency lists on the current gpu
*
* @tparam GraphViewType Type of the passed non-owning graph object.
* @tparam EdgeIndexIterator Type of the iterator for edge indices.
* @tparam GPUIdIterator Type of the iterator for gpu id identifiers.
* @param handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator, and
* handles to various CUDA libraries) to run graph algorithms.
* @param graph_view Non-owning graph object.
* @param active_majors_in_row Device vector containing all the vertex id that are processed by
* @param[in] handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator,
* and handles to various CUDA libraries) to run graph algorithms.
* @param[in] graph_view Non-owning graph object.
* @param[in] active_majors_in_row Device vector containing all the vertex id that are processed by
* gpus in the column communicator
* @param active_major_gpu_ids Device vector containing the gpu id associated by every vertex
* @param[in] active_major_gpu_ids Device vector containing the gpu id associated by every vertex
* present in active_majors_in_row
* @param edge_index_first Iterator pointing to the first destination index
* @param indices_per_source Number of indices supplied for every source in the range
* @param[in] minor_map Device vector of destination indices (modifiable in-place) corresponding to
* vertex IDs being returned
* @param[in] indices_per_source Number of indices supplied for every source in the range
* [vertex_input_first, vertex_input_last)
* @param global_degree_offset Global degree offset to local adjacency list for every source
* @param[in] global_degree_offset Global degree offset to local adjacency list for every source
* represented by current gpu
* @return A tuple of device vector containing the majors, minors and gpu_ids gathered locally
* @return A tuple of device vector containing the majors, minors, gpu_ids and indices gathered
* locally
*/
template <typename GraphViewType, typename EdgeIndexIterator, typename gpu_t>
template <typename GraphViewType, typename gpu_t>
std::tuple<rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<gpu_t>>
rmm::device_uvector<gpu_t>,
rmm::device_uvector<typename GraphViewType::edge_type>>
gather_local_edges(
raft::handle_t const& handle,
GraphViewType const& graph_view,
const rmm::device_uvector<typename GraphViewType::vertex_type>& active_majors_in_row,
const rmm::device_uvector<gpu_t>& active_major_gpu_ids,
EdgeIndexIterator edge_index_first,
rmm::device_uvector<typename GraphViewType::edge_type>&& minor_map,
typename GraphViewType::edge_type indices_per_major,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_degree_offsets);
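One design point worth noting: minor_map is now passed by rvalue reference, filled and compacted in place, and handed back as the fourth element of the return tuple, so the (potentially large) index buffer is never copied. A minimal sketch of that ownership idiom in plain C++ (illustrative, not cugraph code):

#include <algorithm>
#include <utility>
#include <vector>

// Take ownership of a buffer, compact it in place, move it back out.
// This mirrors how gather_local_edges now receives and returns minor_map.
std::vector<int> compact_in_place(std::vector<int>&& buf, int invalid)
{
  buf.erase(std::remove(buf.begin(), buf.end(), invalid), buf.end());
  return std::move(buf);  // ownership returns to the caller, no copy
}

// Usage: auto kept = compact_in_place(std::move(minor_map), -1);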

130 changes: 71 additions & 59 deletions cpp/src/sampling/detail/gather_utils_impl.cu
@@ -203,65 +203,77 @@ template std::tuple<
partition_information(raft::handle_t const& handle,
graph_view_t<int64_t, int64_t, double, false, true> const& graph_view);

template std::
tuple<rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int32_t, float, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int32_t const* edge_index_first,
int32_t indices_per_major,
const rmm::device_uvector<int32_t>& global_degree_offsets);

template std::
tuple<rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int32_t, double, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int32_t const* edge_index_first,
int32_t indices_per_major,
const rmm::device_uvector<int32_t>& global_degree_offsets);

template std::
tuple<rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int64_t, float, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int64_t const* edge_index_first,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::
tuple<rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int64_t, double, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int64_t const* edge_index_first,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::
tuple<rmm::device_uvector<int64_t>, rmm::device_uvector<int64_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int64_t, int64_t, float, false, true> const& graph_view,
const rmm::device_uvector<int64_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int64_t const* edge_index_first,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::
tuple<rmm::device_uvector<int64_t>, rmm::device_uvector<int64_t>, rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int64_t, int64_t, double, false, true> const& graph_view,
const rmm::device_uvector<int64_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
int64_t const* edge_index_first,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);
template std::tuple<rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int32_t, float, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int32_t>&& minor_map,
int32_t indices_per_major,
const rmm::device_uvector<int32_t>& global_degree_offsets);

template std::tuple<rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int32_t, double, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int32_t>&& minor_map,
int32_t indices_per_major,
const rmm::device_uvector<int32_t>& global_degree_offsets);

template std::tuple<rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int64_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int64_t, float, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int64_t>&& minor_map,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::tuple<rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int64_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int32_t, int64_t, double, false, true> const& graph_view,
const rmm::device_uvector<int32_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int64_t>&& minor_map,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::tuple<rmm::device_uvector<int64_t>,
rmm::device_uvector<int64_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int64_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int64_t, int64_t, float, false, true> const& graph_view,
const rmm::device_uvector<int64_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int64_t>&& minor_map,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

template std::tuple<rmm::device_uvector<int64_t>,
rmm::device_uvector<int64_t>,
rmm::device_uvector<int32_t>,
rmm::device_uvector<int64_t>>
gather_local_edges(raft::handle_t const& handle,
graph_view_t<int64_t, int64_t, double, false, true> const& graph_view,
const rmm::device_uvector<int64_t>& active_majors_in_row,
const rmm::device_uvector<int32_t>& active_major_gpu_ids,
rmm::device_uvector<int64_t>&& minor_map,
int64_t indices_per_major,
const rmm::device_uvector<int64_t>& global_degree_offsets);

} // namespace detail
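The blocks above are explicit template instantiations: the template definition lives in gather_utils_impl.cuh (the next file), and this .cu translation unit pins down the supported (vertex_t, edge_t, weight_t) combinations so callers can link against them without seeing, or recompiling, the template body. A minimal sketch of the pattern (illustrative, generic names):

// header (.hpp): declaration only, visible to all callers
template <typename T>
T square(T x);

// one source file (.cu/.cpp): definition plus explicit instantiations
template <typename T>
T square(T x)
{
  return x * x;
}

template int square<int>(int);  // symbol emitted in this TU, linkable elsewhere
template double square<double>(double);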

18 changes: 12 additions & 6 deletions cpp/src/sampling/detail/gather_utils_impl.cuh
@@ -376,16 +376,17 @@ partition_information(raft::handle_t const& handle, GraphViewType const& graph_v
std::move(vc_offsets));
}

template <typename GraphViewType, typename EdgeIndexIterator, typename gpu_t>
template <typename GraphViewType, typename gpu_t>
std::tuple<rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<gpu_t>>
rmm::device_uvector<gpu_t>,
rmm::device_uvector<typename GraphViewType::edge_type>>
gather_local_edges(
raft::handle_t const& handle,
GraphViewType const& graph_view,
const rmm::device_uvector<typename GraphViewType::vertex_type>& active_majors_in_row,
const rmm::device_uvector<gpu_t>& active_major_gpu_ids,
EdgeIndexIterator edge_index_first,
rmm::device_uvector<typename GraphViewType::edge_type>&& minor_map,
typename GraphViewType::edge_type indices_per_major,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_degree_offsets)
{
@@ -405,7 +406,7 @@ gather_local_edges(
handle.get_thrust_policy(),
thrust::make_counting_iterator<size_t>(0),
thrust::make_counting_iterator<size_t>(edge_count),
[edge_index_first,
[edge_index_first = minor_map.cbegin(),
active_majors = active_majors_in_row.data(),
active_major_gpu_ids = active_major_gpu_ids.data(),
id_begin = id_begin.data(),
@@ -460,14 +461,16 @@ auto location = location_in_segment + vertex_count_offsets[partition_id];
auto location = location_in_segment + vertex_count_offsets[partition_id];
auto g_degree_offset = global_degree_offsets[location];
auto g_dst_index = edge_index_first[index];

if ((g_dst_index >= g_degree_offset) && (g_dst_index < g_degree_offset + local_out_degree)) {
minors[index] = adjacency_list[g_dst_index - g_degree_offset];
} else {
minors[index] = invalid_vertex_id;
}
});

auto input_iter = thrust::make_zip_iterator(
thrust::make_tuple(majors.begin(), minors.begin(), minor_gpu_ids.begin()));
thrust::make_tuple(majors.begin(), minors.begin(), minor_gpu_ids.begin(), minor_map.begin()));

auto compacted_length = thrust::distance(
input_iter,
Expand All @@ -480,7 +483,10 @@ gather_local_edges(
majors.resize(compacted_length, handle.get_stream());
minors.resize(compacted_length, handle.get_stream());
minor_gpu_ids.resize(compacted_length, handle.get_stream());
return std::make_tuple(std::move(majors), std::move(minors), std::move(minor_gpu_ids));
minor_map.resize(compacted_length, handle.get_stream());

return std::make_tuple(
std::move(majors), std::move(minors), std::move(minor_gpu_ids), std::move(minor_map));
}

} // namespace detail
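The compaction step above zips the four output columns (majors, minors, gpu ids, minor_map) and drops every quadruple whose minor is the invalid sentinel in a single remove_if pass, then shrinks all four vectors to the compacted length. A self-contained sketch of the same pattern (illustrative, using plain int device vectors rather than cugraph's types):

#include <thrust/device_vector.h>
#include <thrust/distance.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/remove.h>
#include <thrust/tuple.h>

// Predicate: a quadruple is dropped when its minor (second element)
// equals the invalid sentinel.
struct minor_is_invalid {
  int invalid;
  __host__ __device__ bool operator()(thrust::tuple<int, int, int, int> const& q) const
  {
    return thrust::get<1>(q) == invalid;
  }
};

void compact_quads(thrust::device_vector<int>& majors,
                   thrust::device_vector<int>& minors,
                   thrust::device_vector<int>& gpu_ids,
                   thrust::device_vector<int>& indices,
                   int invalid_vertex_id)
{
  // Zip the four columns so one remove_if pass moves whole rows together.
  auto zip = thrust::make_zip_iterator(thrust::make_tuple(
    majors.begin(), minors.begin(), gpu_ids.begin(), indices.begin()));

  auto new_end = thrust::remove_if(zip, zip + majors.size(),
                                   minor_is_invalid{invalid_vertex_id});

  auto compacted_length = thrust::distance(zip, new_end);
  majors.resize(compacted_length);
  minors.resize(compacted_length);
  gpu_ids.resize(compacted_length);
  indices.resize(compacted_length);
}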