
[REVIEW] Renumber vertices #38

Merged 6 commits on Jan 30, 2019
Conversation

ChuckHastings
Collaborator

@ChuckHastings ChuckHastings commented Jan 14, 2019

Created a function to renumber the source and destination vertices of a graph. Addresses #27

Takes an array of src vertices and an array of dst vertices (assumed to be integer types) and generates a densely packed version of the vertex arrays, along with a map translating the new vertex ids back to the original vertex ids.

@GPUtester
Contributor

Can one of the admins verify this patch?

@kkraus14
Contributor

add to whitelist

dantegd pushed a commit to dantegd/cugraph that referenced this pull request Jan 18, 2019
[REVIEW] Fix: Removed cuda_free of unused variable in tsvd_test.cu
@afender afender self-requested a review January 23, 2019 17:02
@afender
Member

afender commented Jan 23, 2019

  1. Would it be a lot of work to have template <typename T_in, typename T_out> in renumber.cuh:107? Since all our algorithms currently only support 32bits input that would allow loading/converting 64bit input.
  2. It seems to me that setting both threadPerBlock and threadBlock to 32 leads to under-occupancy issues. Can you double-check these numbers? Unless I'm missing something, 32 threadPerBlock covers only 1 warp, which means warps are not overlapped within an SM, and 32 threadBlock covers only the first 32 SMs.

@ChuckHastings
Collaborator Author

  • Would it be a lot of work to have template <typename T_in, typename T_out> in renumber.cuh:107? Since all our algorithms currently only support 32bits input that would allow loading/converting 64bit input.

Not a lot of work. I actually pondered that but didn't want to make it too complex to start. But that's an easy use case to justify. I think it's just a simple editing job to work through which things would be T_in and which would be T_out.

  • It seems to me that setting both threadPerBlock and threadBlock to 32 leads to under-occupancy issues. Can you double-check these numbers? Unless I'm missing something, 32 threadPerBlock covers only 1 warp, which means warps are not overlapped within an SM, and 32 threadBlock covers only the first 32 SMs.

I should have added a comment about those being for testing purposes. I was thinking that those parameters should not be constants. I wasn't sure if we wanted to expose them as parameters or have some heuristic computation based on the number of edges in the graph.

@afender
Member

afender commented Jan 23, 2019

Not a lot of work. I actually pondered that but didn't want to make it too complex to start. But that's an easy use case to justify. I think it's just a simple editing job to work through which things would be T_in and which would be T_out.

Let's do it then. I think this is useful.

I should have added a comment about those being for testing purposes. I was thinking that those parameters should not be constants. I wasn't sure if we wanted to expose them as parameters or have some heuristic computation based on the number of edges in the graph.

Perhaps something like this?
int nthreads = min(e,CUDA_MAX_KERNEL_THREADS);
int nblocks = min((e + nthreads - 1)/nthreads,CUDA_MAX_BLOCKS);
(see graph_utils.cuh)

@afender
Member

afender commented Jan 23, 2019

In the renumbered output, should we test that min(src U dest) = 0 and max(src U dest) = V-1 ?

@ChuckHastings ChuckHastings changed the title [WIP] First cut at code to renumber vertices [REVIEW] Renumber vertices Jan 30, 2019
@afender afender merged commit 44919aa into rapidsai:master Jan 30, 2019
BradReesWork pushed a commit that referenced this pull request Apr 10, 2020
ChuckHastings pushed a commit to ChuckHastings/cugraph that referenced this pull request Oct 26, 2020
[REVIEW] removing cudart_utils include
@kingmesal kingmesal mentioned this pull request Feb 8, 2023