Refine for_range #8152

Closed

Conversation

chengduoZH (Contributor) commented on Feb 5, 2018:

fix #7081

Don't merge yet; this needs further discussion and analysis.

if (block_size < 1024) {
  // Round block_size up to the next power of two.
  int size = 1;
  while (size < block_size) size <<= 1;
  block_size = size;
}
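
For context, a minimal host-side sketch of what the loop above computes; the helper name and the sample values are illustrative, not part of the PR:

#include <cassert>

// Hypothetical helper wrapping the rounding loop from the snippet above:
// it rounds a requested block size up to the next power of two.
static int RoundUpToPowerOfTwo(int block_size) {
  int size = 1;
  while (size < block_size) size <<= 1;
  return size;
}

int main() {
  assert(RoundUpToPowerOfTwo(200) == 256);  // padded by 56 extra threads
  assert(RoundUpToPowerOfTwo(512) == 512);  // already a power of two
  return 0;
}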
Contributor commented:
  1. The warp size on Nvidia GPUs is 32. If limit_ is less than 1024, make block_size a multiple of 32 with the following calculation:
if (block_size < 1024) {
    block_size = ((block_size + 31) >> 5) << 5;  // round up to a multiple of 32
}
  2. The following function (lines 44 to 48) also needs to be updated; a possible update is sketched after the code.
template <typename Function>
__global__ static void ForRangeElemwiseOpGridIsOne(Function func) {
  // With a one-block grid, threadIdx.x alone indexes the elements; there is
  // no bounds check, so this assumes blockDim.x equals the element count.
  size_t idx = static_cast<size_t>(threadIdx.x);
  func(idx);
}
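
As a rough illustration of point 2 above, the one-block kernel could take the element count and guard the call. The extra limit parameter and the bounds check are an assumption modeled on the multi-block variant, not code from this PR:

// Sketch only: once block_size can be rounded up past the element count,
// the single-block kernel needs a bounds check. The `limit` parameter is
// hypothetical.
template <typename Function>
__global__ static void ForRangeElemwiseOpGridIsOne(Function func, size_t limit) {
  size_t idx = static_cast<size_t>(threadIdx.x);
  if (idx < limit) {  // padded threads past `limit` simply do nothing
    func(idx);
  }
}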

reyoung (Collaborator) commented on Feb 6, 2018:

I am not sure this optimization is necessary.

If the number of threads is not divisible by the warp size, there will be inactive threads. However, even if we make the number of threads divisible by the warp size, there will still be inactive threads: since all threads run the same code, some threads will fail the if statement and wait for the others.
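
To put numbers on the argument above, a small host-side sketch comparing the two rounding schemes; limit = 200 is chosen purely for illustration:

#include <cstdio>

int main() {
  const int limit = 200;                      // illustrative element count
  int pow2 = 1;
  while (pow2 < limit) pow2 <<= 1;            // power-of-two rounding -> 256
  const int warp = ((limit + 31) >> 5) << 5;  // warp-multiple rounding -> 224
  // Either way, threads failing the `if (idx < limit)` guard sit idle:
  std::printf("power-of-two block:  %d (%d idle)\n", pow2, pow2 - limit);
  std::printf("warp-multiple block: %d (%d idle)\n", warp, warp - limit);
  return 0;
}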

chengduoZH (Contributor, Author) commented:

@qingqing01 If so, ForRangeElemwiseOpGridIsOne should be removed.

chengduoZH (Contributor, Author) commented:

@reyoung That seems reasonable, but I have not seen any documentation directing users to set block_size this way.
I haven't benchmarked it, so I can't say which is better.

reyoung (Collaborator) commented on Feb 12, 2018:

I do not think this PR is useful, considering that it does not save any computation or SM resources in CUDA.

chengduoZH closed this on Jul 24, 2019.
Successfully merging this pull request may close these issues.

Some question about cuda thread size (#7081)