
[type] [cuda] Support bit-level pointer on cuda backend #2065

Merged: 14 commits from support-bit-pointer-on-cuda into taichi-dev:master on Nov 30, 2020

Conversation

@Hanke98 (Contributor) commented on Nov 27, 2020

Related issue = #1905

In this PR, we want to support bit-level pointers on the CUDA backend.
For now, most test cases pass, but CustomInt/Float types with a 16-bit or 8-bit physical type still fail. The error occurs in the JIT module, which I know very little about. @yuanming-hu, could you please take a look? I will leave the error message and reproducing code in JIRA. Thanks a lot!
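[Editor's note: as background, a "bit-level pointer" addresses a value that does not start on a byte boundary: it is a pair of (pointer to the physical word, bit offset inside that word). A store through such a pointer must rewrite only that field's bits, and on CUDA that read-modify-write additionally needs to be atomic, since neighboring fields of the same bit_struct share one physical word. The following is a minimal standalone C++ sketch of the idea, illustrative only and not code from this PR.]

#include <cstdint>

// A bit-level pointer: which 32-bit physical word, and where the field
// starts inside it.
struct BitPointer {
  uint32_t *word;
  int bit_offset;
};

// Non-atomic illustration of a partial-word store (num_bits < 32).
// A real CUDA implementation would perform this read-modify-write
// atomically, e.g. via a compare-and-swap loop.
inline void store_bits(BitPointer p, uint32_t value, int num_bits) {
  uint32_t mask = ((1u << num_bits) - 1u) << p.bit_offset;
  *p.word = (*p.word & ~mask) | ((value << p.bit_offset) & mask);
}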



Comment on lines 1200 to 1225
-  // Compute float(digits) * scale
-  llvm::Value *cast = nullptr;
-  auto compute_type = cft->get_compute_type()->as<PrimitiveType>();
-  if (cft->get_digits_type()->cast<CustomIntType>()->get_is_signed()) {
-    cast = builder->CreateSIToFP(digits, llvm_type(compute_type));
-  } else {
-    cast = builder->CreateUIToFP(digits, llvm_type(compute_type));
-  }
-  llvm::Value *s =
-      llvm::ConstantFP::get(*llvm_context, llvm::APFloat(cft->get_scale()));
-  s = builder->CreateFPCast(s, llvm_type(compute_type));
-  auto scaled = builder->CreateFMul(cast, s);
-  llvm_val[stmt] = scaled;
+  llvm_val[stmt] = restore_custom_float(digits, val_type);
Hanke98 (Contributor, Author):
Moved to restore_custom_float() to reuse.

Member:
reconstruct_custom_float seems a better name
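[Editor's note: for reference, a rough sketch of the extracted helper, assembled from the inline code shown above; the name, parameter names, and exact signature in the merged code may differ (the reviewer suggests reconstruct_custom_float).]

llvm::Value *CodeGenLLVM::restore_custom_float(llvm::Value *digits,
                                               Type *load_type) {
  // Assumed: recover the CustomFloatType from the loaded value's type.
  auto cft = load_type->as<CustomFloatType>();
  // Compute float(digits) * scale, as in the inline code it replaces.
  auto compute_type = cft->get_compute_type()->as<PrimitiveType>();
  llvm::Value *cast = nullptr;
  if (cft->get_digits_type()->cast<CustomIntType>()->get_is_signed()) {
    cast = builder->CreateSIToFP(digits, llvm_type(compute_type));
  } else {
    cast = builder->CreateUIToFP(digits, llvm_type(compute_type));
  }
  llvm::Value *s =
      llvm::ConstantFP::get(*llvm_context, llvm::APFloat(cft->get_scale()));
  s = builder->CreateFPCast(s, llvm_type(compute_type));
  return builder->CreateFMul(cast, s);
}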

Comment on lines +1170 to 1187
llvm::Value *CodeGenLLVM::extract_custom_int(llvm::Value *physical_value,
llvm::Value *bit_offset,
Type *load_type) {
// bit shifting
// first left shift `physical_type - (offset + num_bits)`
// then right shift `physical_type - num_bits`
auto cit = load_type->as<CustomIntType>();
auto bit_end =
builder->CreateAdd(bit_offset, tlctx->get_constant(cit->get_num_bits()));
auto left = builder->CreateSub(
tlctx->get_constant(data_type_bits(cit->get_physical_type())), bit_end);
auto right = builder->CreateSub(
tlctx->get_constant(data_type_bits(cit->get_physical_type())),
tlctx->get_constant(cit->get_num_bits()));
  left = builder->CreateIntCast(left, physical_value->getType(), false);
  right = builder->CreateIntCast(right, physical_value->getType(), false);
  auto step1 = builder->CreateShl(physical_value, left);
llvm::Value *step2 = nullptr;
Hanke98 (Contributor, Author):
This function is separated from load_as_custom_int with slight changes.
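[Editor's note: the shift trick described in the function's comment can be tried in isolation. The standalone C++ snippet below is not taken from the PR; it mirrors the left-shift/right-shift extraction, using an arithmetic right shift for signed fields and a logical one for unsigned fields.]

#include <cstdint>
#include <cstdio>

// Extract a num_bits-wide field starting at bit_offset from a 32-bit word:
// first shift left by 32 - (bit_offset + num_bits) to drop the high bits,
// then shift right by 32 - num_bits to move the field back down.
int32_t extract_signed(uint32_t word, int bit_offset, int num_bits) {
  int left = 32 - (bit_offset + num_bits);
  int right = 32 - num_bits;
  return (int32_t)(word << left) >> right;  // arithmetic shift sign-extends
}

uint32_t extract_unsigned(uint32_t word, int bit_offset, int num_bits) {
  int left = 32 - (bit_offset + num_bits);
  int right = 32 - num_bits;
  return (word << left) >> right;           // logical shift zero-extends
}

int main() {
  // A 13-bit signed field holding -5, packed at bit offset 6.
  uint32_t word = ((uint32_t)(-5) & 0x1FFFu) << 6;
  printf("%d\n", extract_signed(word, /*bit_offset=*/6, /*num_bits=*/13));  // prints -5
  return 0;
}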

taichi/backends/cuda/codegen_cuda.cpp (review comment, resolved)
@Hanke98 Hanke98 marked this pull request as ready for review November 27, 2020 05:44
@Hanke98 Hanke98 changed the title from "[type][cuda] Support bit-level pointer on cuda backend" to "[type] [cuda] Support bit-level pointer on cuda backend" on Nov 27, 2020
@yuanming-hu (Member) left a comment:
Cool! I think we are on the right track. I'm not sure if CUDA is really tested given how the @ti.test function call was written in this PR. I'll take another look after existing issues are addressed :-) Thanks!

tests/python/test_bit_struct.py (review comment, outdated, resolved)
tests/python/test_custom_float.py (review comment, outdated, resolved)
tests/python/test_bit_struct.py (review comment, outdated, resolved)
tests/python/test_bit_array.py (review comment, outdated, resolved)
tests/python/test_bit_array.py (review comment, outdated, resolved)

taichi/codegen/codegen_llvm.cpp (review comment, outdated, resolved)
taichi/backends/cuda/codegen_cuda.cpp (review comment, outdated, resolved)
@Hanke98 Hanke98 requested a review from yuanming-hu November 28, 2020 10:57
@yuanming-hu (Member) left a comment:
Looks great! Thank you. I left one very minor comment on runtime casting safety.

taichi/backends/cuda/codegen_cuda.cpp (review comment, outdated, resolved)
@Hanke98 Hanke98 merged commit 1c8d10d into taichi-dev:master Nov 30, 2020
@Hanke98 Hanke98 deleted the support-bit-pointer-on-cuda branch November 30, 2020 03:26
@yuanming-hu yuanming-hu mentioned this pull request Nov 30, 2020