Add support for computing remainder with Decimal128 operands with more precision on Spark 3.4 [databricks] #8414

Merged

Conversation

NVnavkumar
Collaborator

Fixes #8330.

Depends on NVIDIA/spark-rapids-jni#1175 being merged first.

This completes #8161 by adding support for computing Remainder with DECIMAL128 operands when more precision is required than fits into a DECIMAL128 value. It calls the custom code from the spark-rapids-jni PR whenever the precision needed for the operands exceeds DECIMAL128_MAX_PRECISION.
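As a rough illustration of the idea (this is a pure-Python sketch, not the plugin's Scala code or the spark-rapids-jni kernel; DECIMAL128_MAX_PRECISION mirrors Spark's DecimalType.MAX_PRECISION of 38, and intermediate_precision, needs_wide_remainder, and decimal_remainder are names invented for this example):

from decimal import Decimal, getcontext

# Give the Decimal context enough digits for the >38-digit intermediates
# discussed above; only needed for this pure-Python illustration.
getcontext().prec = 80

DECIMAL128_MAX_PRECISION = 38  # assumption: same limit Spark uses for DecimalType

def intermediate_precision(p1, s1, p2, s2):
    # To compute a % b both operands are rescaled to a common scale, so the
    # wider side needs (integral digits) + (common scale) digits in total.
    common_scale = max(s1, s2)
    return max(p1 - s1, p2 - s2) + common_scale

def needs_wide_remainder(p1, s1, p2, s2):
    # When the rescaled operands no longer fit in DECIMAL128, plain upcasting
    # overflows and a wider code path (here, the JNI kernel) is needed.
    return intermediate_precision(p1, s1, p2, s2) > DECIMAL128_MAX_PRECISION

def decimal_remainder(a, b, scale):
    # Exact remainder via scaled integers: shift both values to `scale`
    # fractional digits, take the integer remainder with the sign of the
    # dividend (SQL semantics), and reinterpret at the same scale.
    ia = int(a.scaleb(scale))
    ib = int(b.scaleb(scale))
    r = abs(ia) % abs(ib)
    if ia < 0:
        r = -r
    return Decimal(r).scaleb(-scale)

# For example, DECIMAL(38,1) % DECIMAL(30,10): aligning the left side to
# scale 10 needs 37 + 10 = 47 digits, more than DECIMAL128 can hold.
print(intermediate_precision(38, 1, 30, 10))                  # 47
print(needs_wide_remainder(38, 1, 30, 10))                    # True
print(decimal_remainder(Decimal("7.5"), Decimal("2.25"), 2))  # 0.75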

… to compute the remainder when overflow can happen with upcasting

Signed-off-by: Navin Kumar <[email protected]>
@NVnavkumar NVnavkumar requested review from revans2 and razajafri May 26, 2023 16:39
@NVnavkumar NVnavkumar self-assigned this May 26, 2023
@NVnavkumar NVnavkumar added the Spark 3.4+ label May 26, 2023
@NVnavkumar NVnavkumar marked this pull request as ready for review May 31, 2023 17:33
@NVnavkumar
Collaborator Author

build

@NVnavkumar
Collaborator Author

Investigating Databricks compile failure. Looks unrelated to this branch.

@NVnavkumar
Collaborator Author

> Investigating Databricks compile failure. Looks unrelated to this branch.

Looks like a new build failure with Databricks 11.3. Will file an issue.

@NVnavkumar
Collaborator Author

Blocked until #8460 is resolved

@NVnavkumar
Collaborator Author

build

@NVnavkumar
Collaborator Author

build

@NVnavkumar
Collaborator Author

build

@pytest.mark.parametrize('rhs', [DecimalGen(27,7), DecimalGen(30,10), DecimalGen(38,1), DecimalGen(36,0), DecimalGen(28,-7)], ids=idfn)
def test_mod_mixed_decimal128(lhs, rhs):
    assert_gpu_and_cpu_are_equal_collect(
        lambda spark : two_col_df(spark, lhs, rhs).selectExpr("a", "b", f"a % b"))
Collaborator

nit: why the format string? f"a % b"???

@NVnavkumar NVnavkumar merged commit ff8f679 into NVIDIA:branch-23.06 Jun 2, 2023
thirtiseven pushed a commit to thirtiseven/spark-rapids that referenced this pull request Jun 5, 2023
…e precision on Spark 3.4 [databricks] (NVIDIA#8414)

* Add support for longRemainder which uses custom spark-rapids-jni code to compute the remainder when overflow can happen with upcasting

Signed-off-by: Navin Kumar <[email protected]>

* remove commented out line of code.

Signed-off-by: Navin Kumar <[email protected]>

---------

Signed-off-by: Navin Kumar <[email protected]>
Labels
Spark 3.4+

Successfully merging this pull request may close these issues.

[BUG] Handle Decimal128 computation with overflow of Remainder on Spark 3.4