Describe the bug
When computing the remainder for Decimal types, the existing algorithm can only handle the cases that previous versions of Spark could handle with respect to overflow of the Decimal128 operands. This means that if an upcast cannot be performed because of precision limits, an exception is thrown (ANSI mode) or the result is null. Spark 3.4 now computes this remainder because it can use Java's BigDecimal type and round from there.
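To make the limitation concrete, here is a minimal sketch (not the plugin's actual code) using Python's decimal module: with the two operands below, rescaling them to a common scale before taking the remainder would need 38 + 18 = 56 digits, more than Decimal128 can hold, while an arbitrary-precision intermediate (what Spark 3.4 does with Java's BigDecimal) computes the remainder and rounds it back to the result scale. The specific values are illustrative.

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

# Illustrative only: this mirrors the "compute in arbitrary precision, then
# round to the result scale" idea, not the cuDF/plugin kernels.
getcontext().prec = 50  # head-room beyond the 38 digits of Decimal128

a = Decimal("12345678901234567890123456789012345678")   # DECIMAL(38, 0)
b = Decimal("98765432109876543210.987654321098765432")  # DECIMAL(38, 18)

# Rescaling `a` to scale 18 (what a fixed-width upcast would have to do)
# needs 38 + 18 = 56 digits, which overflows Decimal128. With arbitrary
# precision, the exact remainder is computed and then quantized to the
# result scale (18), similar to rounding a java.math.BigDecimal result.
r = (a % b).quantize(Decimal("1e-18"), rounding=ROUND_HALF_UP)
print(r)
```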
Steps/Code to reproduce bug
Try to compute the remainder between two large Decimal128 values:
PySpark example:
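(The original snippet is not reproduced here, so the following is a sketch of a reproducer consistent with the description; the column names a and b, the schema, and the literal values are illustrative assumptions, not the reporter's exact code.)

```python
from decimal import Decimal
from pyspark.sql import SparkSession
from pyspark.sql.types import DecimalType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# Two Decimal128 operands whose scales differ enough that a fixed 38-digit
# upcast to a common scale would overflow (values are illustrative).
data = [(Decimal("12345678901234567890123456789012345678"),
         Decimal("98765432109876543210.987654321098765432"))]
schema = StructType([
    StructField("a", DecimalType(38, 0)),
    StructField("b", DecimalType(38, 18)),
])

df = spark.createDataFrame(data, schema)
df.selectExpr("a % b").show(truncate=False)
```

Per the description, Spark 3.4 on the CPU returns the remainder for this query, while earlier Spark versions (and the current GPU algorithm) return null, or throw an exception under ANSI mode.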
Expected behavior
When running in Spark 3.4, this should return the correct value for a % b (in other versions of Spark, it returns null).