Scale/Precision error with decimal literal between -1 and 1 (0 left of decimal point) #8413
Comments
Natea was able to confirm; we'll pick it up in a fix-it week.
I am taking a look at this with some help from @nae701. The root cause seems to be that BigDecimal's precision is not defined exactly the same way as SQL Decimal's precision.

TL;DR: can we treat "0.12" and ".12" the same way and return precision=3 for both, instead of precision=2 for the latter? (This requires changing an existing unit test.)

For SQL Decimal, precision is simply the total number of digits, and scale is the number of digits to the right of the decimal point. So scale must be less than or equal to precision, and scale cannot be negative. For "0.005" we therefore expect (precision=4, scale=3), and for ".005" we expect (precision=3, scale=3).

BigDecimal has a broader definition of precision and scale. It matches the SQL definition of scale when the scale is zero or positive, and the SQL definition of precision when the decimal is <= -1 or >= 1. But when the decimal is in (-1, 1), precision can be smaller than scale: for both "0.005" and ".005" it returns (precision=1, scale=3).

The fix seems straightforward to implement once we know the desired behavior: when we do the conversion, we can check whether the decimal is within (-1, 1) and treat it differently. The code is here:
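For reference, here is a minimal plain-Java snippet (standard java.math.BigDecimal only, not the ksqlDB code linked above) that reproduces the precision/scale values described:

```java
import java.math.BigDecimal;

public class BigDecimalPrecisionDemo {

  public static void main(String[] args) {
    // BigDecimal's precision counts only the digits of the unscaled value,
    // so every value in (-1, 1) collapses to precision 1 regardless of scale.
    print(new BigDecimal("0.005")); // precision=1, scale=3 (SQL expects 4, 3)
    print(new BigDecimal(".005"));  // precision=1, scale=3 (SQL expects 3, 3)
    print(new BigDecimal("0.12"));  // precision=2, scale=2 (SQL expects 3, 2)
    print(new BigDecimal("1.005")); // precision=4, scale=3 (matches SQL)

    // The two spellings are indistinguishable once parsed:
    System.out.println(new BigDecimal("0.005").equals(new BigDecimal(".005"))); // true
  }

  private static void print(BigDecimal d) {
    System.out.println(
        d.toPlainString() + ": precision=" + d.precision() + ", scale=" + d.scale());
  }
}
```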
But I am not entirely sure what the expected behavior is. Specifically: if we convert to BigDecimal first, as we do now, we lose the ability to distinguish between "0.005" and ".005". Is it okay in this case to always return the higher precision (4 rather than 3)? (This would break an existing unit test which expects precision=2 for ".12", here:
Another question, not on the critical path: in the zero case right now, we retain the BigDecimal's scale (via setScale) rather than always returning (precision=1, scale=0). Is there a rationale for doing this? The test is here:
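For completeness, here is what BigDecimal reports for zero values (again plain Java, illustrative only):

```java
import java.math.BigDecimal;

public class ZeroPrecisionScaleDemo {

  public static void main(String[] args) {
    // Every zero has precision 1, but the scale the literal was written with
    // is preserved, which is what keeps 0.0 and 0.00 distinguishable.
    print(new BigDecimal("0"));    // precision=1, scale=0
    print(new BigDecimal("0.0"));  // precision=1, scale=1
    print(new BigDecimal("0.00")); // precision=1, scale=2
  }

  private static void print(BigDecimal d) {
    System.out.println(
        d.toPlainString() + ": precision=" + d.precision() + ", scale=" + d.scale());
  }
}
```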
@maheshba thanks for digging into this! I think we should stick with the SQL decimal definition, and precision=3 for both 0.12 and .12. I think it is fine to lose the ability to distinguish between 0.005 and .005, and we can update the unit tests appropriately. As for the zero case, the rationale is to stick with the SQL standard, which does differentiate between 0.0 and 0.00, for example. Once you put up the PR, we can also get feedback from others if they have differing thoughts.
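Based on the behaviour agreed above, a rough sketch of the mapping (a hypothetical helper, not the actual ksqlDB fix; the names are made up, and a non-negative BigDecimal scale is assumed, which holds for plain decimal literals):

```java
import java.math.BigDecimal;

// Hypothetical sketch of deriving SQL-style precision/scale from a BigDecimal.
final class SqlDecimalParams {

  // Assumes value.scale() >= 0, as is the case for literals like 0.005 or .12.
  static int sqlScale(BigDecimal value) {
    return value.scale();
  }

  static int sqlPrecision(BigDecimal value) {
    // For values in (-1, 1), including zeros such as 0.00, BigDecimal reports
    // precision <= scale. Counting the implicit leading zero gives scale + 1:
    // 0.005 and .005 both become DECIMAL(4, 3), 0.12 and .12 become
    // DECIMAL(3, 2), 0.00 becomes DECIMAL(3, 2), and plain 0 stays DECIMAL(1, 0).
    if (value.precision() <= value.scale()) {
      return value.scale() + 1;
    }
    return value.precision();
  }
}
```

Whether zero should really get precision = scale + 1 (e.g. DECIMAL(3, 2) for 0.00) is one of the details the PR would need to settle; the sketch only preserves the scale-based distinction between 0.0 and 0.00 discussed above.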
Describe the bug
A scale/precision error occurs when using a decimal literal between -1 and 1 in a ksqlDB query, in other words, when the value to the left of the decimal point is 0. For example, comparing a field to the literal 0.005 in a query will result in the following error:

Steps to reproduce the behavior:
Version: 0.22.0 and CP 7.0.0
Expected behavior
No errors, query result returned.
Actual behaviour
Error message:
Additional context
Decimals < -1 or > 1 have no issues. For example, the following works:
This also works:
There was a fix a while back for a similar issue: #5531