try to avoid precision loss #983
Conversation
```java
    writeNumber((BigInteger) n);
} else if (n instanceof BigDecimal) {
    final BigDecimal bd = (BigDecimal) n;
    p.streamReadConstraints().validateBigIntegerScale(bd.scale());
```
.... huh?
In the 'failing' test case, we end up with getNumberExact returning this exact case (a BigDecimal).
BigDecimal makes sense; that validation is what I don't understand (as we are not converting).
ok, I've removed the validation
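For context, here is a minimal standalone sketch of the instanceof-based dispatch the diff above modifies. The class and method names are hypothetical, not the actual jackson-databind code; the point is that each Number subtype is written as-is in its exact type, so a BigInteger-scale check (which guards BigDecimal-to-BigInteger conversion) has no role here:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class NumberCopySketch {
    // Hypothetical helper mirroring the dispatch in the diff:
    // each exact numeric type is written directly, with no conversion.
    static String describeWrite(Number n) {
        if (n instanceof BigInteger) {
            return "writeNumber(BigInteger)";
        } else if (n instanceof BigDecimal) {
            // Written as-is: no BigInteger conversion, hence no scale check needed.
            return "writeNumber(BigDecimal)";
        } else if (n instanceof Float) {
            return "writeNumber(Float)";
        }
        return "writeNumber(" + n.getClass().getSimpleName() + ")";
    }

    public static void main(String[] args) {
        System.out.println(describeWrite(new BigDecimal("1e1000")));
        System.out.println(describeWrite(BigInteger.TEN));
    }
}
```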
```java
    writeNumber((Float) n);
} else if (n instanceof BigDecimal) {
    final BigDecimal bd = (BigDecimal) n;
    p.streamReadConstraints().validateBigIntegerScale(bd.scale());
```
Same as above, this validation does not make sense.
LGTM, will merge
This works well for the issue (passing!). But frustratingly there is ONE new test failure for CBOR (in https://github.com/FasterXML/jackson-dataformats-binary/) for "parser.nextTextValue()". I don't think this is necessarily a problem with the change here, but it might expose some problem with CBORParser state keeping. Interestingly enough, it seems to be related to copyCurrentEvent() for floats that require greater than double precision (#730).
Ah-ha. I think it's the StringRef changes from FasterXML/jackson-dataformats-binary#347 that are the root cause; maybe changes here simply caused different encoding of the number in CBOR when copying (copyCurrentEvent()).
The JsonGenerator change means the number in the broken test will be output differently. The CBOR code writes BigDecimals very differently from how it writes doubles/floats. If need be, we could hack CBORGenerator to override the new behaviour and work more like it did before.
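For background on why the bytes differ, here is a sketch based on RFC 8949 rather than the actual CBORGenerator code: CBOR commonly encodes a BigDecimal as a tag-4 "decimal fraction", an [exponent, mantissa] pair, whereas a double/float is a single floating-point value. The helper below only renders the tag-4 pair a BigDecimal would map to; the class and method names are illustrative, not from jackson-dataformats-binary:

```java
import java.math.BigDecimal;

public class DecimalFractionSketch {
    // RFC 8949 tag 4: a decimal fraction is an array [exponent, mantissa],
    // with value = mantissa * 10^exponent. For a BigDecimal,
    // exponent = -scale() and mantissa = unscaledValue().
    static String asDecimalFraction(BigDecimal bd) {
        return "tag(4) [" + (-bd.scale()) + ", " + bd.unscaledValue() + "]";
    }

    public static void main(String[] args) {
        System.out.println(asDecimalFraction(new BigDecimal("273.15")));
        // prints: tag(4) [-2, 27315]
    }
}
```

This is why switching a write path from writeNumber(double) to writeNumber(BigDecimal) changes the encoded document even when the numeric value is the same.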
@pjfanning While it is true that handling changes, the bug I see is almost certainly not due to that: it's more a combination of test code and a bug (I think) in CBOR for the new (in 2.15) StringRef stuff -- and/or handling of ... So: basically the change here means that test code in CBORTestBase creates a slightly different CBOR doc -- instead of ...
Just so there's a record in this conversation: the cause was the optimized ...

A separate question is whether it's beneficial to write a ...
I am leaning towards reverting this change, due to the problems @here-abarany is pointing out.
Ah ok: so I think reversion is fine.
This change fixes an issue. If we revert it, what are the plans for revisiting this issue in the future? I prefer correctness over performance. Do we have any test scenarios that can be looked at? Any proof of serious perf issues caused by this change?
@pjfanning There are two parts to this. The original report is quite far removed from the test(s) I added -- so my test is tied to the implementation, but does not necessarily prove the original issue could not be resolved: we have added more functionality to allow accurate retaining of content. We also have not released the fix yet, so it's not a regression in that sense, even if the test did recreate the problem to fix.

The other part is that, like I said, we can (and I think should) add a new method -- ...

So while I agree that accuracy is, in general, preferred, I think that at this point a change of actual functionality is risky, and it is better to add new functionality over changes that risk breaking (in some sense) existing usage.