Loss of precision when serializing <double> #360
Do you have an example to check?
`digits10` vs. `max_digits10`
OK, let's see. This code was run on a Mac:

```cpp
#include <string>
#include <sstream>
#include <iostream>
#include <limits>

int main(int argc, char** argv)
{
    double v = 100000000000.1236;
    int p1 = std::numeric_limits<double>::digits10;
    int p2 = std::numeric_limits<double>::max_digits10;

    // stream with precision == digits10
    std::stringstream ss;
    ss.precision(p1);
    ss << v;
    std::cout << "digits10 " << p1 << ": " << ss.str() << std::endl;

    // stream with precision == max_digits10
    std::stringstream ss2;
    ss2.precision(p2);
    ss2 << v;
    std::cout << "max_digits10 " << p2 << ": " << ss2.str() << std::endl;

    // read back and compare with the original
    double v1, v2;
    ss >> v1;
    ss2 >> v2;
    std::cout << "v==v1 : " << ((v == v1) ? "true" : "false") << std::endl;
    std::cout << "v==v2 : " << ((v == v2) ? "true" : "false") << std::endl;
}
```

Output:
It is not easy with floating point and precision, but this tells me that the streaming seems more correct when using `max_digits10`.
It also gives the same result/output on an Ubuntu system (`uname -a`):
This is not a bug, but a reality of dealing with floating-point representation.
@TurpentineDistillery However, using […]
This issue can be closed, right?
Yes I guess so.
/mb
Thanks for the quick response!
I don't see how this is not a bug. The OP's example exactly illustrates the problem of value -> string -> value serialization/deserialization. We need to store double precision data in a JSON string form and read it back without loss of precision, and have run into exactly the same problem.
What would you propose?
Is there a problem if you just switch to using `max_digits10` for both `dump()` and `operator<<`?
Then numbers like 2312.42 would be round-tripped to 2312.4200000000001. |
I just ran into the same problem. In my opinion, the […] Note that currently […] So I think that […]
See #360 (comment). |
@nlohmann My comment actually supports using […]
I was pointing out that this is true, but only for strings that were written by a value->string conversion not using the full precision, or were written by hand. As such, I think it's fine for those values not to be preserved exactly.
I think @gregmarr is saying the same. My expectation is that if I have a […]
I think one reason for the status quo was the roundtrip results of https://github.com/miloyip/nativejson-benchmark. I'm not sure whether there is one right solution, so we need to make a decision.
I have an implementation of the Grisu2 algorithm for printing floating-point numbers, based on the reference implementation by Florian Loitsch. It works for IEEE […]
I just hit this issue. I store unit test data in JSON, and a new unit test is failing because of this loss of precision. Is there any reason why `std::setprecision` shouldn't work on an ostream I'm passing a json object into?
It doesn't use the ostream formatting for floating-point numbers. If you change these to […]
I just ran into the same issue as @gregmarr described, and switching to `max_digits10` seems to work.
Hi all. I shall change […]; a lot of test cases fail. It seems that they focus on the string->number->string case:
It would be great if you could have a look at these tests and tell me why it's OK to change or ignore them. |
A string is higher precision than a double (e.g. the former can represent 1.2345 exactly; the latter cannot), so converting from string -> double -> string can lead to a change in value, whereas double -> string -> double should not. For this reason, it's not clear to me why you would have exact tests on the former; they should have a tolerance.
I think several of these came from an external benchmark that valued the "load and resave a JSON file with exact values" benchmark. I agree that those are not necessarily something that we should care about.
The roundtrip tests (string -> JSON -> string) come from here: https://github.com/miloyip/nativejson-benchmark/tree/master/data/roundtrip
Another strange behavior happens with serialization: here I serialized a double with the DBL_MAX value (1.79769e+308). The resulting string value becomes larger than DBL_MAX and cannot be parsed back. (I post this here as it seems to be related.)
This results in:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Reopened to check whether #915 fixed this issue. |
The example from #360 (comment) works now and outputs […]
The roundtrips from #360 (comment) work. |
Roundtripping 2312.42 (#360 (comment)) works now. |
Roundtripping 100000000000.1236 (#360 (comment)) works now. |
This issue still seems to be there? Here is some code that reproduces the issue:
Git SHA: da81e7b. It seems as if the call to `std::strtod` in lexer.hpp is the problem.
The library stores floating-point numbers as `double`. The number 21898.99 will be stored as the nearest representable `double` value.
@nlohmann Danke schön! (Thank you!)
I learnt something today too, thanks for sharing.
It seems that precision is lost when serializing a double. I cannot say why, since `std::numeric_limits<double>::digits10` should provide enough digits!? But if I change that to `std::numeric_limits<double>::max_digits10`, then I'm not losing anything. I'm NOT using any "long double" types, only "double".