Bugs in miloyip/nativejson-benchmark: roundtrips #187
I wrote a little tool for the roundtrips:

```cpp
#include <iostream>
#include <json.hpp>

using nlohmann::json;

int main()
{
    json j;
    std::cin >> j;
    std::cout << j << std::endl;
}
```

Here are the results:
Indeed, the output differs from the input.
The first two items (
FYI: The roundtrips are already part of the unit tests (test case "compliance tests from nativejson-benchmark", section "roundtrip"), where test cases 13, 18-21, and 24-27 are skipped.
Most of these are silly tests in my opinion and seem pretty contrived. The exception is -0. There is an argument for saying that -0 should remain a float. This is because while 0 = -0, 1/0 != 1/-0 (the LHS is positive infinity and the RHS is negative infinity). The 0 case is not so much of a problem because if the user expects a float then it will be cast to 0.0; however, -0 would be stored as 0 and cast to 0.0. In rare cases this could cause unexpected behaviour.

Here is what causes them to break:

- roundtrip18.json (1234567890123456789) - as above, except that it will be stored as a uint64_t in version 2.0.0.
- roundtrip19.json (9223372036854775807) - as above.
- roundtrip20.json (0.0) - cast to integer, so it ends up dumping as 0 not 0.0. This could be fixed by dropping the cast to integer. In 2.0.0 an attempt is made to parse as integers first, so dropping the cast would only affect floating point numbers that are expressed in the form of floating point numbers but are actually integers, e.g. 1.00, 123e4, etc. It could be argued that if someone has gone to the effort of writing a number in the form 1.00 then it is probably intended to be a float, so this might be the right way to go anyway. Personally I would like to see us go down that track.
- roundtrip21.json (-0.0) - similar to above. A special case would be needed to fix it.
- roundtrip24.json (5e-324) - this number is right at the limit of the minimum subnormal range (for a double).
- roundtrip25.json (2.225073858507201e-308) - this number is below the minimum normal range, so it is represented using the subnormal range with loss of precision (it is the loss of precision that causes the error in this case). A GCC
- roundtrip26.json (2.2250738585072014e-308) - this number is exactly the minimum normal range, so it can be represented exactly in a double.
- roundtrip27.json (1.7976931348623157e308) - as above.
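As a standalone illustration of the -0 argument (the last line assumes the broken behaviour described above, where -0.0 ends up stored as the integer 0 - actual output depends on the library version):

```cpp
#include <iostream>
#include <json.hpp>

using nlohmann::json;

int main()
{
    double pz = 0.0, nz = -0.0;
    std::cout << (pz == nz) << '\n';   // 1: positive and negative zero compare equal
    std::cout << 1.0 / pz << '\n';     // inf
    std::cout << 1.0 / nz << '\n';     // -inf: the sign of zero is observable here

    json j = json::parse("[-0.0]");
    std::cout << j << '\n';            // prints [0] when -0.0 is collapsed to an integer
}
```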
Update:

I reran the benchmark:

Fixed:
New errors:
Unchanged:
Actually I don't think these are new - or at least they shouldn't be. I couldn't get the benchmark to successfully run so I uncommented the above tests in

The reason the bottom one now has a trailing

To summarise where all of these sit at the moment: in 1.0.1 roundtrip21 is fixed (and the relevant unit test uncommented) but everything else is broken, while in 2.0.0 roundtrip13, roundtrip18, roundtrip19, and roundtrip20 are all fixed (and the relevant unit tests uncommented). roundtrip24, roundtrip25 and roundtrip27 are (in my opinion) fundamentally unable to be passed by any library that relies on

roundtrip26 could be made to pass, but the difficulty is that it requires 17-digit precision and you can't guarantee roundtrips for all numbers if you output at 17 digits. It would be necessary to somehow identify which numbers can support roundtrips with 17 digits, and this would be very difficult to do efficiently I think. The worst case scenario is that you would have to do trial round trips at three different levels of precision (15, 16 and 17 - see the sketch below), and that would only work for
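For reference, a rough sketch of that trial-roundtrip idea (plain standard library, not the library's actual code): print with 15, then 16, then 17 significant digits and keep the first form that parses back to exactly the same double. Seventeen digits are always sufficient for an IEEE-754 double, so the loop always ends with a valid representation.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Return the shortest of the 15/16/17-digit representations that round-trips.
std::string shortest_roundtrip(double value)
{
    char buf[32];
    for (int prec = 15; prec <= 17; ++prec)
    {
        std::snprintf(buf, sizeof(buf), "%.*g", prec, value);
        if (std::strtod(buf, nullptr) == value)
            break;  // this form parses back to the identical double
    }
    return buf;     // at 17 digits the round trip is guaranteed for finite doubles
}
```

The cost is up to three formatting passes and three parses per number, which is exactly the inefficiency mentioned above.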
Now all but roundtrips 24-27 work. I'll rerun the benchmarks just to make sure it also works in the setting of https://github.com/miloyip/nativejson-benchmark.
Well, I have achieved 100% 'conformance'. I will explain further but I am just running some more checks. This benchmark doesn't seem that fair however, as 'conformance' is not measured against any official standard but is really based on how close the output is to 'RapidJSON_FullPrec', so it is inherently biased, especially around floating point where some variation ought to be permitted. As an example, outputting a floating point number in this form will fail:
@twelsby Well, it seems as if the benchmark started from the testbench for RapidJSON. I don't think achieving 100% "just because" should be the goal here - I'd rather want to understand why quite a few libraries seem to "agree" on the output.
I have only looked at rapidJSON so far but the short answer is essentially that it uses its own custom functions to do both parsing and stringifying of doubles that do not rely on the standard library. The stringifying function used by rapidJSON simply does not emit a '+' for a positive exponent (unlike the standard library). As including the '+' sign is optional and therefore legal, there will always be roundtrip tests that a particular implementation will fail (almost - I'll get to that). For example, rapidJSON will fail round trip tests of

This is what causes the failure for

The other failing roundtrips pass with rapidJSON because it uses a complex algorithm to make the decision as to how many significant figures to round the output to. The simplest example of this is

This approach is fundamentally flawed however and produces its own error that, perhaps conveniently, isn't tested for. The error occurs because

You can test this by putting any of these values into a file and adding it to the round trip tests:

The solution is obviously to not lose the information to begin with. Both problems can be fixed by determining the number of significant figures, the capitalization of 'e' and the presence of a '+' sign in the exponent during the parse, storing that information and then using it during the dump. This is only relevant to round trips. If you create a new double object and dump it then the normal 15 max significant figures can be used.

I have developed an implementation of this that actually runs faster than the previously proposed 2.0.0 implementation. It will pass all of the benchmark roundtrips and the additional ones I mentioned. There are some cases involving extra leading or trailing zeros in either the mantissa or exponent that won't pass however. It is slightly slower than 1.0.1 in the parsing benchmark, but only because the test is heavily weighted towards 'canada.json' (due to the time taken to process this file) and this file is pretty much all floating point numbers. It is faster than 1.0.1 with files that have a greater proportion of integers and strings (most real world examples). Stringifying is significantly faster for doubles, so the 'canada.json' test is stringified 50% faster with this method. This seems to be because I am using

Memory usage is exactly the same provided single byte struct/class alignment isn't enabled (if it is then a small amount of extra memory is used). It does add some code (only about fifty lines). My testing was under VS2015 as I couldn't get the tests to build under Linux.
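The "record the formatting during the parse" idea could look roughly like the following (a sketch only - the field and function names are illustrative, not the library's actual internals):

```cpp
#include <cstdio>
#include <string>

// Per-number formatting details remembered from the original JSON text.
struct parsed_double
{
    double value;     // binary value used for arithmetic
    int    digits;    // significant digits seen in the input text
    bool   upper_e;   // exponent written as 'E' rather than 'e'
    bool   plus_sign; // exponent carried an explicit '+'
};

// Reproduce roughly the original textual form when dumping.
std::string dump(const parsed_double& d)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.*g", d.digits, d.value);
    std::string s = buf;

    auto e = s.find('e');
    if (e != std::string::npos)
    {
        if (d.upper_e)
            s[e] = 'E';
        if (!d.plus_sign && s[e + 1] == '+')
            s.erase(e + 1, 1);   // drop the optional '+' to match the input
    }
    return s;
}
```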
Hi @twelsby, thanks (again) for the detailed analysis. I think it does not make sense to "fix" the code toward these tests any more. Instead, we should start a discussion about the benchmark, and maybe add all kinds of valid JSON examples which show that it is hard to find "the" correct JSON serialization. I shall play around with some examples and open a PR at miloyip/nativejson-benchmark. I am excited about the performance improvements. However, I think we should track this in a different issue. If I can help with tests on Linux or OSX, please let me know.
@nlohmann, when I first began investigating this issue it appeared that the only way to pass those tests that depend on particular (arbitrary) floating point representations was to output numbers in the expected form (e.g.

As I progressed I came up with a better way that simply echoes the floating point representation given in the original string (by recording it during the parse). This does not target any particular test and would work for all. This is something I am advocating we incorporate because it adds functionality that is useful in certain circumstances and has effectively zero cost. Take the following code:

```cpp
#include <iostream>
#include <string>
#include "json.hpp"

using nlohmann::json;

int main()
{
    std::cout << "Enter a floating point number: ";
    std::string input;
    std::getline(std::cin, input);
    json j = json::parse(input);
    std::cout << "Was your number " << j << "?" << std::endl;
    return 0;
}
```

Under 2.0.0 and 1.1.0, if you enter a floating point number that cannot be represented exactly as a binary floating point number and which has less than 15 significant figures, you will get a different output. The classic case is
With my proposed changes you get:
Now the underlying floating point number is the same in either case, so if the program is doing floating point calculations then it doesn't make any difference, but if the program is interacting with a user then it is much better to present the rounded version.

My changes also match the use of '+' and the capitalization of 'e'. In my view these are nowhere near as important as matching precision, but since they can be implemented at effectively zero cost it probably makes sense to do so. The other advantage is that you get to claim class-leading conformance that (if you add the additional round trip tests I mentioned) is better than any other library in the test. I am putting together a pull request so you can see.
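To see the precision point in isolation (plain C++, no library involved - 0.1 is just my own example of a value with no exact binary representation):

```cpp
#include <cstdio>

int main()
{
    double d = 0.1;              // not exactly representable in binary
    std::printf("%.17g\n", d);   // 0.10000000000000001 - full 17-digit precision
    std::printf("%.15g\n", d);   // 0.1                 - rounded form, matches the input text
}
```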
I opened an issue at miloyip/nativejson-benchmark#33 and hope for a nice discussion. I close this ticket here as we have #201 to discuss.
@twelsby: FYI, at miloyip/nativejson-benchmark#33 floating-point numbers and in particular subnormal doubles are discussed.
I checked the library with the latest https://github.com/miloyip/nativejson-benchmark benchmark suite. There are several errors in the round trip tests:
These are the values:
[0.0]
[-0.0]
[5e-324]
[2.225073858507201e-308]
[2.2250738585072014e-308]
[1.7976931348623157e308]
I will have a deeper look at this later. I just wanted to paste the results here before I forget.