throw error "SyntaxError: Parenthesis ) expected (char 102)" #1485
Hi, I tried running this version of your example (https://runkit.com/harrysarson/mathjs-1485):

```js
const math = require('mathjs');

const form = "bignumber((hs1-hg1)/(bignumber(hg1-hm1)))";
const hs1 = 40;
const hg1 = 38.6;
const hm1 = 20;
const data = { hs1, hg1, hm1 };
const expression = math.eval(form, data);
```

I got this error:
Which seems reasonable to me. Could you share how you got the error you are reporting?
I just checked, and the behavior now remains exactly as documented in the last post from four and a half years ago. But I have to disagree: it does not make any sense for mathjs to refuse to convert a number to a BigNumber because it has "too many significant digits" when those digits were only fabricated by roundoff error in the subtraction (of hs1 - hg1, i.e. 40 - 38.6).
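(For concreteness, the subtraction in the numerator really does fabricate those digits in IEEE doubles; a quick check in plain JavaScript:)

```js
// The numerator of the original expression, evaluated as IEEE doubles:
console.log(40 - 38.6);         // 1.3999999999999986
console.log(40 - 38.6 === 1.4); // false: the "extra" digits are pure roundoff
```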
A few thoughts:

Hmm, as to these points:
About (1) and (2): sorry for the confusion about "irrational" numbers. I mean "a number of which we know it already contains round-off errors". Let me try to explain this better. My main concern here is to prevent people from thinking they are working at high precision whilst part of the expression is executed with low precision. My concern is not about internal representation (binary vs decimal), but about end users mixing numbers and BigNumbers:

```js
// In the following case mathjs gives an annoying but helpful error:
math.config({ number: 'number' })
console.log(math.evaluate('sin(pi/x)', { x: math.bignumber(2) }).toString())
// TypeError: Cannot implicitly convert a number with >15 significant digits
// to BigNumber (value: 3.141592653589793)

// This error is there to prevent the following case:
math.config({ number: 'number' })
console.log(math.evaluate('sin(bignumber(pi)/x)', { x: math.bignumber(2) }).toString())
// BigNumber 0.9999999999999999999999999999999928919459638323580402641578465819
// Whoops! Actual precision is NOT around 64 digits, because we converted an
// ~15 digit version of pi to BigNumber.

// If we do everything at high precision we're good to go:
math.config({ number: 'BigNumber' })
console.log(math.evaluate('sin(pi/x)', { x: math.bignumber(2) }).toString())
// BigNumber 1
// Ahhh, better :)
```

When working with BigNumbers, mathjs is configured by default to work with 64 digits of precision. When the output is a BigNumber, people will think the result has a precision in the order of 64 digits. But if you put a plain number into the expression, part of the calculation is silently executed with only ~15 digits of precision.
Yes, I understand the point that we don't want to inject values with roundoff errors of 1 part in 10^14 or so into calculations with a precision of 1 part in 10^64 or so, and I completely agree with that point. Nevertheless, there is something mathematically irritating about where the line falls between the conversions that are fine to do and those that are rejected.

The whole question is what values should we consider as exact -- or more importantly, what values does the person using the library consider as exact? Those we should convert into bignumber automatically, using their exact value. So certainly we consider all integers as exact (we already do). And I suspect the original poster here was considering numbers with just a small number of decimal digits past the point, like 38.6, as exact, and so the difference of two of them, like 40.0 - 38.6 = 1.4, should be exact. And small-number ratios should be considered exact, like 1/4 or 44/7, etc.

So the question becomes: how to recognize those "exact" values? I think that mathematical theory provides an extremely good and relatively easily implementable answer: JavaScript number entities that are within (say) 1 part in 10^14 or 10^15 of their nearest rational approximation that has a denominator less than or equal to (say) 2^10 = 1024. That 11-orders-of-magnitude difference between the size of the denominator and the closeness of approximation means that it's too big a coincidence to be accidental: the person doing the computation meant to use the exact value, but was just expressing it in a convenient way that happened to induce roundoff error in IEEE arithmetic.

So the exact algorithm for auto-converting a number to a bignumber would be: compute the best rational approximation to the number with a denominator less than or equal to N (maybe N = 1024, maybe it's configurable; and I think Fraction can already do this -- anyhow it's easy with continued fractions), then check whether that rational is within epsilon (maybe use config.epsilon, or maybe something closer to IEEE precision) of the given number. If so, use the bignumber version of that exact rational; otherwise refuse to auto-convert.

That would make the original poster's code work as expected -- the numerator would be converted to the exact bignumber("1.4") -- and it would make 1/3, 1/4, 1/7, etc. all happily auto-convertible to bignumber, while preserving the helpful error you point out in your recent examples. I think this conversion method actually gets at the spirit of what mathjs is currently trying to do with the 15-digit limit better than that limit does, while allowing more actually useful cases of auto-conversion. Just writing this as a suggestion to consider.
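A minimal sketch of that algorithm, assuming hypothetical names (`bestRational`, `exactRationalOrNull`, and the defaults 1024 / 1e-14 are illustrative, not existing mathjs API):

```js
// Best rational approximation p/q with q <= maxDen, via (simplified) continued
// fractions. A production version would also consider semiconvergents; plain
// convergents are enough to show the idea.
function bestRational(x, maxDen) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  let [p0, p1] = [1, Math.floor(x)]; // numerator convergents
  let [q0, q1] = [0, 1];             // denominator convergents
  let frac = x - Math.floor(x);
  while (frac > 1e-18) {
    const a = Math.floor(1 / frac);
    const p2 = a * p1 + p0;
    const q2 = a * q1 + q0;
    if (q2 > maxDen) break;          // next convergent is too "complicated"
    [p0, p1, q0, q1] = [p1, p2, q1, q2];
    frac = 1 / frac - a;
  }
  return { p: sign * p1, q: q1 };
}

// Accept the conversion only when the approximation is "too good to be a
// coincidence": within eps of the input, with a small denominator.
function exactRationalOrNull(x, maxDen = 1024, eps = 1e-14) {
  const { p, q } = bestRational(x, maxDen);
  return Math.abs(p / q - x) <= eps * Math.max(1, Math.abs(x)) ? { p, q } : null;
}

exactRationalOrNull(40 - 38.6); // { p: 7, q: 5 } -> exact BigNumber 1.4
exactRationalOrNull(1 / 3);     // { p: 1, q: 3 } -> bignumber(1)/bignumber(3)
exactRationalOrNull(Math.PI);   // null           -> refuse to auto-convert
```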
Ahh, that sounds really interesting. This idea is new to me, but I'm definitely for improving the conversion function from number to BigNumber.
Here's a relevant Stack Overflow answer: https://stackoverflow.com/a/4266999. The idea would be to call a routine along those lines on the incoming value before converting it to a BigNumber.
I think I more or less understand the idea. I'm in doubt, though, whether it will work in all cases, and whether it would accidentally apply rounding to a value that shouldn't be rounded. I would like a conservative approach in that regard. I was thinking: I can very easily see visually whether I'm looking at a round-off error: the value has more than 15 digits and contains a series of zeros or nines followed by another digit.

Can't we simply use that knowledge? It feels like a safer approach to me. I did some fiddling and created PR #3085. What do you think about that?
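For reference, a rough sketch of that kind of digit-pattern heuristic (illustrative only; this is a paraphrase of the idea, not the actual code in #3085):

```js
// Treat a value as "contains round-off error" when its 17-significant-digit
// decimal form ends in a long run of zeros or nines followed by stray digits.
// Assumes plain (non-exponent) formatting of the number.
function looksLikeRoundoff(x) {
  const digits = x.toPrecision(17).replace(/[-.]/g, '');
  return /(0{6,}[1-9]\d*|9{6,}[0-8]\d*)$/.test(digits);
}

looksLikeRoundoff(0.1 + 0.2); // true  ("0.30000000000000004")
looksLikeRoundoff(40 - 38.6); // true  ("1.3999999999999986")
looksLikeRoundoff(Math.PI);   // false ("3.1415926535897931")
```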
Confused. 0.333333333333333 is "exactly" 1/3 as far as doubles are concerned, so it should be converted to bignumber(1)/bignumber(3). Same with 0.666666666666667 and 2/3, or 0.142857142857143 and 1/7. There's no difference between these cases and 1/5 = 0.2000000000000001 (say) -- they are all equally well (or not well) approximated in IEEE doubles, because the denominators are all relatively prime to 2, the base of the IEEE internal representation. They just "look" different to your decimal-trained eye.

You are no more accurately detecting round-off error with a decimal-digit-based algorithm than with a continued-fraction-based algorithm. You're just missing lots of other "exact" values that are at least as well justified for automatic conversion to bignumber. And the risk of "false positives" is no more (or less) with the continued-fraction approach than with the decimal-digit-based one. 0.3333... is just as "exact" for 1/3 as 0.3000000...4 is for 3/10. As I said, it's rare for a rational to be within 1/(denom)^2 of a real number, so the coincidence of it being within 1/(denom)^4 is so unlikely that we can treat it as exact -- which is all you're really doing with the "digit pattern" heuristic, except you're only detecting a very small portion of the cases.

In other words, a computation whose actual "exact" value is 0.300000000001 might come out to 0.300000000000004, and you would convert it to "exact" 0.3 and be wrong. If you are thinking "well, but that's so unlikely that we don't have to worry about it" -- in fact, I agree! The point is that all the other cases the continued-fraction algorithm detects are just as unlikely to be wrong; plus it gets all of the cases you can "see" should be treated as roundoff error with no further work, and all on a solid mathematical basis, to boot. So I would not recommend pursuing #3085.
P.S. When one uses the continued-fraction approach, one has two "knobs" that let you control precisely how conservative the algorithm is: the "tolerance", which could be config.epsilon, and the maximum denominator you will detect. If you use DBL_EPSILON, you will basically be saying that you will only accept approximations that are off by no more than one binary bit in the least significant position. (I am fine with going slightly fuzzier than that, even up to config.epsilon, as those tiny errors can accumulate a little via arithmetic operations, but if you want to be super-conservative we could leave it at DBL_EPSILON.)

The maximum denominator then controls the "probability of coincidence". With a max denominator of 1024, the chances of a coincidence are really minuscule, but for example 0.3647 would never be treated as exact, since its closest rational is 3647/10000. As I said, there is enough accuracy in doubles that I would also be totally comfortable with, say, 16384 as the max denominator, which would treat 0.3647 as exact.

If we want to stay really safe, then with a uniform distribution on the "intended" numbers, I don't see how we can treat any five-decimal numbers as exact, like "0.48977". On the other hand, it's probably true that there is a strong bias in the "intended" numbers toward exact fractions with denominators of the form 10^n, given the realities of human usage of the decimal system. If we are comfortable using that bias, we could accept all rational approximations that are within epsilon of the value to be converted, with a denominator that is either less than 1024, or that happens to be 2^m*5^n where n is (say) 8 or less. (We need to allow this form because if the decimal part happens to be even, some of the 2s in the denominator of 10^n will cancel.)

So bottom line: I think the continued-fraction algorithm will treat the most cases well, and gives us very close control over details concerning which numbers will be "recognized" and how conservative we are being in the conversion.
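If we went with that decimal-bias variant, the denominator acceptance test might look something like this (hypothetical helper; 1024 and the n <= 8 bound just echo the numbers above):

```js
// Accept denominator q when q <= 1024, or when q = 2^m * 5^n with n <= 8,
// which covers "decimal-looking" fractions like 3647/10000 even after some
// of the 2s in the 10^n denominator cancel against an even numerator.
function isAcceptableDenominator(q) {
  if (q <= 1024) return true;
  let n = 0;
  while (q % 5 === 0) { q /= 5; n += 1; }
  while (q % 2 === 0) { q /= 2; }
  return q === 1 && n <= 8;
}

isAcceptableDenominator(113);   // true  (small denominator)
isAcceptableDenominator(10000); // true  (2^4 * 5^4)
isAcceptableDenominator(12288); // false (3 * 2^12)
```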
I'm indeed not sure whether #3085 is a good approach. I'd love to try out the fraction approach; I really want to improve on this conversion to BigNumber! At this point I'm not fully seeing how the tolerance of the fraction approach works out and what cases will "slip through", but it sounds promising. Anyone interested in working out a PR? That would clear things up for me, I think.
I am getting the error

```
SyntaxError: Parenthesis ) expected (char 102)
```

when using math.eval. I found that when calculating the same expression, the error is thrown when the scope variable values are "complex", while "simple" values work normally. For example, "1.5/18.5" evaluates normally to '0.081081081', but "1.39/18.6" throws the error. The specific situation is as follows:

Edited: to make formatting clearer :)