"""
Having fun with numerical precision/errors, a solution of the quadratic equation [a*x**2 + b*x + c == 0] with small values for the constant a is given an unexpected result when comparing SciPy and SymPy numerical results, which I hope an earthly kind soul could explain. The output is as follows (the bothering result is on the second root property): SciPy a*x1**2 + b*x1 + c = 16777217.0 a*x2**2 + b*x2 + c = 1.11022302463e-16 Root properties: (x1 + x2) + b/a = 0.0 x1*x2 - c/a = 0.0 ---- SymPy a*x1**2 + b*x1 + c = 16777217.0000000 a*x2**2 + b*x2 + c = 0 Root properties: (x1 + x2) + b/a = 0 x1*x2 - c/a = 0.125000000000000 """ import scipy def myprint(a, b, c, x1, x2): print('a*x1**2 + b*x1 + c = '), print(a*x1**2 + b*x1 + c) print('a*x2**2 + b*x2 + c = '), print(a*x2**2 + b*x2 + c) print("\t Root properties: ") print('(x1 + x2) + b/a = '), print((x1+x2) + float(b)/float(a)) print(' x1*x2 - c/a = '), print(x1*x2 - float(c)/float(a)) a = 1.0e-15 b = 10000.0 c = 1.0 coeff = [a, b, c] x1, x2 = scipy.roots(coeff) print("\t SciPy ") myprint(a, b, c, x1, x2) from sympy import * var('x') x1, x2 = solve(a*x**2 + b*x + c, x) print(" \t ---- \n \t SymPy ") myprint(a, b, c, x1, x2) # Thanks in advance, # Sergio _______________________________________________ SciPy-User mailing list [hidden email] https://mail.scipy.org/mailman/listinfo/scipy-user |
On Wed, Nov 4, 2015 at 12:58 PM, Sergio Rojas <[hidden email]> wrote:
> """ > Having fun with numerical precision/errors, a solution of the quadratic > equation [a*x**2 + b*x + c == 0] with small values for the > constant a is given an unexpected result when comparing > SciPy and SymPy numerical results, which I hope an earthly kind soul could > explain. The output is as follows (the bothering result > is on the second root property): > > SciPy > a*x1**2 + b*x1 + c = 16777217.0 > a*x2**2 + b*x2 + c = 1.11022302463e-16 > Root properties: > (x1 + x2) + b/a = 0.0 > x1*x2 - c/a = 0.0 > ---- > SymPy > a*x1**2 + b*x1 + c = 16777217.0000000 > a*x2**2 + b*x2 + c = 0 > Root properties: > (x1 + x2) + b/a = 0 > x1*x2 - c/a = 0.125000000000000 > """ > > import scipy > > def myprint(a, b, c, x1, x2): > print('a*x1**2 + b*x1 + c = '), > print(a*x1**2 + b*x1 + c) > print('a*x2**2 + b*x2 + c = '), > print(a*x2**2 + b*x2 + c) > print("\t Root properties: ") > print('(x1 + x2) + b/a = '), > print((x1+x2) + float(b)/float(a)) > print(' x1*x2 - c/a = '), > print(x1*x2 - float(c)/float(a)) > > a = 1.0e-15 > b = 10000.0 > c = 1.0 > > coeff = [a, b, c] > x1, x2 = scipy.roots(coeff) > > print("\t SciPy ") > myprint(a, b, c, x1, x2) > > from sympy import * > var('x') > x1, x2 = solve(a*x**2 + b*x + c, x) > > print(" \t ---- \n \t SymPy ") > myprint(a, b, c, x1, x2) > > # Thanks in advance, > # Sergio > _______________________________________________ > SciPy-User mailing list > [hidden email] > https://mail.scipy.org/mailman/listinfo/scipy-user This is a catastrophic loss of precision. 
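To see the cancellation concretely, here is a minimal sketch (not from the thread) of what the textbook formula does to these exact coefficients in double precision:

```python
import math

a, b, c = 1e-15, 1e4, 1.0

# b*b is 1e8; its unit in the last place is ~1.5e-8, so subtracting
# 4*a*c = 4e-15 changes nothing: the discriminant rounds to exactly 1e8.
D = math.sqrt(b*b - 4*a*c)   # == 1e4 exactly

# Textbook formula for the small root: -b + D is 0.0 to the last bit,
# i.e. every significant digit cancels.
x_naive = (-b + D) / (2*a)
print(x_naive)  # prints 0.0; the true root is close to -1e-4
```

This is why the naive formula is hopeless here, while the rearranged formula below recovers the small root to full precision.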
For a way out, see, for example,
http://math.stackexchange.com/questions/866331/how-to-implement-a-numerically-stable-solution-of-a-quadratic-equation

Demo:

In [1]: a, b, c = 1e-15, 1e4, 1.0

In [2]: D = b**2 - 4.*a*c

In [3]: import numpy as np

In [4]: 2.*c / (-b - np.sqrt(D))
Out[4]: -0.0001

In [5]: import sympy as sy

In [6]: x = sy.var('x')

In [7]: sy.solve(a*x**2 + b*x + c)
Out[7]: [-1.00000000000000e+19, -0.000100000000000000]
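The stable rearrangement from that link can be wrapped up as follows (a sketch for real roots only; the function name is mine, not a library API). The trick is to compute the root where -b and the square-root term add with the same sign, so nothing cancels, and recover the other root from Vieta's relation x1*x2 = c/a:

```python
import math

def stable_quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, avoiding catastrophic
    cancellation. Raises for complex roots in this sketch."""
    D = b*b - 4.0*a*c
    if D < 0:
        raise ValueError("complex roots not handled here")
    # copysign makes b and sqrt(D) add constructively, never cancel
    q = -0.5 * (b + math.copysign(math.sqrt(D), b))
    return q / a, c / q   # large root, small root

x1, x2 = stable_quadratic_roots(1e-15, 1e4, 1.0)
print(x1, x2)  # -1e+19 -0.0001
```

Both roots come out fully accurate for this example, matching the SymPy result above.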
In reply to this post by Sergio Rojas
Minor comment: scipy.roots is actually numpy.roots. Not that it really
matters, unless you want to look at the source.

The right way of evaluating polynomials, especially with such wild
values, is Horner's scheme:

print((a * x1 + b) * x1 + c)
print((a * x2 + b) * x2 + c)

Then the values are 1 and 1.1e-16 for SciPy and 1 and 0 for SymPy, so
one root is good and the other is close. If you look at the values of
the roots you will find that they are the same: -1e+19 and -0.0001.

One of the solutions gives you 1 because (a * x1 + b) * x1, when x1 is
as large as -1e+19, is a numerical 0, and you don't have the accuracy
to get it to -1:

x1b = np.nextafter(x1, 2*x1)  # Get the next floating point number towards -inf
print((a * x1b + b) * x1b + c)
36379789.070917137

x1b = np.nextafter(x1, 0)  # Let's see from the other side:
print((a * x1b + b) * x1b + c)
-18189893.035458561

So, essentially, you have the closest floating point number to the real
root.

If you need to work with these values, you should first normalise the
polynomial so that a = 1. If you need extra robustness, the classical
advice is to fall back to the explicit formula, computing explicitly
only the root whose numerator has the same sign as -b, and obtaining
the other one from x1*x2 = c/a. But, as I said, in this case it won't
help, because there is no better representation.

It is possible that NumPy's linear algebra implementation is more
robust than the classical method, though: the roots are computed by
solving the eigensystem of the companion matrix. For low degrees, you
can work out the analytical solutions and see which one would be more
stable.

https://en.wikipedia.org/wiki/Companion_matrix

/David.
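The companion-matrix approach David mentions can be sketched in a few lines (the function name is mine; this mimics what numpy.roots does internally, modulo balancing and edge-case handling). The polynomial's roots are exactly the eigenvalues of this matrix:

```python
import numpy as np

def roots_via_companion(p):
    """Roots of p[0]*x**n + ... + p[n], p[0] != 0, as eigenvalues of
    the companion matrix -- essentially what numpy.roots does."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    A = np.zeros((n, n))
    A[0, :] = -p[1:] / p[0]      # first row: normalised coefficients
    A[1:, :-1] = np.eye(n - 1)   # ones on the subdiagonal
    return np.linalg.eigvals(A)

print(np.sort(roots_via_companion([1e-15, 1e4, 1.0])))
# approximately [-1e+19, -1e-4], matching scipy.roots above
```

For the quadratic [a, b, c] the matrix is [[-b/a, -c/a], [1, 0]], whose characteristic polynomial is x**2 + (b/a)*x + c/a, so its eigenvalues are the roots of a*x**2 + b*x + c.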
In reply to this post by Sergio Rojas
Following up on my question, thanks to Evgeni Burovski
[ https://mail.scipy.org/pipermail/scipy-user/2015-November/036753.html ]
and David
[ https://mail.scipy.org/pipermail/scipy-user/2015-November/036754.html ],
I just want to add that in Fortran 90, via real*16 (also called quad
precision), the output for the roots is (not sure about the
rearrangement proposed by David in this case):

FORTRAN 90 (via real*16)
a = 1.00000000362749372553872184710144211E-0015
b = 10000.000000000000000000000000000000
c = 1.0000000000000000000000000000000000
x1 = -9999999963725062876.1997883398821330
x2 = -1.00000000000000000000001000000003626E-0004
a*x1**2 + b*x1 + c = 0.0000000000000000000000000000000000
(a*x1 + b)*x1 + c = -1.05518034299496531738542345583594055E-0011
a*x2**2 + b*x2 + c = 0.0000000000000000000000000000000000
(a*x2 + b)*x2 + c = 0.0000000000000000000000000000000000
(x1 + x2) + b/a = 0.0000000000000000000000000000000000
x1*x2 - c/a = 0.0000000000000000000000000000000000

Sergio

Sent: Wednesday, November 04, 2015 at 1:58 PM
From: "Sergio Rojas" <[hidden email]>
To: [hidden email]
Subject: Numerical precision: Scipy Vs Numpy

> [quoted message trimmed]
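The quad-precision run can be reproduced in Python with mpmath (not mentioned in the thread; this is a sketch assuming mpmath is installed). Setting mp.dps = 33 gives roughly the ~33 significant decimal digits of real*16:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 33  # about the precision of Fortran's real*16

a, b, c = mpf('1e-15'), mpf('10000.0'), mpf('1.0')
D = sqrt(b**2 - 4*a*c)

# At 33 digits the cancellation in -b + D still eats ~23 digits,
# but ~10 good digits survive, so both roots come out accurate.
x1 = (-b - D) / (2*a)
x2 = (-b + D) / (2*a)
print(x1)  # close to -1e19
print(x2)  # close to -1e-4
print((a*x1 + b)*x1 + c)  # residual far smaller than in double precision
```

Note that a here is exactly 1e-15 to 33 digits, unlike the Fortran run above, where a carries extra digits (1.00000000362...E-15), so the large root differs slightly from the Fortran x1.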