[SciPy-User] Is this weird answer a bug in optimize.newton?

9 messages

[SciPy-User] Is this weird answer a bug in optimize.newton?

Hi,

with my students, this afternoon, I encountered the following annoying
behaviour of SciPy:

>>> from scipy import optimize
>>> optimize.newton(lambda x: x**3-x+1, 0)
0.999999989980082

This is very weird, because the equation is rather simple, with simple
coefficients, and the initial point is simple as well. Any teacher or
student could fall into this case when trying to use optimize.newton.
Of course, the answer is absolutely wrong.

First, some facts:

   * using the function with the derivative gives the correct answer
     (we are not in some tricky case);
   * my students wrote pieces of code with the Newton-Raphson algorithm
     returning the expected answer even without the derivative; they
     estimated the value of the derivative numerically, either as
     (f(x+eps)-f(x-eps))/(2*eps) or (f(x+eps)-f(x))/eps, etc. All these
     methods worked well.

I had a glance at the source code and found that:

   * the function actually thinks the answer is correct (the answer has
     nothing to do with iterating too many times), because two
     consecutive values of x are close "enough";
   * the answer is returned after only 3 iterations (the correct answer
     requires about 20 iterations).

I think it has to do with the way the derivative is computed by SciPy:
in order to call the function f only once in each iteration, the secant
is computed by using f(x) for consecutive values of x (rather than by
using some epsilon); furthermore, in order to start this method, the
function has to provide an arbitrary "second" value of x.

Is it a bug?

Regards.

--
Thomas Baruchel
_______________________________________________
SciPy-User mailing list
[hidden email]
https://mail.scipy.org/mailman/listinfo/scipy-user
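For comparison, here is a minimal sketch of the students' approach: plain Newton-Raphson with a central-difference estimate of the derivative. The values of eps, tol, and maxiter are illustrative choices, not what any particular library uses. Starting from the same x0 = 0, it wanders for a while but does reach the real root after roughly 20 iterations, as mentioned above.

```python
def newton_numeric(f, x, eps=1e-6, tol=1e-10, maxiter=100):
    """Newton-Raphson with a central-difference derivative.

    A sketch of the students' method; eps, tol and maxiter are
    illustrative, not what any particular library uses.
    """
    for _ in range(maxiter):
        dfdx = (f(x + eps) - f(x - eps)) / (2 * eps)
        x_new = x - f(x) / dfdx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within maxiter iterations")

root = newton_numeric(lambda x: x**3 - x + 1, 0)
print(root)  # converges to the real root, about -1.3247179572447458
```

Because the derivative is re-estimated at the current iterate each time (instead of reusing a stale secant through two nearby points), the iteration never declares false convergence near x = 1.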

Re: Is this weird answer a bug in optimize.newton?

On 24 February 2016 at 19:43, Thomas Baruchel <[hidden email]> wrote:
> with my students, this afternoon, I encountered the following annoying
> behaviour of Scipy:
>
>>>> from scipy import optimize
>>>> optimize.newton(lambda x: x**3-x+1, 0)
>
> 0.999999989980082
>
> This is very weird, because the equation is rather simple with simple
> coefficients, and the initial point is simple also.

Somehow the initial guess is problematic:

In [1]: from scipy import optimize

In [2]: optimize.newton(lambda x: x**3 - x + 1, 0)
Out[2]: 0.999999989980082

In [3]: optimize.newton(lambda x: x**3 - x + 1, 0.1)
Out[3]: -1.324717957244745

In [4]: optimize.newton(lambda x: x**3 - x + 1, -0.1)
Out[4]: -1.3247179572447458

These last two are the correct answer:

In [6]: from numpy import roots

In [7]: roots([1, 0, -1, 1])
Out[7]:
array([-1.32471796+0.j        ,  0.66235898+0.56227951j,
        0.66235898-0.56227951j])

The only thing that jumps out at me is that you've picked an initial
guess at which the function has a zero second derivative. I'm not sure
why that would cause the method to fail, but Newton solvers are
sensitive to various singularities in the input function.

--
Oscar
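As the original post noted, supplying the analytic derivative avoids the problem entirely, because newton then runs the true Newton-Raphson iteration instead of the secant fallback. A minimal demonstration:

```python
from scipy import optimize

f = lambda x: x**3 - x + 1
fprime = lambda x: 3 * x**2 - 1  # analytic derivative

# With fprime supplied, optimize.newton uses true Newton-Raphson
# rather than the secant method, and x0 = 0 is no longer a problem.
root = optimize.newton(f, 0, fprime=fprime)
print(root)  # about -1.3247179572447458
```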

Re: Is this weird answer a bug in optimize.newton?

First of all, it sounds like your students are receiving a very good
educational experience!

Typing help(optimize.newton) returns the following:

    fprime : function, optional
        The derivative of the function when available and convenient. If it
        is None (default), then the secant method is used.

A look at https://en.wikipedia.org/wiki/Secant_method#Convergence returns
the following:

"If the initial values are not close enough to the root, then there is no
guarantee that the secant method converges."

So it looks like the behavior of scipy.optimize.newton in your example is
neither unexpected nor a bug.

On Thu, Feb 25, 2016 at 7:36 AM, Oscar Benjamin wrote:
> [...]
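The failure mode can be reproduced without scipy. Below is a sketch of the secant iteration that newton falls back to; the seed x1 = x0*(1 + 1e-4) + 1e-4 and the step-size stopping test mirror what the scipy source of that era appears to do (an assumption; check your version's source for the exact details). From x0 = 0 the second point is 1e-4, an early iterate jumps near x = 1, and within a few steps two consecutive iterates land within tolerance of each other, so the loop declares success even though f(1) = 1.

```python
def secant(f, x0, tol=1.48e-8, maxiter=50):
    # Seed for the second point, mirroring (as an assumption) what
    # scipy.optimize.newton does when no derivative is supplied.
    x1 = x0 * (1 + 1e-4) + (1e-4 if x0 >= 0 else -1e-4)
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        # Stopping test on the step size only -- f(x2) is never checked!
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("no convergence")

f = lambda x: x**3 - x + 1
root = secant(f, 0)
print(root)          # close to 1.0 -- the same bogus answer
print(abs(f(root)))  # residual is about 1, so this is not a root at all
```

The key point is the commented line: the tolerance test looks only at |x2 - x1|, never at |f(x2)|, which is exactly the mechanism the original poster identified.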

Re: Is this weird answer a bug in optimize.newton?

Sorry - you are talking about the fact that it returns a wrong answer
without any indication that it's wrong.

I don't know if that's a "bug", but it's certainly incorrect behavior
inherent to the secant method.

On Thu, Feb 25, 2016 at 8:21 AM, David Mikolas wrote:
> [...]
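Since newton gives no indication that the answer is wrong, one cheap defence is to check the residual of whatever it returns; the spurious "root" fails such a check immediately. A sketch, where the 1e-6 threshold is an arbitrary illustrative choice:

```python
from scipy import optimize

f = lambda x: x**3 - x + 1
root = optimize.newton(f, 0)

# newton returns without complaint, so verify the residual yourself.
residual = abs(f(root))
ok = residual < 1e-6   # threshold is an illustrative choice
print(root, residual, ok)  # the returned value is not actually a root
```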

Re: Is this weird answer a bug in optimize.newton?

Brent's method (scipy.optimize.brentq) is probably the best
general-purpose scalar (as opposed to multidimensional) root-finding
algorithm; it's the SciPy equivalent of MATLAB's fzero.

Numerical Recipes is worth a read to understand the different
root-finding algorithms. The previous (2nd) edition is readable for
free online: http://apps.nrbook.com/c/index.html and root-finding
starts on p. 347.

Chandrupatla's algorithm appears to be "better" than Brent's in the
sense that both are robust and Chandrupatla's usually converges faster
by a factor of 2-3x (and if not, it follows Brent's to within an extra
iteration or two) -- but it's not well-known. I have a writeup here:
http://www.embeddedrelated.com/showarticle/855.php

On Wed, Feb 24, 2016 at 5:21 PM, David Mikolas <[hidden email]> wrote:
> [...]

_______________________________________________
SciPy-User mailing list
[hidden email]
https://mail.scipy.org/mailman/listinfo/scipy-user
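For the equation from this thread, a bracketing call looks like this; [-2, 0] is a valid bracket because f changes sign across it, and brentq is guaranteed to converge to a root inside any such bracket:

```python
from scipy import optimize

f = lambda x: x**3 - x + 1

# brentq requires a bracket [a, b] with sign(f(a)) != sign(f(b)).
# Here f(-2) = -5 and f(0) = 1, so [-2, 0] brackets the real root.
root = optimize.brentq(f, -2, 0)
print(root)  # about -1.3247179572447458
```

The price is that you must know a sign-changing interval up front, but in exchange you cannot silently land on a non-root the way the secant iteration did above.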

Re: Is this weird answer a bug in optimize.newton?
