# [SciPy-User] scipy.optimize.root with method = 'lm' problem

3 messages

## [SciPy-User] scipy.optimize.root with method = 'lm' problem

I am using `scipy.optimize.root` with `method='lm'` and `jac=True`, and the descent algorithm looks like it is taking steps that are far too large. I have 10 variables, and my goal function returns a vector of 4374 outputs plus a 4374x10 Jacobian. The initial Jacobian matches the one I generate when I use `lsqnonlin` in MATLAB, and that works just fine. With scipy, however, I get no change at all for the first two iterations and then suddenly a huge jump to basically the upper end of the double range, which seems pretty extreme.

I would really like to make this work with scipy, and I have my function along with the exact derivatives for the Jacobian, computed with AD. I have also looked at the singular values of the Jacobian and they are all positive and non-zero, so the system should be locally convex at least.

This is the call I make to scipy, and it seems reasonable:

```python
sol = scipy.optimize.root(residual, start, args=(large, small, weight, data),
                          method='lm', jac=True,
                          options={'ftol': 1e-6, 'xtol': 1e-6})
```

Thank you

_______________________________________________
SciPy-User mailing list
[hidden email]
https://mail.python.org/mailman/listinfo/scipy-user
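For reference, when `jac=True` is passed, the function given to `scipy.optimize.root` must return the residual vector and the Jacobian together as a tuple. A minimal sketch of that call pattern on a hypothetical toy problem (two parameters, many residuals, exact analytic Jacobian), not the poster's actual model:

```python
import numpy as np
import scipy.optimize

# Hypothetical toy problem: fit y = a*exp(b*t) to data. Two parameters,
# 50 residuals -- an overdetermined system, same shape of call as above.
t = np.linspace(0.0, 1.0, 50)
a_true, b_true = 2.0, -1.5
data = a_true * np.exp(b_true * t)

def residual(x, data):
    a, b = x
    e = np.exp(b * t)
    r = a * e - data                        # residual vector, shape (50,)
    jac = np.column_stack([e, a * t * e])   # exact Jacobian, shape (50, 2)
    return r, jac                           # jac=True: return both as a tuple

start = np.array([1.0, 0.0])
sol = scipy.optimize.root(residual, start, args=(data,),
                          method='lm', jac=True,
                          options={'ftol': 1e-6, 'xtol': 1e-6})
print(sol.x)
```

With `method='lm'` the residual vector may be longer than the number of variables, as in the question; other `root` methods require a square system.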

## Re: scipy.optimize.root with method = 'lm' problem

Hi,

There isn't much to advise without any data or a way to reproduce this. Since the algorithm adds a damping term to keep the approximate Hessian positive definite, the fact that it diverges seems to indicate that the eigenvalues of your system are not behaving well (or it is a bug, but for that we will need more data to reproduce the issue).

Can you output your Jacobian at each stage and check that it doesn't present that kind of behavior?

Cheers,
Matthieu

2016-12-27 15:13 GMT+01:00 William Heymann:

--
Information System Engineer, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
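One way to carry out the check Matthieu suggests is to wrap the residual function so that every evaluation logs the singular values (and condition number) of the Jacobian. The toy residual below is hypothetical; the poster would substitute their own `residual`, `start`, and extra arguments:

```python
import numpy as np
import scipy.optimize

def with_jacobian_log(fun):
    """Wrap a (residual, jacobian)-returning function to log the Jacobian's
    singular values at every call, so divergence can be traced per iteration."""
    def wrapped(x, *args):
        r, jac = fun(x, *args)
        s = np.linalg.svd(jac, compute_uv=False)
        print(f"x={x}  sigma_min={s[-1]:.3e}  cond={s[0] / s[-1]:.3e}")
        return r, jac
    return wrapped

# Hypothetical stand-in for the poster's residual function.
t = np.linspace(0.0, 1.0, 30)
data = 2.0 * np.exp(-1.5 * t)

def residual(x, data):
    e = np.exp(x[1] * t)
    return x[0] * e - data, np.column_stack([e, x[0] * t * e])

sol = scipy.optimize.root(with_jacobian_log(residual), [1.0, 0.0],
                          args=(data,), method='lm', jac=True,
                          options={'ftol': 1e-6, 'xtol': 1e-6})
```

A rapidly shrinking `sigma_min` (or exploding condition number) along the iterates would point to near-rank-deficiency of the Jacobian away from the starting point, even if it looks fine at the initial guess.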