# [SciPy-User] Find the convergent solutions of a multivariate equation efficiently

11 messages

## [SciPy-User] Find the convergent solutions of a multivariate equation efficiently

Hello,

I have a function:

```python
def my_function(x1, x2):
    y1 = compute_y1(x1)
    y2 = compute_y2(x2)
    x1_new = compute_new_x1(x1, y1, y2)
    x2_new = compute_new_x2(x2, y1, y2)
    return x1_new, x2_new, y1, y2
```

I would like to find the convergent values of y1 and y2, that is, their values once (x1_new - x1) / x1 < epsilon and (x2_new - x2) / x2 < epsilon. For now, I initialize x1 and x2 and call my_function numerous times:

```python
epsilon = 0.01
x1, x2 = 0.1, 0.1
x1_new, x2_new = 100.0, 100.0
while abs((x1_new - x1)/x1) >= epsilon or abs((x2_new - x2)/x2) >= epsilon:
    x1, x2 = x1_new, x2_new
    x1_new, x2_new, y1, y2 = my_function(x1, x2)
print 'y1 = ', y1
print 'y2 = ', y2
```

It works, but I wonder whether it is the most efficient method. I read about scipy.optimize.minimize, but I cannot see how it could be applied to my problem.

Thanks in advance for your help.

Cheers,
Camille

_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
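The loop pattern above generalizes to any map; below is a minimal sketch, assuming every iterate component stays nonzero (the relative-change test divides by it). The helper name and the toy contracting map are illustrative, not from the post:

```python
def iterate_to_convergence(g, x0, epsilon=0.01, max_steps=10000):
    """Apply g repeatedly until every component's relative change
    drops below epsilon; assumes no component is ever zero."""
    x = list(x0)
    for _ in range(max_steps):
        x_new = g(x)
        if all(abs((xn - xo) / xo) < epsilon for xn, xo in zip(x_new, x)):
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_steps")

# Toy contracting map with fixed point (2.0, 3.0):
fixed = iterate_to_convergence(
    lambda x: [0.5 * x[0] + 1.0, 0.5 * x[1] + 1.5],
    [10.0, 10.0],
    epsilon=1e-9,
)
```

Because this map halves the distance to the fixed point at each step, the relative change shrinks geometrically and the loop terminates quickly.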

## Re: Find the convergent solutions of a multivariate equation efficiently

For scipy.optimize.minimize, the function needs to take all of the values as a tuple, and it should return a single y value. For a silly example:

```python
def func(x):
    x1 = x[0]
    x2 = x[1]
    y = x1**2 + x2**2
    return y

fit = scipy.optimize.minimize(func, [10, 3])
```

Right now, your function spits out two y values, which that function can't handle. The minimizer will do the work of "compute_new_x1", etc.

I hope this helps you with your problem. If not, it might be useful to have a simple, but complete, example.

Ryan
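Ryan's point can be made concrete for the fixed-point setting: collapse the residual g(x) - x into a single scalar, its squared norm, which is zero exactly at a fixed point. A sketch with an assumed toy map `g` standing in for the compute_new_* pair (not code from the thread):

```python
import numpy as np
from scipy.optimize import minimize

def g(x):
    # assumed toy stand-in for the compute_new_x1/compute_new_x2 pair;
    # its fixed point is (2.0, 3.0)
    return np.array([0.5 * x[0] + 1.0, 0.5 * x[1] + 1.5])

def residual_norm(x):
    r = g(x) - x          # zero exactly at a fixed point
    return np.dot(r, r)   # the single scalar that minimize expects

fit = minimize(residual_norm, [10.0, 3.0])
```

Any extra y values can then be computed once more at fit.x; minimize itself only ever sees the scalar.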

## Re: Find the convergent solutions of a multivariate equation efficiently

Thanks.

Le 21/03/2015 22:28, Ryan Nelson a écrit :

> For scipy.optimize.minimize, the function needs to take all of the values as a tuple, and it should return a single y value. [...] The minimizer will do the work of "compute_new_x1", etc.

OK. I understand that the minimizer will do the work of "compute_new_x1" and "compute_new_x2". But I do need to return several y values (in fact more than 2). How could I achieve that?

> I hope this helps you with your problem. If not, it might be useful to have a simple, but complete, example.

The (simplified) complete example is:

```python
from __future__ import division

def compute_var1(x1, x2):
    return 2 * x1 + x2

def compute_var2(x1, x2):
    return 2 * x1 - x2

def compute_new_x1(x1):
    return x1 + 10

def compute_new_x2(x2):
    return x2 + 10

x1_new, x2_new = 100.0, 100.0
x1, x2 = 0.1, 0.1
step = 0
while abs((x1_new - x1)/x1) >= 0.01 or abs((x2_new - x2)/x2) >= 0.01:
    x1, x2 = x1_new, x2_new
    var1 = compute_var1(x1, x2)
    var2 = compute_var2(x1, x2)
    x1_new = compute_new_x1(x1)
    x2_new = compute_new_x2(x2)
    step += 1

print 'var1=', var1, 'var2=', var2, 'in', step, 'steps'
```

Which prints:

    var1= 3030.0 var2= 1010.0 in 92 steps

How could I make this more efficient? Is there a scipy (or other) function for this purpose?

Thanks again.

Cheers,
Camille
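On the efficiency question: when a plain iteration does converge, Aitken's delta-squared extrapolation typically cuts the step count sharply; it is the same componentwise trick scipy.optimize.fixed_point applies by default (its 'del2' method). A sketch with illustrative names and a toy map, not thread code:

```python
def aitken_step(g, x0):
    """One Steffensen step: two applications of g plus Aitken
    delta-squared extrapolation, componentwise."""
    x1 = g(x0)
    x2 = g(x1)
    out = []
    for p0, p1, p2 in zip(x0, x1, x2):
        d = (p2 - p1) - (p1 - p0)
        # fall back to the plain iterate if the denominator vanishes
        out.append(p2 if d == 0 else p0 - (p1 - p0) ** 2 / d)
    return out

def accelerated_fixed_point(g, x0, tol=1e-10, max_steps=500):
    x = list(x0)
    for _ in range(max_steps):
        x_new = aitken_step(g, x)
        if all(abs(a - b) <= tol * (abs(b) + 1.0) for a, b in zip(x_new, x)):
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_steps")

# Toy contracting map with fixed point (2.0, 3.0):
fp = accelerated_fixed_point(
    lambda x: [0.5 * x[0] + 1.0, 0.5 * x[1] + 1.5], [10.0, 10.0])
```

For an affine map like this one the extrapolation lands on the fixed point in a single step; for scalar maps in general it upgrades linear convergence to quadratic.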
## Re: Find the convergent solutions of a multivariate equation efficiently

Le 24/03/2015 13:34, Moore, Eric (NIH/NIDDK) [F] a écrit :

> The fair comparison between the code you posted and optimize.fixed_point requires that you set xtol=0.01. Is the fixed_point code still slower in that case? It does basically what your code does, but with some bells and whistles. It shouldn't be meaningfully slower.
>
> There is also a pull request open to add a more sophisticated solver for fixed-point problems (https://github.com/scipy/scipy/pull/4460).

Here is the full example:

```python
from __future__ import division

def compute_var1(x1, x2):
    return 2 * x1 + x2

def compute_var2(x1, x2):
    return 2 * x1 - x2

def compute_new_x1(x1):
    return x1 + 10

def compute_new_x2(x2):
    return x2 + 10

def while_iterations():
    x1_new, x2_new = 100.0, 100.0
    x1, x2 = 0.1, 0.1
    step = 0
    while abs((x1_new - x1)/x1) >= 0.01 or abs((x2_new - x2)/x2) >= 0.01:
        x1, x2 = x1_new, x2_new
        var1 = compute_var1(x1, x2)
        var2 = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        step += 1
    print 'var1=', var1, 'var2=', var2

def with_optimize_fixed_point():
    import numpy as np
    from scipy import optimize

    def func(x, vars_):
        x1 = x[0]
        x2 = x[1]
        vars_[0] = compute_var1(x1, x2)
        vars_[1] = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        return np.array([x1_new, x2_new])

    vars_ = [0, 0]
    x1_fixed, x2_fixed = optimize.fixed_point(func, [0.1, 0.1], args=(vars_,))
    print 'var1=', vars_[0], 'var2=', vars_[1]

def with_optimize_root():
    from scipy import optimize

    def fun(x, vars_):
        x1 = x[0]
        x2 = x[1]
        vars_[0] = compute_var1(x1, x2)
        vars_[1] = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        delta_x1 = abs((x1_new - x1)/x1)
        delta_x2 = abs((x2_new - x2)/x2)
        return [delta_x1, delta_x2]

    vars_ = [0, 0]
    sol = optimize.root(fun, [0.1, 0.1], args=(vars_,))
    print 'var1=', vars_[0], 'var2=', vars_[1]
```

while_iterations() prints:

    var1= 3030.0 var2= 1010.0

with_optimize_fixed_point() prints:

    /usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py:608: RuntimeWarning: divide by zero encountered in true_divide
      p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d)
    var1= -1.40371936438e+17 var2= -4.67906454792e+16

with_optimize_root() prints:

    /usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py:227: RuntimeWarning: The iteration is not making good progress, as measured by the improvement from the last ten iterations.
      warnings.warn(msg, RuntimeWarning)
    var1= 7.76159959998e+16 var2= 2.61013612882e+16

I haven't managed to obtain similar results with either scipy.optimize.fixed_point or scipy.optimize.root yet. Do I use them in a bad way?

Camille
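For contrast, the same calling pattern behaves well on a map that actually has a fixed point; the toy map below is assumed for illustration (fixed point at (2, 3)) and is not the poster's system:

```python
import numpy as np
from scipy.optimize import fixed_point

def g(x):
    # contracting toy map, fixed point (2.0, 3.0)
    return np.array([0.5 * x[0] + 1.0, 0.5 * x[1] + 1.5])

sol = fixed_point(g, [0.1, 0.1])
```

With a convergent map the call returns cleanly, which suggests the warnings stem from the iteration itself rather than from how the API is used.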
## Re: Find the convergent solutions of a multivariate equation efficiently

24.03.2015, 15:47, Camille Chambon kirjoitti:
[clip]

> Do I use scipy.optimize.fixed_point and scipy.optimize.root in a bad way?

Your iteration is:

    x1[n+1] = x1[n] + 10
    x2[n+1] = x2[n] + 10

It has no fixed point: both x1 and x2 diverge to infinity. The results returned by your while_iterations() have no meaning; they depend on the tolerance 0.01 you used. If you use a smaller tolerance, the results diverge.
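This can be checked numerically: for x[n+1] = x[n] + 10 the relative step is 10/x[n], which decays as the iterates grow, so any relative-change cutoff is eventually met even though no limit exists; tightening the tolerance only moves the stopping point further out. A small self-contained check (the function name is illustrative):

```python
def steps_until_relative_change_below(tol, x0=0.1, increment=10.0):
    """Run the divergent update x <- x + increment until the
    relative change increment/x falls below tol."""
    x, steps = x0, 0
    while abs(increment / x) >= tol:
        x += increment
        steps += 1
    return x, steps

x_loose, n_loose = steps_until_relative_change_below(0.01)    # stops once x > 1000
x_tight, n_tight = steps_until_relative_change_below(0.001)   # stops once x > 10000
```

The tighter tolerance simply runs longer and halts at a larger x; no limit is being approached.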
## Re: Find the convergent solutions of a multivariate equation efficiently

Yes, sorry. I wrote this simplified system quickly, without thinking about its meaningfulness. The original system is convergent, but it is too complex to be shared and cannot be simplified. To summarize, my system looks like:

    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)

with convergence when

    (next_x1 - x1) / x1 < epsilon
    (next_x2 - x2) / x2 < epsilon

What I want to compute is y1 and y2. I can do it like that:

```python
x1, x2 = x1_0, x2_0
while True:
    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)
    if abs((next_x1 - x1)/x1) < epsilon and abs((next_x2 - x2)/x2) < epsilon:
        break
    x1, x2 = next_x1, next_x2
print 'y1', y1, 'y2', y2
```

I also managed to use scipy.optimize.root:

```python
import numpy as np
from scipy import optimize

def fun(x, y1_y2):
    x1 = x[0]
    x2 = x[1]
    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)
    y1_y2[:] = y1, y2
    return [(next_x1 - x1) / x1, (next_x2 - x2) / x2]

y1_y2 = np.zeros(2)
sol = optimize.root(fun, [x1_0, x2_0], args=(y1_y2,))
print 'y1', y1_y2[0], 'y2', y1_y2[1]
```

Both work fine, but measured with timeit(...), the scipy.optimize.root solution is twenty times slower than the while loop. Is that normal? Maybe scipy.optimize.root is much slower because I don't give it a Jacobian? I can't define the Jacobian matrix because my system is too complex. Or maybe scipy.optimize.root is just not appropriate for my system?

Thanks again for your help.

Cheers,
Camille
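On the timing question: with no Jacobian supplied, optimize.root's default 'hybr' method estimates one by finite differences and maintains a factorization, so for a system that is cheap per evaluation its overhead can easily dwarf a plain loop; it earns its keep when the bare iteration converges slowly or not at all. Below is a sketch of the same signed-residual formulation with assumed toy versions of g1/g2/f1/f2, chosen to be convergent; none of this is the poster's real system. Note that the y values are recomputed at sol.x instead of read from a side-channel list, since the solver's last internal evaluation (often a finite-difference probe) need not be at the solution:

```python
import numpy as np
from scipy import optimize

# assumed toy system (fixed point: x1 = 4, x2 = 8, so y1 = 12, y2 = 6)
def g1(x1, x2):
    return x1 + x2

def g2(y1):
    return 0.5 * y1

def f1(y1, y2):
    return 0.25 * y1 + 1.0

def f2(y2, x2):
    return 0.5 * x2 + 0.5 * y2 + 1.0

def fun(x):
    x1, x2 = x
    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)
    # signed relative residuals: zero exactly at the fixed point
    return [(next_x1 - x1) / x1, (next_x2 - x2) / x2]

sol = optimize.root(fun, [1.0, 1.0])

# recompute the auxiliary values at the converged point
y1 = g1(sol.x[0], sol.x[1])
y2 = g2(y1)
```

Keeping the residuals signed matters: a root finder locates sign changes, so wrapping the deltas in abs() (as in the earlier attempt) removes the information it needs.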