[SciPy-User] Find the convergent solutions of a multivariate equation efficiently


CamilleC
Hello,

I have a function:

def my_function(x1, x2):
    y1 = compute_y1(x1)
    y2 = compute_y2(x2)
    x1_new = compute_new_x1(x1, y1, y2)
    x2_new = compute_new_x2(x2, y1, y2)
    return x1_new, x2_new, y1, y2

I would like to find the values of y1 and y2 at convergence, that is when
abs((x1_new - x1) / x1) < epsilon and abs((x2_new - x2) / x2) < epsilon.

For now, I initialize x1 and x2 and call my_function repeatedly:

epsilon = 0.01
x1, x2 = 0.1, 0.1
x1_new, x2_new = 100.0, 100.0
while abs((x1_new - x1)/x1) >= epsilon or abs((x2_new - x2)/x2) >= epsilon:
    x1, x2 = x1_new, x2_new
    x1_new, x2_new, y1, y2 = my_function(x1, x2)
print 'y1 = ', y1
print 'y2 = ', y2

It works. But I wonder if it's the most efficient method.

I read about scipy.optimize.minimize, but I cannot see how it could be
applied to my problem.

Thanks in advance for your help.

Cheers,

Camille
Re: Find the convergent solutions of a multivariate equation efficiently

rcnelson
For scipy.optimize.minimize, the function needs to take all of the values as a single array-like argument, and it should return a single y value. For a silly example:
####
import scipy.optimize

def func(x):
    x1 = x[0]
    x2 = x[1]
    y = x1**2 + x2**2
    return y

fit = scipy.optimize.minimize(func, [10, 3])
####
Right now, your function spits out two y values, which that function can't handle. The minimizer will do the work of "compute_new_x1", etc.
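One possible way to bridge that gap is to minimize the size of the update step itself, which is zero exactly at the self-consistent solution. Here is a minimal sketch along those lines; the compute_* bodies below are made-up stand-ins, not the actual functions from the original post:
####
from scipy import optimize

# Hypothetical stand-ins for the real compute_* functions.
def compute_y1(x1):
    return 2.0 * x1

def compute_y2(x2):
    return x2 + 1.0

def compute_new_x1(x1, y1, y2):
    return 0.5 * (y1 + y2)

def compute_new_x2(x2, y1, y2):
    return 0.5 * (y1 - y2)

def objective(x):
    # Collapse both residuals into one scalar: the squared size of the
    # update step, which is zero exactly at a fixed point.
    x1, x2 = x
    y1 = compute_y1(x1)
    y2 = compute_y2(x2)
    r1 = compute_new_x1(x1, y1, y2) - x1
    r2 = compute_new_x2(x2, y1, y2) - x2
    return r1**2 + r2**2

fit = optimize.minimize(objective, [0.1, 0.1])
# fit.x then holds (x1, x2) at the fixed point, where fit.fun is ~0.
####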

I hope this helps you with your problem. If not, it might be useful to have a simple, but complete, example.

Ryan


Re: Find the convergent solutions of a multivariate equation efficiently

CamilleC
Thanks.

On 21/03/2015 22:28, Ryan Nelson wrote:
> For scipy.optimize.minimize, the function needs to take all of the values as a single array-like argument, and it should return a single y value. [clip] Right now, your function spits out two y values, which that function can't handle. The minimizer will do the work of "compute_new_x1", etc.

OK. I understand that the minimizer will do the work of "compute_new_x1" and "compute_new_x2". But I do need to return several y values (in fact more than 2). So how could I achieve that?

> I hope this helps you with your problem. If not, it might be useful to have a simple, but complete, example.
The (simplified) complete example is:


##### code ##########

from __future__ import division

def compute_var1(x1, x2):
    return 2 * x1 + x2

def compute_var2(x1, x2):
    return 2 * x1 - x2

def compute_new_x1(x1):
    return x1 + 10

def compute_new_x2(x2):
    return x2 + 10

x1_new, x2_new = 100.0, 100.0
x1, x2 = 0.1, 0.1
step = 0
while abs((x1_new - x1)/x1) >= 0.01 or abs((x2_new - x2)/x2) >= 0.01:
    x1, x2 = x1_new, x2_new
    var1 = compute_var1(x1, x2)
    var2 = compute_var2(x1, x2)
    x1_new = compute_new_x1(x1)
    x2_new = compute_new_x2(x2)
    step += 1

print 'var1=', var1, 'var2=', var2, 'in', step, 'steps'

####################

Which prints: var1= 3030.0 var2= 1010.0 in 92 steps

How could I make this more efficient? Is there a scipy (or other) function for this purpose?

Thanks again.

Cheers,

Camille


Re: Find the convergent solutions of a multivariate equation efficiently

CamilleC
Hello Benjamin,

Thanks for your answer.

Using scipy.optimize.fixed_point increases computation time a lot. It is
not acceptable to me.

With scipy.optimize.minimize, the objective function must return only
one scalar. But my function returns two scalars. So I can't see how I
could apply scipy.optimize.minimize to my problem.

Thanks for your help anyway.

Cheers,

Camille

On 23/03/2015 11:16, Benjamin Trendelkamp-Schroer wrote:

> Hi Camille,
>
> my message did not get forwarded to the list, so I am forwarding it to
> your address; I hope this will help you.
>
> Hello Camille,
>
> if I understand correctly the function 'my_function' maps old values
> (x1, x2) to new values (x1_new, x2_new),
>
> (x1_new, x2_new) = f(x1, x2)
>
> and you want to find those values (x1*, x2*) which fulfill
>
> (x1*, x2*) = f(x1*, x2*)
>
> A point (x1*, x2*) with this property is called a fixed point of the
> mapping f, and there is a scipy function that might help you to find
> this fixed point:
>
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fixed_point.html
>
> Unfortunately these types of self-consistent iterations can exhibit
> rather slow convergence.
>
> If there is a way to formulate your problem as a minimization problem
> with an objective function g(x1, x2) for which you can compute the
> gradient G(x1, x2) you will be able to use methods for the minimization
> of a multi-variate function, e.g. gradient descent or Newton's method,
> which often converge much faster than self-consistent iterations. You
> can learn more about the available scipy functions at
>
> http://docs.scipy.org/doc/scipy/reference/optimize.html
>
> Some of them are also derivative-free methods, so if you cannot obtain
> the gradient of g you can still use them to find the minimizer of g.
>
> As Ryan suggested, your function f needs to be able to process the tuple
> x = (x1, x2). I would recommend storing x1 and x2 in a numpy array x and
> having your function operate on the elements of that ndarray internally.
>
> An example function similar to yours, assuming that x1 and x2 are vectors
> of 10 elements each, so that x has a total of 20 elements, is:
>
> def f(x):
>     x1 = x[0:10]
>     x2 = x[10:]
>
>     y1 = ...
>     y2 = ...
>
>     x1_new = ...
>     x2_new = ...
>
>     x_new = np.zeros(20)
>     x_new[0:10] = x1_new
>     x_new[10:] = x2_new
>
>     return x_new
>
> Now input x and output x_new are both numpy arrays with 20 elements, and
> it should work with scipy.optimize.fixed_point.
>
> I hope this will help you,
>
> Benjamin

Re: Find the convergent solutions of a multivariate equation efficiently

Moore, Eric (NIH/NIDDK) [F]
On Monday, March 23, 2015, Camille Chambon wrote:
> Hello Benjamin,
>
> Thanks for your answer.
>
> Using scipy.optimize.fixed_point increases computation time a lot. It
> is not acceptable to me.

The fair comparison between the code you posted and optimize.fixed_point requires that you set xtol=0.01.  Is the fixed_point code still slower in that case? It does basically what your code does, but with some bells and whistles. It shouldn’t be meaningfully slower.
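For reference, a minimal sketch of such a call, with a toy one-variable contraction standing in for the real update (the func below is illustrative, not the code from this thread):
####
from scipy import optimize

def func(x):
    # Toy contraction: the fixed point is x = 1.0.
    return 0.5 * (x + 1.0)

# Stop at the same tolerance the hand-written loop uses:
x_star = optimize.fixed_point(func, 0.1, xtol=0.01)
####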

There is also a pull request open to add a more sophisticated solver for fixed point problems (https://github.com/scipy/scipy/pull/4460).

>
> With scipy.optimize.minimize, the objective function must return only
> one scalar. But my function returns two scalars. So I can't see how I
> could apply scipy.optimize.minimize to my problem.

optimize.minimize is not the tool for this job.

-Eric

Re: Find the convergent solutions of a multivariate equation efficiently

CamilleC

On 24/03/2015 13:34, Moore, Eric (NIH/NIDDK) [F] wrote:
> The fair comparison between the code you posted and optimize.fixed_point requires that you set xtol=0.01.  Is the fixed_point code still slower in that case? It does basically what your code does, but with some bells and whistles. It shouldn’t be meaningfully slower.
>
> There is also a pull request open to add a more sophisticated solver for fixed point problems (https://github.com/scipy/scipy/pull/4460).
Here is the full example:

####################

from __future__ import division

def compute_var1(x1, x2):
    return 2 * x1 + x2

def compute_var2(x1, x2):
    return 2 * x1 - x2

def compute_new_x1(x1):
    return x1 + 10

def compute_new_x2(x2):
    return x2 + 10

def while_iterations():
    x1_new, x2_new = 100.0, 100.0
    x1, x2 = 0.1, 0.1
    step = 0
    while abs((x1_new - x1)/x1) >= 0.01 or abs((x2_new - x2)/x2) >= 0.01:
        x1, x2 = x1_new, x2_new
        var1 = compute_var1(x1, x2)
        var2 = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        step += 1
    print 'var1=', var1, 'var2=', var2

def with_optimize_fixed_point():
    import numpy as np
    from scipy import optimize
    def func(x, vars_):
        x1 = x[0]
        x2 = x[1]
        vars_[0] = compute_var1(x1, x2)
        vars_[1] = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        return np.array([x1_new, x2_new])
    vars_ = [0, 0]
    x1_fixed, x2_fixed = optimize.fixed_point(func, [0.1, 0.1],
                                              args=(vars_,))
    print 'var1=', vars_[0], 'var2=', vars_[1]

def with_optimize_root():
    from scipy import optimize
    def fun(x, vars_):
        x1 = x[0]
        x2 = x[1]
        vars_[0] = compute_var1(x1, x2)
        vars_[1] = compute_var2(x1, x2)
        x1_new = compute_new_x1(x1)
        x2_new = compute_new_x2(x2)
        delta_x1 = abs((x1_new - x1)/x1)
        delta_x2 = abs((x2_new - x2)/x2)
        return [delta_x1, delta_x2]
    vars_ = [0, 0]
    sol = optimize.root(fun, [0.1, 0.1], args=(vars_,))
    print 'var1=', vars_[0], 'var2=', vars_[1]

##########################

while_iterations() prints:

var1= 3030.0 var2= 1010.0


with_optimize_fixed_point() prints:

/usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py:608:
RuntimeWarning: divide by zero encountered in true_divide
   p = where(d == 0, p2, p0 - (p1 - p0)*(p1 - p0) / d)
var1= -1.40371936438e+17 var2= -4.67906454792e+16


with_optimize_root() prints:

/usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py:227:
RuntimeWarning: The iteration is not making good progress, as measured
by the
   improvement from the last ten iterations.
   warnings.warn(msg, RuntimeWarning)
var1= 7.76159959998e+16 var2= 2.61013612882e+16


I haven't managed to obtain similar results with either
scipy.optimize.fixed_point or scipy.optimize.root yet.

Am I using scipy.optimize.fixed_point and scipy.optimize.root incorrectly?

Camille



Re: Find the convergent solutions of a multivariate equation efficiently

Pauli Virtanen
On 24.03.2015 15:47, Camille Chambon wrote:
[clip]
> Do I use scipy.optimize.fixed_point and scipy.optimize.root in a bad way?

Your iteration is:

        x1[n+1] = x1[n] + 10
        x2[n+1] = x2[n] + 10

It has no fixed point, both x1 and x2 diverge to infinity.

The results returned by your while_iterations() have no meaning: they
depend on the tolerance 0.01 you used. (With this iteration the relative
step is 10/x1, which only drops below 0.01 once x1 exceeds 1000; that is
why your loop happens to stop after 92 steps, at x1 = x2 = 1010 and
var1 = 3030.) If you use a smaller tolerance, the results diverge.

Re: Find the convergent solutions of a multivariate equation efficiently

CamilleC
Yes, sorry. I wrote this simplified system quickly, without checking that it was meaningful. The original system does converge, but it is too complex to share and cannot be simplified.

To summarize, my system looks like:
  • y1 = g1(x1, x2)
  • y2 = g2(y1)
  • next_x1 = f1(y1, y2)
  • next_x2 = f2(y2, x2)
  • (next_x1 - x1) / x1 < epsilon
  • (next_x2 - x2) / x2 < epsilon
What I want to compute is y1 and y2.

I can do it like that:

x1, x2 = x1_0, x2_0
while True:
    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)
    if abs((next_x1 - x1)/x1) < epsilon and abs((next_x2 - x2)/x2) < epsilon:
        break
    x1, x2 = next_x1, next_x2
print 'y1', y1, 'y2', y2
I also managed to use scipy.optimize.root:

def fun(x, y1_y2):
    x1 = x[0]
    x2 = x[1]
    y1 = g1(x1, x2)
    y2 = g2(y1)
    next_x1 = f1(y1, y2)
    next_x2 = f2(y2, x2)
    y1_y2[:] = y1, y2
    return [(next_x1 - x1) / x1, (next_x2 - x2) / x2]
       
from scipy import optimize
import numpy as np
y1_y2 = np.zeros(2)
sol = optimize.root(fun, [x1_0, x2_0], (y1_y2,))
print 'y1', y1_y2[0], 'y2', y1_y2[1]

Both work fine. But according to timeit(...), the solution with scipy.optimize.root is twenty times slower than the one with the while loop. Is that normal?
Maybe scipy.optimize.root is much slower because I don't give it a Jacobian? But I can't define the Jacobian matrix because my system is too complex.
Or maybe scipy.optimize.root is just not appropriate for my system?
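(For reference, the timing comparison is along these lines; solve_with_loop and solve_with_root are hypothetical wrappers around the two versions above:)

import timeit

# Hypothetical wrappers around the while-loop and optimize.root versions.
t_loop = timeit.timeit('solve_with_loop()',
                       setup='from __main__ import solve_with_loop',
                       number=1000)
t_root = timeit.timeit('solve_with_root()',
                       setup='from __main__ import solve_with_root',
                       number=1000)
print 'slowdown:', t_root / t_loop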

Thanks again for your help.

Cheers,

Camille

Re: Find the convergent solutions of a multivariate equation efficiently

Pauli Virtanen
On 25.03.2015 12:23, Camille Chambon wrote:
[clip]
> Both work fine. But using timeit(...), the solution with
> scipy.optimize.root is twenty times slower than the one with the while
> loop. Is it normal?

The default accuracy in optimize.root is 10^{-8}. Maybe your epsilon was
much bigger?



Re: Find the convergent solutions of a multivariate equation efficiently

CamilleC
You are right. My epsilon is 0.01, which is much bigger than the default
accuracy in optimize.root.
But even setting tol=0.01:

scipy.optimize.root(fun, x0, tol=0.01)

scipy.optimize.root is still 17 times slower than the while loop.
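One way to see where the extra time goes: without an analytic Jacobian, the default solver behind optimize.root approximates one by finite differences, which costs extra calls to fun. A diagnostic sketch that counts the model evaluations (fun, x1_0, x2_0 and y1_y2 as defined in my earlier message):

from scipy import optimize

neval = [0]

def counted_fun(x, vars_):
    # Count every evaluation the solver makes.
    neval[0] += 1
    return fun(x, vars_)

sol = optimize.root(counted_fun, [x1_0, x2_0], args=(y1_y2,), tol=0.01)
print 'evaluations:', neval[0]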

Camille
