help with precision for big numbers

help with precision for big numbers

Johann Cohen-Tanugi-2
Hello,
I am computing :

In [22]: for i in range(6):
    s=(1.+Toff/Ton)**i*sp.factorial(Non+Noff-i)/sp.factorial(Non-i)
    print "%.14g"%s
   ....:    
   ....:    
4.3585218122217e+42
9.7493251062853e+42
1.7917678573714e+43
2.5383377979428e+43
2.4658138608587e+43
1.2329069304293e+43

A colleague using GSL and C code with double precision and long double (
I am not sure whether he has a 64bit machine) obtained the following
values :
4.3585218122216e+42
1.0131651581042e+43
1.9350541758386e+43
2.8488297588735e+43
2.8759614708627e+43
1.4943721368208e+43

Close but not identical... I was wondering if there is a way to
increase numerical accuracy within scipy, assuming the standard behavior
is not optimal in this respect. Or any other thoughts about these
discrepancies? Or some nifty tricks to recover lost precision by
organizing the computation differently?

thanks in advance,
Johann
_______________________________________________
SciPy-user mailing list
[hidden email]
http://projects.scipy.org/mailman/listinfo/scipy-user

Re: help with precision for big numbers

Johann Cohen-Tanugi-2
Sorry, for people who might want to check on their own computers:
    Non=5
    Noff=33
    Ton=60
    Toff=1000
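
For reference, a minimal Python 3 sketch of the same loop with these
values, using math.factorial (exact integer factorials from the standard
library, not the thread's sp.factorial) so that Toff / Ton is a true
division; the printed values should agree with the GSL numbers quoted
above to within rounding:

```python
from math import factorial

Non, Noff, Ton, Toff = 5, 33, 60, 1000

values = []
for i in range(6):
    # In Python 2, Toff/Ton truncates to 16; a float cast (or Python 3
    # true division) gives 16.666..., which is the intended ratio.
    s = (1.0 + Toff / Ton) ** i * factorial(Non + Noff - i) / factorial(Non - i)
    values.append(s)
    print("%.14g" % s)
```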


Re: help with precision for big numbers

Damian Eads-2
In reply to this post by Johann Cohen-Tanugi-2
Hi Johann,

First off, the first part of the expression s=... yields two different
answers, depending whether you cast Toff to a float or not.

In [1]: 1.+Toff/Ton
Out[1]: 17.0

In [2]: 1.+float(Toff)/Ton
Out[2]: 17.666666666666668

Which is the desired behavior for your problem?

The limit of precision of floating point numbers in native Python is
32-bit. Numpy defines extra scalar types and you will find most of the
ones supported by your machine in the numpy package. np.float64 will
give you 64-bit precision. There is a np.float96 for 96-bit floats but
I've never used it before.

Damian
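
A quick way to check what each NumPy float type actually offers on a
given machine (a sketch using np.finfo; np.longdouble maps to float96,
float128, or plain float64 depending on the platform):

```python
import numpy as np
import sys

# Machine epsilon for each floating-point type on this machine.
print(np.finfo(np.float32).eps)     # single precision
print(np.finfo(np.float64).eps)     # double precision
print(np.finfo(np.longdouble).eps)  # extended precision where the platform has it

# np.float64 has exactly the precision of a Python float (a C double).
print(np.finfo(np.float64).eps == sys.float_info.epsilon)  # True
```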

Re: help with precision for big numbers

Johann Cohen-Tanugi-2
thanks Damian,
dammit, yes, of course I got bitten again by the int/int = int behavior of
Python 2.x. Supposed to vanish in Python 3, right... looking forward
to it!
Johann

Re: help with precision for big numbers

Alan G Isaac
In reply to this post by Johann Cohen-Tanugi-2
On Tue, 13 May 2008, Johann Cohen-Tanugi apparently wrote:
> sp.factorial(Non+Noff-i)/sp.factorial(Non-i)

So Damian deduced the problem.  But if you are doing this
a lot, I wonder if you can get some gain by replacing the
above expression with a helper function. (I just mean to
observe that once you compute 33!, the other values of this
expression are very simple multiples of 33!.)

fwiw,
Alan Isaac
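
Alan's observation can be sketched as a small helper (a hypothetical
function, not something already in scipy) that computes n!/k! as a short
product instead of a quotient of two huge factorials:

```python
def falling_factorial(n, k):
    """Return n!/k! for n >= k >= 0, computed as (k+1)*(k+2)*...*n.

    With exact Python integers this avoids both overflow and the
    rounding that comes from dividing two very large floats.
    """
    result = 1
    for j in range(k + 1, n + 1):
        result *= j
    return result

# The i=0 term of the original loop, 38!/5!, computed exactly:
print(falling_factorial(38, 5))
```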

Re: help with precision for big numbers

Lev Givon
In reply to this post by Damian Eads-2
Received from Damian Eads on Tue, May 13, 2008 at 05:32:33AM EDT:

(snip)

> The limit of precision of floating point numbers in native Python is
> 32-bit. Numpy defines extra scalar types and you will find most of the
> ones supported by your machine in the numpy package. np.float64 will
> give you 64-bit precision. There is a np.float96 for 96-bit floats but
> I've never used it before.
>

128-bit floats are also available on certain machines.

> Damian

Although it isn't as fast as similar packages, mpmath is useful if one
occasionally needs to handle arbitrary precision in Python -
especially considering that it provides a number of special functions
(like gamma and factorial) apart from the usual transcendentals:

http://code.google.com/p/mpmath
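
A sketch of what that looks like for the thread's numbers (assuming
mpmath's top-level mpf, factorial, and nstr; version details may differ):

```python
import mpmath

mpmath.mp.dps = 30  # work with 30 significant decimal digits

Non, Noff, Ton, Toff = 5, 33, 60, 1000
for i in range(6):
    # All intermediates are arbitrary-precision mpf values.
    s = ((1 + mpmath.mpf(Toff) / Ton) ** i
         * mpmath.factorial(Non + Noff - i) / mpmath.factorial(Non - i))
    print(mpmath.nstr(s, 14))
```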

                                                        L.G.

Re: help with precision for big numbers

Johann Cohen-Tanugi-2
In reply to this post by Alan G Isaac
Right, I was considering that, in case I hit a performance penalty,
but that would be a cost in computation time, not in precision, right? For
now I am not ready to trade code readability for performance.
And yes, that was worth noting, and a valid point. As a matter of fact, such
a helper function could be part of scipy, if it is not there already.
Johann

Re: help with precision for big numbers

Anne Archibald
In reply to this post by Johann Cohen-Tanugi-2
2008/5/13 Johann Cohen-Tanugi <[hidden email]>:
> thanks Damian,
>  dammit, yes of course I got bitten again by the int/int=int feature of
>  python 2.x. Supposed to vanish in python 3, right..... looking forward
>  to it!

Do you know about "from __future__ import true_division"?

Anne

Re: help with precision for big numbers

Johann Cohen-Tanugi-2
Are you sure about the syntax?
In [5]: import __future__

In [6]: from __future__ import true_division
------------------------------------------------------------
SyntaxError: future feature true_division is not defined (<ipython
console>, line 1)

Johann


Re: help with precision for big numbers

Alan G Isaac
In reply to this post by Anne Archibald
On Tue, 13 May 2008, Anne Archibald apparently wrote:
> Do you know about "from __future__ import true_division"?

``from __future__ import division``

Cheers,
Alan
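
For anyone following along, the effect of the import (a no-op in
Python 3, where true division is already the default):

```python
from __future__ import division  # makes / true division in Python 2.x

print(1000 / 60)   # true division: 16.666...
print(1000 // 60)  # floor division, the old Python 2 behavior of /
```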

Re: help with precision for big numbers

Robert Kern-2
In reply to this post by Damian Eads-2
On Tue, May 13, 2008 at 4:32 AM, Damian Eads <[hidden email]> wrote:
>  The limit of precision of floating point numbers in native Python is
>  32-bit.

Incorrect. Python floats are 64-bit doubles.
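
This is easy to confirm from the interpreter (sys.float_info appeared in
Python 2.6; a Python float is a C double with a 53-bit significand):

```python
import sys

print(sys.float_info.mant_dig)  # 53 significand bits, i.e. a C double
print(sys.float_info.dig)       # 15 reliably representable decimal digits
print(sys.float_info.epsilon)   # 2.220446049250313e-16
```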

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

Re: help with precision for big numbers

Damian Eads-2
Robert Kern wrote:
> On Tue, May 13, 2008 at 4:32 AM, Damian Eads <[hidden email]> wrote:
>>  The limit of precision of floating point numbers in native Python is
>>  32-bit.
>
> Incorrect. Python floats are 64-bit doubles.

I stand corrected. It has been incorrect in my mind for quite some time
then.

Damian