# [SciPy-User] Decimal dtype

6 messages

## [SciPy-User] Decimal dtype

Traditional base-2 floating-point numbers have a lot of well-known issues. The Python standard library has a `decimal` module that provides base-10 floating-point numbers, which avoid some (although not all) of these issues. Is there any possibility of numpy having one or more dtypes for base-10 floating-point numbers?

I understand fully if a lack of support from underlying libraries makes this infeasible at the present time. I haven't been able to find much good information on the issue, which leads me to suspect the situation is probably not good.

_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
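[For readers unfamiliar with the issues the post alludes to, a minimal illustration (my addition, not part of the original message): base-2 floats cannot represent 0.1 exactly, while base-10 decimals can.]

```python
from decimal import Decimal

# The classic base-2 rounding surprise: 0.1 and 0.2 have no exact
# binary representation, so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Base-10 floating point represents these values exactly.
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```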

## Re: Decimal dtype

Is there a (hardware or not) fixed-size decimal format? Would that even be useful?

Numpy's arrays are most useful for working with fixed-size quantities of homogeneous type, for which operations are fast and can be carried out without going through Python. None of that would appear to be true for decimals, even if one used a C-level decimal library.

But numpy arrays can also be used to contain arbitrary Python objects, such as arbitrary-precision numbers, binary or decimal. They won't be all that much faster than lists, but they do make most of numpy's array operations available:

```python
In [6]: a = np.array([decimal.Decimal(n) for n in range(10)])

In [7]: a
Out[7]:
array([Decimal('0'), Decimal('1'), Decimal('2'), Decimal('3'),
       Decimal('4'), Decimal('5'), Decimal('6'), Decimal('7'),
       Decimal('8'), Decimal('9')], dtype=object)

In [8]: a/decimal.Decimal(10)
Out[8]:
array([Decimal('0'), Decimal('0.1'), Decimal('0.2'), Decimal('0.3'),
       Decimal('0.4'), Decimal('0.5'), Decimal('0.6'), Decimal('0.7'),
       Decimal('0.8'), Decimal('0.9')], dtype=object)
```

Anne
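[A small extension of Anne's example (my addition, not from the thread): reductions and structural operations also carry over to object arrays, but each element operation dispatches to `Decimal`'s own Python-level arithmetic, so there is no vectorized speedup.]

```python
import decimal
import numpy as np

a = np.array([decimal.Decimal(n) for n in range(10)])

# Reductions work by calling Decimal.__add__ once per element --
# correct decimal arithmetic, but at Python speed, not SIMD speed.
print(a.sum())                # Decimal('45')

# Structural operations (reshape, slicing, etc.) are cheap either way,
# since they only shuffle object pointers.
print(a.reshape(2, 5).shape)  # (2, 5)
```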

## Re: Decimal dtype

> Is there a (hardware or not) fixed-size decimal format? Would that even be useful?

What about something like DEC64?

http://dec64.com/
https://github.com/douglascrockford/DEC64

Mark Daoust
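[For context (my sketch, not part of the thread): per the dec64.com description, DEC64 packs a 56-bit two's-complement coefficient in the high bits of a 64-bit word and an 8-bit two's-complement base-10 exponent in the low byte. A decoder is only a few lines; this is an illustrative reading of the published layout, not a vetted implementation, and it ignores DEC64's NaN encoding (exponent -128).]

```python
from decimal import Decimal

def dec64_decode(word: int) -> Decimal:
    """Decode a DEC64 word (given as an unsigned 64-bit int) to a Decimal.

    Layout per dec64.com: bits 63..8 hold a two's-complement coefficient,
    bits 7..0 hold a two's-complement base-10 exponent.
    """
    exp = word & 0xFF
    if exp >= 0x80:                  # sign-extend the 8-bit exponent
        exp -= 0x100
    coeff = word >> 8
    if coeff >= 1 << 55:             # sign-extend the 56-bit coefficient
        coeff -= 1 << 56
    return Decimal(coeff).scaleb(exp)

# 3.14 encoded as coefficient 314, exponent -2:
word = (314 << 8) | (-2 & 0xFF)
print(dec64_decode(word))  # 3.14
```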

## Re: Decimal dtype

On Tue, Jul 28, 2015 at 4:09 PM, Anne Archibald wrote:

> Is there a (hardware or not) fixed-size decimal format? Would that even be useful?

IEEE 754-2008 defines 32-bit, 64-bit, and 128-bit decimal floating-point formats:
https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding

> Numpy's arrays are most useful for working with fixed-size quantities of homogeneous type for which operations are fast and can be carried out without going through Python. None of that would appear to be true for decimals, even if one used a C-level decimal library.

If it stuck with IEEE decimal floating-point numbers then it would still be fixed-size homogeneous data.

> But numpy arrays can also be used to contain arbitrary Python objects, such as arbitrary-precision numbers, binary or decimal. They won't be all that much faster than lists, but they do make most of numpy's array operations available.

Those operations aren't vectorized, which eliminates a lot of the advantage.
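[A rough illustration of the IEEE formats mentioned above (my addition, not from the thread): Python's `decimal` module can be configured with parameters approximating IEEE 754-2008 decimal64 (16 significant digits, exponents in roughly [-383, 384]). This mimics the arithmetic behaviour only; values are still stored as variable-size Python objects rather than in the fixed 64-bit encoding a numpy dtype would need.]

```python
import decimal

# Arithmetic parameters approximating IEEE decimal64:
# 16-digit precision, Emin = -383, Emax = 384.
ctx = decimal.Context(prec=16, Emin=-383, Emax=384)

# Results are rounded to 16 significant decimal digits.
x = ctx.divide(decimal.Decimal(1), decimal.Decimal(3))
print(x)  # 0.3333333333333333
```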