I am pleased to announce the release of NumPy 1.3.0. You can find source tarballs and binary installers for
both Mac OS X and Windows on the SourceForge page:
The release notes for the 1.3.0 release are below. There are no changes compared to release candidate 2. Thank you to everyone involved in this release: the developers, and the users who reported bugs.
The numpy developers
NumPy 1.3.0 Release Notes
This minor release includes numerous bug fixes, official Python 2.6 support, and
several new features such as generalized ufuncs.
Python 2.6 support
Python 2.6 is now supported on all previously supported platforms, including Windows.
Generalized ufuncs
There is a general need for looping over not only functions on scalars but also
over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
"sub-array" by "sub-array" operations. The Perl vector library PDL provides a
similar functionality and its terms are re-used in the following.
Each generalized ufunc has information associated with it that states what the
"core" dimensionality of the inputs is, as well as the corresponding
dimensionality of the outputs (the element-wise ufuncs have zero core
dimensions). The list of the core dimensions for all arguments is called the
"signature" of a ufunc. For example, the ufunc numpy.add has signature
"(),()->()" defining two scalar inputs and one scalar output.
Another example is the function inner1d(a,b) (see the GeneralLoopingFunctions
page), with a signature of "(i),(i)->()". This applies the inner product
along the last axis of each input, but keeps the remaining indices intact. For
example, where a is of shape (3,5,N) and b is of shape (5,N), this will return
an output of shape (3,5). The underlying elementary function is called 3*5
times. In the signature, we specify one core dimension "(i)" for each input and
zero core dimensions "()" for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name "i", we specify that the two
corresponding dimensions should be of the same size (or one of them is of size
1 and will be broadcasted).
The dimensions beyond the core dimensions are called "loop" dimensions. In the
above example, this corresponds to (3,5).
The usual numpy "broadcasting" rules apply, where the signature determines how
the dimensions of each input/output object are split into core and loop
dimensions. When an input array has a smaller dimensionality than the
corresponding number of core dimensions, 1's are pre-pended to its shape. The
core dimensions are removed from all inputs and the remaining dimensions are
broadcast together, defining the loop dimensions. The output is given by the
loop dimensions plus the output core dimensions.
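The signature mechanics above can be sketched in plain NumPy. The inner1d function itself shipped only as a test of the generalized ufunc machinery (historically under numpy.core.umath_tests), so to stay self-contained this sketch emulates its "(i),(i)->()" behaviour with ordinary array operations; the helper name here is ours, not public API:

```python
import numpy as np

def inner1d(a, b):
    # Emulate the "(i),(i)->()" signature: the inner product is taken
    # over the last (core) axis, while ordinary broadcasting handles
    # the remaining loop dimensions.
    a, b = np.asarray(a), np.asarray(b)
    return np.sum(a * b, axis=-1)

N = 4
a = np.ones((3, 5, N))   # loop dims (3, 5), core dim (N,)
b = np.ones((5, N))      # loop dims broadcast against (3, 5)

out = inner1d(a, b)
print(out.shape)  # (3, 5): only the loop dimensions remain
```

The core dimension N is consumed by the elementary function, which is invoked once per loop-dimension index, i.e. 3*5 times here.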
Experimental Windows 64 bits support
Numpy can now be built on Windows 64 bits (amd64 only, not IA64), with both MS
compilers and mingw-w64 compilers.
This is *highly experimental*: DO NOT USE IN PRODUCTION. See the Windows 64
bits section of INSTALL.txt for more information on limitations and how to
build it yourself.
Formatting issues
Float formatting is now handled by numpy instead of the C runtime: this enables
locale-independent formatting and more robust fromstring and related methods.
Special values (inf and nan) are also more consistent across platforms (nan vs
IND/NaN, etc...), and more consistent with recent python formatting work (in
2.6 and later).
Nan handling in max/min
The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax
and the array methods max/min. New ufuncs fmax and fmin have been added to deal
with non-propagating nans.
Nan handling in sign
The ufunc sign now returns nan for the sign of a nan.
New ufuncs
#. fmax - same as maximum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. trunc - truncate floats to nearest integer towards zero.
#. logaddexp - add numbers stored as logarithms and return the logarithm
of the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2
logarithm of the result.
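A few of the new ufuncs in action; logaddexp is the interesting one, since it adds numbers held in log-space without ever forming the (possibly overflowing) exponentials:

```python
import numpy as np

# log2 and exp2 are inverses for positive inputs.
print(np.log2(8.0))            # 3.0
print(np.exp2(3.0))            # 8.0

# trunc rounds toward zero.
print(np.trunc([-1.7, 1.7]))   # [-1.  1.]

# logaddexp computes log(exp(a) + exp(b)) stably:
# adding log(0.5) and log(0.5) in log-space gives log(1.0), i.e. ~0.
print(np.logaddexp(np.log(0.5), np.log(0.5)))
```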
Masked arrays

Several new features and bug fixes, including:
* structured arrays should now be fully supported by MaskedArray
(r6463, r6324, r6305, r6300, r6294...)
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298)
* Improved support for __iter__ (r6326)
* made baseclass, sharedmask and hardmask accessible to the user (but
  read-only)
* documentation updates
Gfortran support on Windows
Gfortran can now be used as a Fortran compiler for numpy on Windows, even when
the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work).
Gfortran + Visual Studio does not work on Windows 64 bits (but gcc + gfortran
does). It is unclear whether it will be possible to use gfortran and Visual
Studio at all on x64.
Arch option for windows binary
Automatic arch detection can now be bypassed from the command line for the
superpack installer:

    numpy-1.3.0-superpack-win32.exe /arch nosse

will install a numpy which works on any x86, even if the running computer
supports the SSE instruction set.
Histogram
The semantics of histogram has been modified to fix long-standing issues
with outliers handling. The main changes concern
#. the definition of the bin edges, now including the rightmost edge, and
#. the handling of upper outliers, now ignored rather than tallied in the
   rightmost bin.
The previous behavior is still accessible using `new=False`, but this is
deprecated, and will be removed entirely in 1.4.0.
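A quick illustration of the new edge semantics: a value equal to the rightmost edge is counted in the last bin rather than dropped as an upper outlier.

```python
import numpy as np

# Three bins over [0, 3] give edges [0, 1, 2, 3]; the last bin is the
# closed interval [2, 3], so the value 3 is tallied there instead of
# being ignored as an outlier.
counts, edges = np.histogram([0, 1, 2, 3], bins=3, range=(0, 3))
print(edges)   # [0. 1. 2. 3.]
print(counts)  # [1 1 2]
```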
Documentation changes

A lot of documentation has been added. Both the user guide and the reference
can be built with Sphinx.
New C API
The following functions have been added to the multiarray C API:
* PyArray_GetEndianness: to get runtime endianness
The following functions have been added to the ufunc API:
* PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc
New public C defines are available for ARCH specific code through numpy/npy_cpu.h:
* NPY_CPU_X86: x86 arch (32 bits)
* NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
* NPY_CPU_PPC: 32 bits ppc
* NPY_CPU_PPC64: 64 bits ppc
* NPY_CPU_SPARC: 32 bits sparc
* NPY_CPU_SPARC64: 64 bits sparc
* NPY_CPU_S390: S390
* NPY_CPU_IA64: ia64
* NPY_CPU_PARISC: PARISC
New macros for CPU endianness have been added as well (see internal changes
below for details):
* NPY_BYTE_ORDER: integer
* NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines
Those provide portable alternatives to the glibc endian.h macros for platforms
without glibc.
Portable NAN, INFINITY, etc...
npy_math.h now makes available several portable macros to get NAN and INFINITY:
* NPY_NAN: equivalent to NAN, which is a GNU extension
* NPY_INFINITY: equivalent to C99 INFINITY
* NPY_PZERO, NPY_NZERO: positive and negative zero respectively
Corresponding single and extended precision macros are available as well. All
references to NAN, or home-grown computation of NAN on the fly have been
removed for consistency.
numpy.core math configuration revamp
This should make the porting to new platforms easier, and more robust. In
particular, the configuration stage does not need to execute any code on the
target platform, which is a first step toward cross-compilation.
A lot of code cleanup for umath/ufunc code (charris).
Improvements to build warnings
Numpy can now build with -W -Wall without warnings.
Separate core math library
The core math functions (sin, cos, etc... for basic C types) have been put into
a separate library; it acts as a compatibility layer, to support most C99 maths
functions (real only for now). The library includes platform-specific fixes for
various maths functions, such that using those versions should be more robust
than using your platform's functions directly. The API for existing functions
is
exactly the same as the C99 math functions API; the only difference is the npy
prefix (npy_cos vs cos).
The core library will be made available to any extension in 1.4.0.
CPU arch detection
npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc...
Those are portable across OS and toolchains, and set up when the header is
parsed, so that they can be safely used even in the case of cross-compilation
(the values are not set when numpy is built), or for multi-arch binaries (e.g.
fat binaries on Mac OS X).
npy_endian.h defines numpy specific endianness defines, modeled on the glibc
endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set
when the header is parsed by the compiler, and as such can be used for
cross-compilation and multi-arch binaries.
SciPy-user mailing list
Thanks to all developers and the release manager who made this possible!

Now I can keep up with the development version of other packages that were
already using the 1.3.* devel version.
> I am pleased to announce the release of numpy 1.3.0. You can find source tarballs and binary installers for
> both Mac OS X and Windows on the sourceforge page:
I tried to upgrade with easy_install but it failed:
C:\>easy_install --upgrade numpy
Searching for numpy
Best match: numpy 1.2.1
Adding numpy 1.2.1 to easy-install.pth file
Processing dependencies for numpy
Finished processing dependencies for numpy
C:\>easy_install --upgrade "numpy==1.3.0"
Searching for numpy==1.3.0
No local packages or download links found for numpy==1.3.0
error: Could not find suitable distribution for
Seems that I will have to wait until PythonXY gets updated.