[SciPy-User] Can fftconvolve use a faster fft?


[SciPy-User] Can fftconvolve use a faster fft?

Neal Becker
Can fftconvolve use fftw, or mkl fft?

(Sorry if this post is a dup, I tried posting via gmane but I don't think it's working)

_______________________________________________
SciPy-User mailing list
[hidden email]
https://mail.python.org/mailman/listinfo/scipy-user

Re: Can fftconvolve use a faster fft?

ralfgommers


On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker <[hidden email]> wrote:
Can fftconvolve use fftw, or mkl fft?

Yes, with pyfftw: https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries

Ralf
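As background for the question: fftconvolve computes a linear convolution by zero-padding, transforming, multiplying, and inverse-transforming, so any faster FFT drops straight in. A minimal sketch of the idea, using numpy.fft as a stand-in for whichever FFT library is in use:

```python
import numpy as np

def fft_convolve(a, b):
    # Linear convolution via FFT -- the idea behind scipy.signal.fftconvolve.
    n = len(a) + len(b) - 1
    # Any faster FFT (FFTW via pyfftw, MKL) could replace np.fft here.
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
assert np.allclose(fft_convolve(a, b), np.convolve(a, b))
```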




Re: Can fftconvolve use a faster fft?

FredT
If you use Anaconda, by default it will install the MKL builds of scipy and numpy.



Re: Can fftconvolve use a faster fft?

ralfgommers


On Fri, Jan 12, 2018 at 9:18 AM, Frederic Turmel <[hidden email]> wrote:
If you use Anaconda, by default it will install the MKL builds of scipy and numpy.

True, but that won't make scipy or numpy use the MKL FFT capabilities.

We need a switchable backend for this; we have discussed it with one of the Intel MKL engineers.

Ralf




Re: Can fftconvolve use a faster fft?

Gregory Lee



Also, note that the pyFFTW scipy interfaces default to "threads=1", so the monkeypatching as listed in the pyFFTW docs may not give a big speed improvement for all transform sizes.  It is likely you will get further speedup if you monkey patch specific functions using functools.partial to change the default threads to a more appropriate value for your system. 
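The functools.partial approach might look roughly like this (a sketch, assuming a pyFFTW-style interface function with a `threads` keyword, such as pyfftw.interfaces.scipy_fftpack.fft; the stand-in below just calls numpy so the snippet runs without pyFFTW installed):

```python
import numpy as np
from functools import partial

# Stand-in for a pyFFTW interface routine that accepts a `threads`
# keyword (e.g. pyfftw.interfaces.scipy_fftpack.fft); it just calls
# numpy here so the sketch is self-contained.
def pyfftw_style_fft(x, threads=1):
    return np.fft.fft(x)

# functools.partial rebinds the default thread count; the result could
# then be monkeypatched in, e.g. scipy.fftpack.fft = fft_mt
fft_mt = partial(pyfftw_style_fft, threads=4)

x = np.arange(4.0)
assert np.allclose(fft_mt(x), np.fft.fft(x))
```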




Re: Can fftconvolve use a faster fft?

FredT
I'm confused, I thought it was the default.
See https://www.google.com/amp/s/amp.reddit.com/r/Python/comments/44klx4/anaconda_25_release_now_with_mkl_optimizations/

Benchmark
https://github.com/ContinuumIO/mkl-optimizations-benchmarks/blob/master/README.md

On Jan 11, 2018 12:39 PM, Ralf Gommers <[hidden email]> wrote:
True, but that won't make scipy or numpy use the MKL FFT capabilities.

Re: Can fftconvolve use a faster fft?

Neal Becker
I found this: https://github.com/IntelPython/mkl_fft

Not sure if using this (not even sure how) would improve scipy fftconvolve though.


Re: Can fftconvolve use a faster fft?

Alexander Eberspächer
On 11.01.2018 22:04, Gregory Lee wrote:

> Also, note that the pyFFTW scipy interfaces default to "threads=1", so
> the monkeypatching as listed in the pyFFTW docs may not give a big speed
> improvement for all transform sizes.  It is likely you will get further
> speedup if you monkey patch specific functions using functools.partial
> to change the default threads to a more appropriate value for your system. 

Some time ago I wrote a small tool [1] which takes care of creating
wrappers around the pyfftw routines (a "wrapper around a wrapper"). The
wrappers are created on module import and inject a thread count read
from an environment variable.

Maybe you'll find it useful. Please note there's no distutils or
setuptools setup yet; instead, a waf-based build described in the readme
is used.

Regards

Alex

[1]: https://github.com/aeberspaecher/transparent_pyfftw
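The idea behind such a wrapper, roughly (a sketch of the approach, not the tool's actual code; the variable name `TPFFTW_NTHREADS` and the stand-in routine are made up):

```python
import os
from functools import partial

# Stand-in for a pyfftw routine with a `threads` keyword.
def _fft_impl(x, threads=1):
    return (list(x), threads)

# At import time, read the thread count from the environment and bake it
# into a wrapper that callers use in place of the raw routine.
_nthreads = int(os.environ.get("TPFFTW_NTHREADS", "1"))
fft = partial(_fft_impl, threads=_nthreads)
```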



Re: Can fftconvolve use a faster fft?

ralfgommers


On Sat, Jan 13, 2018 at 2:26 AM, Neal Becker <[hidden email]> wrote:
I found this: https://github.com/IntelPython/mkl_fft
Ah yes. From memory: because neither NumPy nor SciPy allow switching the implementation, this does some monkeypatching of numpy.fft directly. Probably that's what's shipped in Anaconda then, given the benchmark link below.

Everyone wants to get rid of such monkeypatching though, hence support for switchable backends within numpy and/or scipy itself is needed.

Ralf
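A switchable backend might look something like this (a hypothetical sketch; neither numpy nor scipy exposed such an interface at the time of this thread, and the registry entries beyond numpy are assumptions):

```python
import numpy as np

# Registry of FFT implementations exposing a numpy-compatible fft().
_backends = {"numpy": np.fft}   # could also register e.g. mkl_fft, pyfftw
_active = "numpy"

def set_fft_backend(name):
    # Select which registered implementation subsequent calls dispatch to.
    global _active
    if name not in _backends:
        raise ValueError("unknown FFT backend: %r" % name)
    _active = name

def fft(x):
    # Dispatch to whichever backend is currently selected.
    return _backends[_active].fft(x)

x = np.arange(4.0)
assert np.allclose(fft(x), np.fft.fft(x))
```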





Re: Can fftconvolve use a faster fft?

Christoph Gohlke


On 1/13/2018 12:50 AM, Ralf Gommers wrote:

>
> On Sat, Jan 13, 2018 at 2:26 AM, Neal Becker <[hidden email]> wrote:
>
>     I found this:
>     https://github.com/IntelPython/mkl_fft
>
> Ah yes. From memory: because neither NumPy nor SciPy allow switching the
> implementation, this does some monkeypatching of numpy.fft directly.
> Probably that's what's shipped in Anaconda then, given the benchmark
> link below.
>
> Everyone wants to get rid of such monkeypatching though, hence the
> support for different backends within numpy and(/or) scipy itself is needed.
>
> Ralf

Anaconda's numpy is patched to use mkl_fft when available:
<https://github.com/AnacondaRecipes/numpy-feedstock/blob/master/recipe/0001-use-mklfft-when-available.patch>

Christoph

