variable smoothing kernel

variable smoothing kernel

Wolfgang Kerzendorf
Hello,

I'm interested in a Gaussian smoothing where the kernel width depends
(linearly, in this case) on the index it operates on. I implemented it
myself (probably badly) and it takes forever compared to the
fixed-kernel Gaussian smoothing in ndimage.

I could interpolate the array to be smoothed onto a log space and not
change the kernel, but that is complicated and I'd rather avoid it.

Is there a good way of doing that?

Cheers
     Wolfgang
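A minimal sketch of the kind of implementation being described (hypothetical -- `smooth_variable_gaussian` and its per-point loop are an editorial illustration, not Wolfgang's actual code from the pastebin):

```python
import numpy as np

def smooth_variable_gaussian(y, sigma):
    """Gaussian smoothing where sigma[i] sets the kernel width (in
    samples) used at output index i.  The per-point Python loop makes
    this O(n*n) and slow compared to ndimage's fixed-kernel filter."""
    y = np.asarray(y, dtype=float)
    x = np.arange(y.size)
    out = np.empty_like(y)
    for i in range(y.size):
        w = np.exp(-0.5 * ((x - i) / sigma[i]) ** 2)
        out[i] = np.dot(w, y) / w.sum()
    return out
```

Each output sample is a normalized weighted mean, so a constant signal passes through unchanged.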
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Re: variable smoothing kernel

Chris Barker - NOAA Federal
On 3/26/11 5:52 AM, Wolfgang Kerzendorf wrote:
> I'm interested in having a gaussian smoothing where the kernel depends
> (linearly in this case) on the index where it is operating on. I
> implemented it myself (badly probably) and it takes for ever, compared
> to the gaussian smoothing with a fixed kernel in ndimage.

I don't know of any code that does this out of the box.

If you post your code here, folks may be able to suggest ways to
improve its performance.

There is a limit to how fast you can do this in pure Python; if you
want to do better than that, give Cython a try -- this would be pretty
easy to write with Cython:

http://wiki.cython.org/tutorials/numpy

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[hidden email]
Re: variable smoothing kernel

Nicolau Werneck
In reply to this post by Wolfgang Kerzendorf
If I understand correctly, you want a filter that varies with "time".
This time-variance (the filter is no longer shift-invariant) makes it
inherently more complicated to compute than a normal linear
time-invariant filter.

I second Christopher's suggestion, try Cython out, it's great for this
kind of thing. Or perhaps scipy.weave.

++nic

--
Nicolau Werneck <[hidden email]>         C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck                   7B9E D6C4 37BB DA64 6F15
Linux user #460716
Re: variable smoothing kernel

Wolfgang Kerzendorf
Well, your time is my wavelength. It should vary with wavelength.

I agree that implementing it in Cython will probably make it faster. I
suspect, however, that it won't be as fast as the normal smoothing
function.

I have learned that multiplying functions in Fourier space is the same
as convolving them in the original domain. I believe that is how the
ndimage kernels work so incredibly fast.
I wanted to see if there's a similar shortcut for a variable kernel.
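That equivalence is easy to check numerically (a standalone NumPy illustration, not part of the original thread): zero-pad both sequences to the full output length, multiply their FFTs, and compare against direct convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)              # a test signal
k = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
k /= k.sum()                              # normalized Gaussian kernel

# Direct (time-domain) convolution:
direct = np.convolve(x, k)

# The same result via multiplication in Fourier space; padding both
# transforms to the full linear-convolution length avoids wrap-around:
n = x.size + k.size - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

assert np.allclose(direct, via_fft)
```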

I have copied my previous attempts (which were very simply written and
take a long time) into this pastebin: http://pastebin.com/KkcEATs7

Thanks for your help
      Wolfgang

Re: variable smoothing kernel

Nicolau Werneck
On Sun, Mar 27, 2011 at 12:54:58PM +1100, Wolfgang Kerzendorf wrote:
> Well, your time is my wavelength. It should vary on wavelength.

OK, but what kind of data do you have? Is it a 1-dimensional signal,
and have you taken its Fourier transform so that you are now filtering
in the frequency domain?...
 
> I agree implementing it with cython makes it probably faster. I do
> suspect however, that it won't be as fast as the normal smoothing function.
>
> I have learned that multiplying functions in fourier space is the same
> as convoluting them. I believe that is how the ndimage kernels work so
> incredibly fast.
> I wanted to see if there's a similar shortcut for a variable kernel.

Implementing it in Cython will definitely make it better!... Whenever
you have large loops running over vectors or arrays, Cython will give
you great speedups.

And as I said earlier, it will never be as fast, because applying a
linear time-invariant filter is inherently easier: you can use the FFT
and multiply in the transform domain, as you said.

In your case you could consider filtering the signal with a filter
bank, and then picking the values from the results according to the
formula you use for calculating your kernel. It may or may not be
quicker, but it won't work if you need infinite precision in the
parameters of your filter.
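A rough sketch of that filter-bank idea (an editorial illustration; `filter_bank_smooth` and the linear interpolation across bank outputs are assumptions, not code from the thread): smooth once per fixed width, then blend the bank outputs at each sample according to the locally requested width.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filter_bank_smooth(y, sigma, bank_sigmas):
    """Approximate variable-width smoothing with a bank of fixed-width
    Gaussian filters.  bank_sigmas must be increasing and bracket the
    values in sigma; each output sample linearly interpolates between
    the bank outputs closest to the locally requested width."""
    bank = np.array([gaussian_filter1d(y, s) for s in bank_sigmas])
    out = np.empty(len(y), dtype=float)
    for i, s in enumerate(sigma):
        out[i] = np.interp(s, bank_sigmas, bank[:, i])
    return out
```

The precision caveat above shows up here: widths between bank entries are only interpolated, not computed exactly.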

> I have copied my previous attempts (which were very simply written and
> take a long time) into this pastebin: http://pastebin.com/KkcEATs7

Thanks for sending it... but at first glance it's not clear to me how
it works. Can you send a small sample with a synthetic signal (randn,
whatever) showing how to run the procedures?

And a question: is there any chance you could first apply a change of
variables to your signal (like x -> log(x)), then apply a normal
linear time-invariant filter in the transformed domain, and finally
apply the inverse transform? You would first use interpolation to
perform the mapping, then apply the fast filtering procedure, and then
do the inverse interpolation...
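Sketched concretely (an editorial illustration under the assumption that the kernel FWHM grows linearly with wavelength, i.e. constant resolution R = lambda/fwhm, so the width is constant in log-wavelength; `smooth_via_log_grid` is a hypothetical name):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import gaussian_filter1d

def smooth_via_log_grid(wave, flux, R):
    """Change of variables x -> log(x): on a uniform log-wavelength
    grid a constant-R kernel has fixed FWHM 1/R in d(ln lambda), so a
    single fixed-kernel pass of gaussian_filter1d suffices."""
    log_w = np.linspace(np.log(wave[0]), np.log(wave[-1]), wave.size)
    on_log = interp1d(np.log(wave), flux)(log_w)    # forward mapping
    dlnw = log_w[1] - log_w[0]
    fwhm_to_sigma = 2.0 * np.sqrt(2.0 * np.log(2.0))
    sigma_pix = (1.0 / R) / fwhm_to_sigma / dlnw    # kernel width in pixels
    smoothed = gaussian_filter1d(on_log, sigma_pix)
    return interp1d(log_w, smoothed)(np.log(wave))  # inverse mapping
```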


++nic


Re: variable smoothing kernel

Wolfgang Kerzendorf
So I have now gone with interpolating onto a logarithmic spacing. This
is very fast (a spectrum with 8000 points takes about 1 ms). I think
for now this is a good method. I have also checked the errors
associated with interpolating to log space and back; they seem to be
relatively small. Thanks for all your suggestions and help.

Cheers
    Wolfgang

Re: variable smoothing kernel

Sturla Molden-2
In reply to this post by Wolfgang Kerzendorf
On 27.03.2011 03:54, Wolfgang Kerzendorf wrote:
> I have learned that multiplying functions in fourier space is the same
> as convoluting them.

Yes.

The best strategy, however, is to do this chunk-wise instead of
FFT'ing the whole signal. This leads to the so-called
"overlap-and-add" method, used by e.g. scipy.signal.fftfilt. fftfilt
tries to guess the optimum chunk size to use for filtering.

For short FIR filters it will be faster to filter in the time domain.

For moving-average (MA) filters one can use a very fast recursive
filter. A Bartlett filter can be implemented by applying an MA filter
twice, and a Gaussian can be approximated by applying an MA filter
four times. (A better IIR approximation of the Gaussian is available,
however; see below.)
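The iterated moving-average approximation is easy to check on an impulse (a quick NumPy/SciPy illustration of the point above, not code from the thread); four discrete MA passes of length L have total variance 4*(L**2 - 1)/12:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

# Impulse response of a moving average applied four times:
y = np.zeros(201)
y[100] = 1.0
for _ in range(4):
    y = uniform_filter1d(y, size=15)   # one MA pass of length 15

# Gaussian with the matching variance (one discrete MA pass of length
# L contributes (L**2 - 1) / 12):
sigma = np.sqrt(4 * (15 ** 2 - 1) / 12.0)
g = np.exp(-0.5 * ((np.arange(201) - 100) / sigma) ** 2)
g /= g.sum()

# The 4-pass MA impulse response is already close to a true Gaussian:
assert np.max(np.abs(y - g)) < 0.01
```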


> I believe that is how the ndimage kernels work so
> incredibly fast.

No. They truncate the Gaussian, which is why compute time depends on
the filter size. Cf. the comment on short FIR filters above.

For the Gaussian it is possible to use a recursive IIR filter instead,
for which compute time does not depend on the filter size.



> I wanted to see if there's a similar shortcut for a variable kernel.

Not in general, because the variable-width filter is no longer
shift-invariant, so the convolution theorem does not apply.


Sturla






Attachments: _gaussian.c (12K), gaussian.h (102 bytes), gaussian.pyx (3K)
Re: variable smoothing kernel

Wolfgang Kerzendorf
I guess by FIR filters you don't mean Far InfraRed ;-). Describing my specific case in more detail might help:

My signal is an optical spectrum with wavelength (nm) on the x axis and intensity on the y axis. I am trying to simulate the effect of the optical properties of a real spectrograph on synthetic spectra. Basically we describe this by the resolution R = lambda / fwhm(lambda); that's why I have a variable kernel size. So for the moment I interpolate the evenly (linearly) spaced spectrum onto a log-spaced grid (I use splrep and splev with k=1 rather than interp1d; I hope that's a good idea), then calculate the kernel size and use ndimage.gaussian_filter1d.

While we are on the topic: I sometimes have synthetic spectra that are irregularly sampled. What I do is resample them onto a linear grid using the smallest wavelength delta (which can be quite small). This obviously expands the spectrum to many times its original size. Then I resample onto a log spacing whose largest delta equals the smallest delta of the linear scale, which makes it even larger. There is probably a much smarter way.
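One possible refinement (an editorial sketch, not something suggested in the thread): skip the intermediate linear grid and interpolate the irregular samples directly onto a log-spaced grid whose step is set by the finest fractional spacing in the input. `to_log_grid` is a hypothetical helper:

```python
import numpy as np
from scipy.interpolate import interp1d

def to_log_grid(wave, flux):
    """Resample an irregularly sampled spectrum directly onto a
    uniform log-wavelength grid, without first building a huge
    linear grid.  The log step is the finest d(ln lambda) present
    in the input, so no sampling detail is lost."""
    ln_w = np.log(wave)
    dlnw = np.min(np.diff(ln_w))                 # finest local step
    n = int(np.ceil((ln_w[-1] - ln_w[0]) / dlnw)) + 1
    log_grid = ln_w[0] + dlnw * np.arange(n)
    log_grid = log_grid[log_grid <= ln_w[-1]]    # stay inside range
    return np.exp(log_grid), interp1d(ln_w, flux)(log_grid)
```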

I should also mention that my knowledge about signal processing is limited.

Your help is very much appreciated,
     Wolfgang
     


Re: variable smoothing kernel

Sturla Molden-2
On 27.03.2011 15:41, Wolfgang Kerzendorf wrote:
> I guess by FIR filters you don't mean Far InfraRed ;-).

Finite Impulse Response :)


Sturla
