
scipy.interpolate.rbf sensitive to input noise ?


scipy.interpolate.rbf sensitive to input noise ?

denis-bz-gg
A followup to the 15feb thread "creating a 3D surface plot from collected data":
the doc for scipy.interpolate.rbf says that 3 of the 7 radial functions
use a scale parameter `epsilon`, and 4 do not.
Odd -- how can some be scale-free ?
Running rbf on 100 1d random.uniform input points
(after changing the linalg.solve in rbf.py to lstsq)
shows that all but linear and gaussian are noisy,
giving interpolants way outside the input range:

    N: 100  ngrid: 50  sigma: 0.1
    # min  max  max |delta|  Rbf
     -1.0  1.0   0.5  gaussian
     -1.0  1.2   0.3  linear
     -1.4  2.1   2.5  thin-plate
     -1.0  2.2   3.0  inverse multiquadric
     -1.0  2.4   3.2  multiquadric
     -2.1 12.8   7.4  quintic
    -10.6  7.0  11.7  cubic

Should Rbf be moved off to noisymethods/...  until experts can look it over,
or have I done something stupid ?
(A back-of-the-envelope method for choosing epsilon aka r0 would be nice,
and a rough noise amplification model would be nice too.)

Also, the doc should have a link to
http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data
and should say that the initial rbf() setup is O(N^2).

cheers
  -- denis

<pre><code>
    # from http://scipy.org/Cookbook/RadialBasisFunctions
    import sys
    import numpy as np
    from rbf import Rbf  # local copy of scipy/interpolate/rbf.py with solve -> lstsq
    # from scipy.interpolate import Rbf
    Rbfuncs = ( 'gaussian', 'linear', 'thin-plate', 'inverse multiquadric',
        'multiquadric', 'quintic', 'cubic', )

    N = 100
    ngrid = 50
    sigma = .1
    exec "\n".join( sys.argv[1:] )  # allow overrides, e.g. N=50
    np.random.seed(1)
    np.set_printoptions( 2, threshold=50, edgeitems=5, suppress=True )  # .2f

    x = np.sort( np.random.uniform( 0, 10, N+1 ))
    y = np.sin(x) + np.random.normal( 0, sigma, x.shape )
    xi = np.linspace( 0, 10, ngrid+1 )

    print "N: %d  ngrid: %d  sigma: %.2g" % (N, ngrid, sigma)
    print "# min  max  max |delta|  Rbf"
    for func in Rbfuncs:
        rbf = Rbf( x, y, function=func )
        fi = rbf(xi)
        maxdiff = np.amax( np.abs( np.diff( fi )))
        print "%5.1f %4.1f  %4.1f  %s" % (
            fi.min(), fi.max(), maxdiff, func )
</code></pre>
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user

Re: scipy.interpolate.rbf sensitive to input noise ?

Robert Kern-2
On Thu, Feb 18, 2010 at 09:19, denis <[hidden email]> wrote:
> A followup to the 15feb thread "creating a 3D surface plot from
> collected data":
> the doc for scipy.interpolate.rbf says that 3 of the 7 radial
> functions
> use a scale parameter `epsilon`, 4 do not.
> Odd -- how can some be scale-free ?
> Running rbf on 100 1d random.uniform input points
> (after changing the linalg.solve in rbf.py to lstsq)

Why would you do this? This does not magically turn Rbf interpolation
into a smoothing approximation. Use the "smooth" keyword to specify a
smoothing parameter.

> shows that all but linear and gaussian are noisy,
> giving interpolants way outside the input range:
>
>    N: 100  ngrid: 50  sigma: 0.1
>    # min  max  max |delta|  Rbf
>     -1.0  1.0   0.5  gaussian
>     -1.0  1.2   0.3  linear
>     -1.4  2.1   2.5  thin-plate
>     -1.0  2.2   3.0  inverse multiquadric
>     -1.0  2.4   3.2  multiquadric
>     -2.1 12.8   7.4  quintic
>    -10.6  7.0  11.7  cubic
>
> Should Rbf be moved off to noisymethods/...  until experts can look it
> over,
> or have I done something stupid ?

I'm not sure what else you would expect from interpolating uniform
random noise. Many interpolation methods tend to give such
oscillations on such data.
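The mechanics behind the smooth keyword can be sketched in a few lines of numpy (a minimal 1-d illustration of my own, not scipy's actual code; note it uses the classic ridge sign A + smooth*I, whereas rbf.py as shipped forms A - smooth*I):

```python
import numpy as np

def gauss_rbf_fit(x, y, epsilon=1.0, smooth=0.0):
    # Gram matrix of pairwise Gaussian radial functions.
    r = np.abs(x[:, None] - x[None, :])
    A = np.exp(-(r / epsilon) ** 2)
    # smooth=0 gives exact interpolation; smooth > 0 is ridge/Tikhonov
    # regularization (classic sign here; rbf.py as shipped subtracts instead).
    return np.linalg.solve(A + smooth * np.eye(len(x)), y)

def gauss_rbf_eval(xi, x, w, epsilon=1.0):
    # Evaluate the fitted radial sum at new points xi.
    r = np.abs(xi[:, None] - x[None, :])
    return np.exp(-(r / epsilon) ** 2) @ w
```

With smooth=0 the fit passes through every (possibly noisy) data point; with smooth > 0 it only approximates them, which is what you want for noisy input.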

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

Re: scipy.interpolate.rbf sensitive to input noise ?

denis-bz-gg
On Feb 18, 4:48 pm, Robert Kern <[hidden email]> wrote:
> On Thu, Feb 18, 2010 at 09:19, denis <[hidden email]> wrote:

> > Running rbf on 100 1d random.uniform input points
> > (after changing the linalg.solve in rbf.py to lstsq)
>
> Why would you do this? This does not magically turn Rbf interpolation
> into a smoothing approximation. Use the "smooth" keyword to specify a
> smoothing parameter.

Robert,
  I'm interpolating  y = np.sin(x) + np.random.normal( 0, .1 ), not random noise.
The idea was to look at gaussian vs thin-plate; true, that 1d snippet doesn't say much,
but my 2d plots were so noisy that I went down to 1d.

Use "smooth" ? rbf.py just does
    self.A = self._function(r) - eye(self.N)*self.smooth
and you don't know A .

By the way (googling): http://www.farfieldtechnology.com/products/toolbox ...
they have an O(N ln N) FastRBF (TM) in Matlab, commercial.

cheers
  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

Robert Kern-2
On Fri, Feb 19, 2010 at 10:26, denis <[hidden email]> wrote:

> On Feb 18, 4:48 pm, Robert Kern <[hidden email]> wrote:
>> On Thu, Feb 18, 2010 at 09:19, denis <[hidden email]> wrote:
>
>> > Running rbf on 100 1d random.uniform input points
>> > (after changing the linalg.solve in rbf.py to lstsq)
>>
>> Why would you do this? This does not magically turn Rbf interpolation
>> into a smoothing approximation. Use the "smooth" keyword to specify a
>> smoothing parameter.
>
> Robert,
>  i'm interpolating  y = np.sin(x) + np.random.normal( 0,  .1 ), not
> random noise.

Okay. You said otherwise. It would help if you simply attached the
code that you are using.

> The idea was to look at gaussian vs thin-plate;
> true, that 1d snippet doesn't say much,
> but my 2d plots were so noisy that I went down to 1d.
>
> Use "smooth" ? rbf.py just does
>    self.A = self._function(r) - eye(self.N)*self.smooth
> and you don't know A .

I have no idea what you mean by the last part of your sentence. Did
you actually try using the smooth keyword? What were your results?

> Bytheway (googling), http://www.farfieldtechnology.com/products/toolbox
> ...
> have an O(N lnN) FastRBF TM in matlab, $

Okay. And?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
In reply to this post by denis-bz-gg
On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:

> On Feb 18, 4:48 pm, Robert Kern <[hidden email]> wrote:
>> On Thu, Feb 18, 2010 at 09:19, denis <[hidden email]> wrote:
>
>> > Running rbf on 100 1d random.uniform input points
>> > (after changing the linalg.solve in rbf.py to lstsq)
>>
>> Why would you do this? This does not magically turn Rbf interpolation
>> into a smoothing approximation. Use the "smooth" keyword to specify a
>> smoothing parameter.
>
> Robert,
>  i'm interpolating  y = np.sin(x) + np.random.normal( 0,  .1 ), not
> random noise.
> The idea was to look at gaussian vs thin-plate;
> true, that 1d snippet doesn't say much,
> but my 2d plots were so noisy that I went down to 1d.
>
> Use "smooth" ? rbf.py just does
>    self.A = self._function(r) - eye(self.N)*self.smooth
> and you don't know A .

rbf is a global interpolator, and as Robert said, for some cases you
get very unstable results.

With a large number of points, self._function(r) is not very well
behaved and you need the penalization to make it smoother. In a
variation of your example, I printed out the eigenvalues for self.A,
and they don't look nice without penalization.
There are still some strange things that I don't understand about the
behavior of rbf, but when I looked at it in the past I got better
results by using only local information, i.e. fitting rbf only to a
neighborhood of the points, although I think thin-plate did pretty well
with a larger number of points too.

Josef

>
> Bytheway (googling), http://www.farfieldtechnology.com/products/toolbox
> ...
> have an O(N lnN) FastRBF TM in matlab, $
>
> cheers
>  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

denis-bz-gg
On Feb 19, 5:41 pm, [hidden email] wrote:
> On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:

> > Use "smooth" ? rbf.py just does
> >    self.A = self._function(r) - eye(self.N)*self.smooth
> > and you don't know A .

That's a line from scipy/interpolate/rbf.py: it solves
    (A - smooth*I)x = b  instead of
    Ax = b
Looks to me like a hack for A singular, plus the caller doesn't know A anyway.

I had looked at the eigenvalues too (100 points, 2d, like test_rbf.py):

    gauss     : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  1.2e-10 ... 44
    linear    : -1.8e+02 ... -0.027  5.6e+02
    thinplate : -4.7e+03 ... -5.4e+02  0.0032

So gauss is singular => don't do that then.
(Odd that linalg.solve didn't raise LinAlgError though.)
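The near-singularity is easy to reproduce; a 1d numpy sketch (my own, with an assumed epsilon = 1 rather than rbf.py's default) shows the gaussian kernel matrix's spectrum collapsing to zero:

```python
import numpy as np

np.random.seed(1)
x = np.sort(np.random.uniform(0, 10, 100))
r = np.abs(x[:, None] - x[None, :])
A = np.exp(-(r / 1.0) ** 2)    # the 'gaussian' kernel matrix, epsilon = 1
ev = np.linalg.eigvalsh(A)     # symmetric matrix, eigenvalues ascending
print(ev[0], ev[-1])           # smallest ~ 0  =>  numerically singular
```

(linalg.solve raises LinAlgError only when the LU factorization hits an exactly zero pivot, so a merely ill-conditioned A slips through silently.)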

> rbf is a global interpolator, and as Robert said for some cases you
> get very unstable results.
>
> with a large number of points self._function(r)  is not very well
> behaved and you need the penalization to make it smoother. In a
> variation of your example, I printed out the eigenvalues for self.A
> and they don't look nice without penalization.
> There are still some strange things that I don't understand with the
> behavior rbf, but when I looked at it in the past, I got better
> results by using only local information, i.e. fit rbf only to a
> neighborhood of the points, although I think thinplate did pretty well
> also with a larger number of points.

Yes -- but you don't know which points are local without some
k-nearest-neighbor algorithm ?  In 2d, might as well triangulate.
A gaussian weights nearer points automatically, BUT rbf gauss looks
singular / unusable to me.

cheers
  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
On Mon, Feb 22, 2010 at 7:35 AM, denis <[hidden email]> wrote:

> On Feb 19, 5:41 pm, [hidden email] wrote:
>> On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:
>
>> > Use "smooth" ? rbf.py just does
>> >    self.A = self._function(r) - eye(self.N)*self.smooth
>> > and you don't know A .
>
> That's a line from scipy/interpolate/rbf.py: it solves
>    (A - smooth*I)x = b  instead of
>    Ax = b
> Looks to me like a hack for A singular, plus the caller doesn't know A
> anyway.

It's not a hack, it's a requirement: ill-posed inverse problems need
penalization; this is just Ridge or Tikhonov with a kernel matrix. A
is (nobs, nobs), and the number of features is always the same as the
number of observations that are used. (I was looking at "Kernel Ridge
Regression" and "Gaussian Process" before I realized that rbf is
essentially the same, at least for 'gauss'.)
I don't know anything about thinplate.

I still don't understand what you mean by "the caller doesn't know
A".  A is the internally calculated kernel matrix (if I remember
correctly).


>
> I had looked at the eigenvalues too (100 points, 2d, like
> test_rbf.py):
> gauss     : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 1.2e-10 ... 44
> linear    : -1.8e+02 ... -0.027 5.6e+02
> thinplate : -4.7e+03 ... -5.4e+02 0.0032
>
> So gauss is singular => don't do that then.
> (Odd that linalg.solve didn't LinAlgError though.)
>
>> rbf is a global interpolator, and as Robert said for some cases you
>> get very unstable results.
>>
>> with a large number of points self._function(r)  is not very well
>> behaved and you need the penalization to make it smoother. In a
>> variation of your example, I printed out the eigenvalues for self.A
>> and they don't look nice without penalization.
>> There are still some strange things that I don't understand with the
>> behavior rbf, but when I looked at it in the past, I got better
>> results by using only local information, i.e. fit rbf only to a
>> neighborhood of the points, although I think thinplate did pretty well
>> also with a larger number of points.
>
> Yes -- but you don't know which points are local
> without some k-nearest-neighbor algorithm ? in 2d, might as well
> triangulate.

I used scipy.spatial for this; or, if there are not too many points, use
the full dense distance matrix, setting far-away points to zero (kind
of). My example script should be on the mailing list.
rbf works for nd, not just 1d or 2d; I think only the number of
observations is important, not the number of features.

> Gaussian weights nearer points automatically BUT rbf gauss looks to me
> singular / unusable.

Unusable without a large enough smooth > 0; e.g. I tried smooth=0.4 in bad cases.

Cheers,

Josef

>
> cheers
>  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

denis-bz-gg
On Feb 22, 3:08 pm, [hidden email] wrote:

> On Mon, Feb 22, 2010 at 7:35 AM, denis <[hidden email]> wrote:
> > On Feb 19, 5:41 pm, [hidden email] wrote:
> >> On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:
>
> >> > Use "smooth" ? rbf.py just does
> >> >    self.A = self._function(r) - eye(self.N)*self.smooth
> >> > and you don't know A .
>
> > That's a line from scipy/interpolate/rbf.py: it solves
> >    (A - smooth*I)x = b  instead of
> >    Ax = b
> > Looks to me like a hack for A singular, plus the caller doesn't know A
> > anyway.

> It's not a hack it's a requirement, ill-posed inverse problems need

OK, I must be wrong; but (sorry, I'm ignorant) how can (A - smooth*I)
penalize ?
For gauss the eigenvalues are >= 0, many of them 0, so we're shifting
them negative ??
Or is it a simple sign error, A + smooth*I ?

> penalization, this is just Ridge or Tychonov with a kernel matrix. A
> is (nobs,nobs) and the number of features is always the same as the
> number of observations that are used. (I was looking at "Kernel Ridge
> Regression" and "Gaussian Process" before I realized that rbf is
> essentially the same, at least for 'gauss')
> I don't know anything about thinplate.
>
> I still don't understand what you mean with "the caller doesn't know
> A".  A is the internally calculated kernel matrix (if I remember
> correctly.)

Yes, that's right; but how can the caller of Rbf() give a reasonable
value of "smooth" to solve (A - smooth*I) inside Rbf, without knowing
A ?  A is wildly different for gauss, linear ... too.
Or do you just shut your eyes and try 1e-6 ?

Thanks Josef,
cheers
  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

Robert Kern-2
On Mon, Feb 22, 2010 at 10:14, denis <[hidden email]> wrote:
> On Feb 22, 3:08 pm, [hidden email] wrote:

>> I still don't understand what you mean with "the caller doesn't know
>> A".  A is the internally calculated kernel matrix (if I remember
>> correctly.)
>
> Yes that's right; how can the caller of Rbf() give a reasonable value
> of "smooth"
> to solve (A - smoothI) inside Rbf, without knowing A ?  A is wildly
> different for gauss, linear ... too.

Ah! *That's* what you meant.

> Or do you just shut your eyes and try 1e-6  ?

Basically, yes. You can try different values while watching the
residuals if you want a more rigorous approach. But it really does
work. Try using 'multiquadric' and smooth=0.1.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
In reply to this post by denis-bz-gg
On Mon, Feb 22, 2010 at 11:14 AM, denis <[hidden email]> wrote:

> On Feb 22, 3:08 pm, [hidden email] wrote:
>> On Mon, Feb 22, 2010 at 7:35 AM, denis <[hidden email]> wrote:
>> > On Feb 19, 5:41 pm, [hidden email] wrote:
>> >> On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:
>>
>> >> > Use "smooth" ? rbf.py just does
>> >> >    self.A = self._function(r) - eye(self.N)*self.smooth
>> >> > and you don't know A .
>>
>> > That's a line from scipy/interpolate/rbf.py: it solves
>> >    (A - smooth*I)x = b  instead of
>> >    Ax = b
>> > Looks to me like a hack for A singular, plus the caller doesn't know A
>> > anyway.
>
>> It's not a hack it's a requirement, ill-posed inverse problems need
>
> OK, I must be wrong; but (sorry, I'm ignorant) how can (A - smooth)
> penalize ?
> For gauss the eigenvalues are >= 0, many 0, so we're shifting them
> negative ??
> Or is it a simple sign error, A + smooth ?

ouch, standard Ridge is A + smooth * identity_matrix to make it
positive definite.

I don't know why there is a minus. When I checked the eigenvalues, I
found it strange that there were some large *negative* eigenvalues of
A, but I didn't have time to figure this out.
Generically, A - smooth*eye would still make it invertible, although
not positive definite

I haven't looked at the sign convention in rbf, but if you figure out
what's going on, I'm very interested in an answer.

I just briefly ran an example with a negative smooth (-0.5 versus
0.5), rbf with gauss seems better, but multiquadric seems worse. If
smooth is small 1e-6, then there is not much difference.

Even with negative smooth, all except for gauss still have negative
eigenvalues. I have no idea, I only looked at the theory for gaussian
process and don't know how the other ones differ.


>
>> penalization, this is just Ridge or Tychonov with a kernel matrix. A
>> is (nobs,nobs) and the number of features is always the same as the
>> number of observations that are used. (I was looking at "Kernel Ridge
>> Regression" and "Gaussian Process" before I realized that rbf is
>> essentially the same, at least for 'gauss')
>> I don't know anything about thinplate.
>>
>> I still don't understand what you mean with "the caller doesn't know
>> A".  A is the internally calculated kernel matrix (if I remember
>> correctly.)
>
> Yes that's right; how can the caller of Rbf() give a reasonable value
> of "smooth"
> to solve (A - smoothI) inside Rbf, without knowing A ?  A is wildly
> different for gauss, linear ... too.
> Or do you just shut your eyes and try 1e-6  ?

That's the usual problem of bandwidth selection for non-parametric
estimation: visual inspection, cross-validation, plug-in, ... I don't
know what's recommended for rbf.
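Leave-one-out cross-validation over a grid of smooth values is simple to sketch (my own code, a gaussian kernel with the ridge sign A + smooth*I, not scipy's API):

```python
import numpy as np

def loo_error(x, y, smooth, epsilon=1.0):
    # Mean squared leave-one-out error of a ridge-regularized gaussian RBF fit.
    n = len(x)
    err = 0.0
    for i in range(n):
        m = np.arange(n) != i                       # hold out point i
        r = np.abs(x[m][:, None] - x[m][None, :])
        A = np.exp(-(r / epsilon) ** 2) + smooth * np.eye(n - 1)
        w = np.linalg.solve(A, y[m])
        ri = np.abs(x[i] - x[m])
        err += (np.exp(-(ri / epsilon) ** 2) @ w - y[i]) ** 2
    return err / n

np.random.seed(1)
x = np.sort(np.random.uniform(0, 10, 60))
y = np.sin(x) + np.random.normal(0, 0.1, x.shape)
smooths = [1e-8, 1e-4, 1e-2, 0.1, 1.0]
errors = [loo_error(x, y, s) for s in smooths]
best = smooths[int(np.argmin(errors))]
print(best, min(errors))
```

This is O(N) solves of an (N-1)x(N-1) system per candidate, so only practical for small N; plug-in rules would be cheaper.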

Cheers,

Josef

>
> Thanks Josef,
> cheers
>  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
On Mon, Feb 22, 2010 at 11:45 AM,  <[hidden email]> wrote:

> On Mon, Feb 22, 2010 at 11:14 AM, denis <[hidden email]> wrote:
>> On Feb 22, 3:08 pm, [hidden email] wrote:
>>> On Mon, Feb 22, 2010 at 7:35 AM, denis <[hidden email]> wrote:
>>> > On Feb 19, 5:41 pm, [hidden email] wrote:
>>> >> On Fri, Feb 19, 2010 at 11:26 AM, denis <[hidden email]> wrote:
>>>
>>> >> > Use "smooth" ? rbf.py just does
>>> >> >    self.A = self._function(r) - eye(self.N)*self.smooth
>>> >> > and you don't know A .
>>>
>>> > That's a line from scipy/interpolate/rbf.py: it solves
>>> >    (A - smooth*I)x = b  instead of
>>> >    Ax = b
>>> > Looks to me like a hack for A singular, plus the caller doesn't know A
>>> > anyway.
>>
>>> It's not a hack it's a requirement, ill-posed inverse problems need
>>
>> OK, I must be wrong; but (sorry, I'm ignorant) how can (A - smooth)
>> penalize ?
>> For gauss the eigenvalues are >= 0, many 0, so we're shifting them
>> negative ??
>> Or is it a simple sign error, A + smooth ?
>
> ouch, standard Ridge is A + smooth * identity_matrix to make it
> positive definite.
>
> I don't know why there is a minus. When I checked the eigenvalues, I
> found it strange that there were some large *negative* eigenvalues of
> A, but I didn't have time to figure this out.
> Generically, A - smooth*eye would still make it invertible, although
> not positive definite
>
> I haven't looked at the sign convention in rbf, but if you figure out
> what's going on, I'm very interested in an answer.
>
> I just briefly ran an example with a negative smooth (-0.5 versus
> 0.5), rbf with gauss seems better, but multiquadric seems worse. If
> smooth is small 1e-6, then there is not much difference.
>
> Even with negative smooth, all except for gauss still have negative
> eigenvalues. I have no idea, I only looked at the theory for gaussian
> process and don't know how the other ones differ.

My intuition for gaussian might not be correct for the other rbfs:
gauss is decreasing in distance, all the others are increasing in distance.

>>> for func in Rbfuncs: print func, Rbf( x, y, function=func ,smooth=smooth)._function(np.arange(5))

gaussian [ 1.      0.9272  0.739   0.5063  0.2982]
linear [0 1 2 3 4]
thin-plate [  0.       0.       2.7726   9.8875  22.1807]
multiquadric [ 1.      1.0371  1.1413  1.2964  1.4866]
quintic [   0    1   32  243 1024]
cubic [ 0  1  8 27 64]

and the distance matrix itself has negative eigenvalues
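That is easy to confirm: a distance matrix has a zero diagonal, so its eigenvalues sum to zero and it is always indefinite (a small sketch of my own):

```python
import numpy as np

np.random.seed(1)
x = np.sort(np.random.uniform(0, 10, 50))
A = np.abs(x[:, None] - x[None, :])  # the 'linear' rbf matrix: plain distances
ev = np.linalg.eigvalsh(A)           # trace(A) = 0  =>  eigenvalues sum to 0
print(ev[0], ev[-1])                 # mixed signs: indefinite, not PSD
```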

Josef

>
>
>>
>>> penalization, this is just Ridge or Tychonov with a kernel matrix. A
>>> is (nobs,nobs) and the number of features is always the same as the
>>> number of observations that are used. (I was looking at "Kernel Ridge
>>> Regression" and "Gaussian Process" before I realized that rbf is
>>> essentially the same, at least for 'gauss')
>>> I don't know anything about thinplate.
>>>
>>> I still don't understand what you mean with "the caller doesn't know
>>> A".  A is the internally calculated kernel matrix (if I remember
>>> correctly.)
>>
>> Yes that's right; how can the caller of Rbf() give a reasonable value
>> of "smooth"
>> to solve (A - smoothI) inside Rbf, without knowing A ?  A is wildly
>> different for gauss, linear ... too.
>> Or do you just shut your eyes and try 1e-6  ?
>
> That's the usual problem of bandwidth selection for non-parametric
> estimation, visual inspection, cross-validation, plug-in, ... I don't
> know what's recommended for rbf.
>
> Cheers,
>
> Josef
>
>>
>> Thanks Josef,
>> cheers
>>  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

denis-bz-gg
Robert, Josef,
  thanks much for taking the time to look at RBF some more.
Summary, correct me:
    A - smooth*I in rbf.py is a sign error (ticket ?)
    for gauss, start with A + 1e-6*I  to move eigenvalues away from 0
    others have pos/neg eigenvalues, don't need smooth.

Looking at Wikipedia's Tikhonov article (thanks Josef) reminded me
of lstsq_near on advice.mechanicalkern:
minimize |Ax-b| and w|x| together, i.e. Tikhonov with Gamma matrix = I.

But for RBF, why minimize |x| ?  Don't we really want to minimize
    |Ax-b|^2 + w^2 x'Rx
where R is a roughness penalty ?
On a regular grid R_ij ~ a Laplacian smoother, not much like I.
Here I'm over my head;
any ideas for a quick-and-dirty roughness matrix for scattered data ?
(Just for fun -- for RBF we've reached the point of diminishing returns.)
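One quick-and-dirty possibility for the 1d sorted-points case (my own sketch: the generalized-Tikhonov normal equations with a second-difference matrix as the roughness operator R, penalizing the coefficient vector rather than the fitted surface itself):

```python
import numpy as np

def rough_penalized_fit(x, y, epsilon=1.0, w=0.1):
    # Gaussian kernel matrix on sorted 1d points.
    n = len(x)
    r = np.abs(x[:, None] - x[None, :])
    A = np.exp(-(r / epsilon) ** 2)
    # Second-difference operator D, shape (n-2, n): (D c)_i ~ curvature of c.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = (1.0, -2.0, 1.0)
    R = D.T @ D   # roughness penalty matrix, positive semidefinite
    # Normal equations of  min |A c - y|^2 + w^2 c' R c
    return np.linalg.solve(A.T @ A + w ** 2 * R, A.T @ y)
```

For truly scattered 2d data one would need a graph Laplacian built from nearest neighbors in place of D, which is beyond a quick sketch.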

cheers
  -- denis

Re: scipy.interpolate.rbf sensitive to input noise ?

Robert Kern-2
On Tue, Feb 23, 2010 at 11:43, denis <[hidden email]> wrote:
> Robert, Josef,
>  thanks much for taking the time to look at RBF some more.
> Summary, correct me:
>    A - smooth*I in rbf.py is a sign error (ticket ?)

Not necessarily. It seems to work well in at least some cases. Find a
reference that says otherwise if you want it changed.

>    for gauss, start with A + 1e-6*I  to move eigenvalues away from 0
>    others have pos/neg eigenvalues, don't need smooth.

No. If you need a smoothed approximation to noisy data rather than an
exact interpolation, then you use the smooth keyword. Otherwise, you
don't.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
On Tue, Feb 23, 2010 at 12:46 PM, Robert Kern <[hidden email]> wrote:
> On Tue, Feb 23, 2010 at 11:43, denis <[hidden email]> wrote:
>> Robert, Josef,
>>  thanks much for taking the time to look at RBF some more.
>> Summary, correct me:
>>    A - smooth*I in rbf.py is a sign error (ticket ?)
>
> Not necessarily. It seems to work well in at least some cases. Find a
> reference that says otherwise if you want it changed.

Chapter 2, page 16, for Gaussian processes. As I said, I don't know about
the other methods:

http://docs.google.com/viewer?a=v&q=cache:qs8AaAxO6nkJ:www.gaussianprocess.org/gpml/chapters/RW2.pdf+gaussian+process+noise+Ridge&hl=en&gl=ca&pid=bl&srcid=ADGEESj4j8osT6cOIc65r3OaeAtQO_dzgZD4YxSAEkFTeRZajBcROJpJJ9zTlMSrD2OaK1iOJYgy8QqH_Nr0rNxf41faNihCdIzWyVOYxtCFIR7H8mdQZAKFoeaRkFamQlCKhp_s1FOI&sig=

www.gaussianprocess.org/gpml/chapters/RW2.pdf

Josef

>
>>    for gauss, start with A + 1e-6*I  to move eigenvalues away from 0
>>    others have pos/neg eigenvalues, don't need smooth.
>
> No. If you need a smoothed approximation to noisy data rather than
> exact interpolation than you use the smooth keyword. Otherwise, you
> don't.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>  -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> [hidden email]
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
On Tue, Feb 23, 2010 at 12:57 PM,  <[hidden email]> wrote:

> On Tue, Feb 23, 2010 at 12:46 PM, Robert Kern <[hidden email]> wrote:
>> On Tue, Feb 23, 2010 at 11:43, denis <[hidden email]> wrote:
>>> Robert, Josef,
>>>  thanks much for taking the time to look at RBF some more.
>>> Summary, correct me:
>>>    A - smooth*I in rbf.py is a sign error (ticket ?)
>>
>> Not necessarily. It seems to work well in at least some cases. Find a
>> reference that says otherwise if you want it changed.
>
> chapter 2 page 16, for gaussian process. As I said I don't know about
> the other methods
>

http://docs.google.com/viewer?a=v&q=cache:qs8AaAxO6nkJ:www.gaussianprocess.org/gpml/chapters/RW2.pdf+gaussian+process+noise+Ridge&hl=en&gl=ca&pid=bl&srcid=ADGEESj4j8osT6cOIc65r3OaeAtQO_dzgZD4YxSAEkFTeRZajBcROJpJJ9zTlMSrD2OaK1iOJYgy8QqH_Nr0rNxf41faNihCdIzWyVOYxtCFIR7H8mdQZAKFoeaRkFamQlCKhp_s1FOI&sig=AHIEtbQK35MLfnZAySw3lF-dR_mNcSaP3w

(google links are very long; the first one missed a part)

>
> www.gaussianprocess.org/gpml/chapters/RW2.pdf
>
> Josef
>>
>>>    for gauss, start with A + 1e-6*I  to move eigenvalues away from 0
>>>    others have pos/neg eigenvalues, don't need smooth.
>>
>> No. If you need a smoothed approximation to noisy data rather than
>> exact interpolation than you use the smooth keyword. Otherwise, you
>> don't.
>>
>> --
>> Robert Kern
>>
>> "I have come to believe that the whole world is an enigma, a harmless
>> enigma that is made terrible by our own mad attempt to interpret it as
>> though it had an underlying truth."
>>  -- Umberto Eco
>> _______________________________________________
>> SciPy-User mailing list
>> [hidden email]
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: scipy.interpolate.rbf sensitive to input noise ?

Robert Kern-2
On Tue, Feb 23, 2010 at 11:59,  <[hidden email]> wrote:

> On Tue, Feb 23, 2010 at 12:57 PM,  <[hidden email]> wrote:
>> On Tue, Feb 23, 2010 at 12:46 PM, Robert Kern <[hidden email]> wrote:
>>> On Tue, Feb 23, 2010 at 11:43, denis <[hidden email]> wrote:
>>>> Robert, Josef,
>>>>  thanks much for taking the time to look at RBF some more.
>>>> Summary, correct me:
>>>>    A - smooth*I in rbf.py is a sign error (ticket ?)
>>>
>>> Not necessarily. It seems to work well in at least some cases. Find a
>>> reference that says otherwise if you want it changed.
>>
>> chapter 2 page 16, for gaussian process. As I said I don't know about
>> the other methods
>>
>
> http://docs.google.com/viewer?a=v&q=cache:qs8AaAxO6nkJ:www.gaussianprocess.org/gpml/chapters/RW2.pdf+gaussian+process+noise+Ridge&hl=en&gl=ca&pid=bl&srcid=ADGEESj4j8osT6cOIc65r3OaeAtQO_dzgZD4YxSAEkFTeRZajBcROJpJJ9zTlMSrD2OaK1iOJYgy8QqH_Nr0rNxf41faNihCdIzWyVOYxtCFIR7H8mdQZAKFoeaRkFamQlCKhp_s1FOI&sig=AHIEtbQK35MLfnZAySw3lF-dR_mNcSaP3w
>
> google links are very short, missed a part

I've found a couple of things on RBFs specifically that agree.
However, the original source on which our implementation is based does
subtract. He may have a source that uses a negative sign.

http://www.mathworks.co.uk/matlabcentral/fileexchange/10056-scattered-data-interpolation-and-approximation-using-radial-base-functions

Playing around, it seems to me that some of the radial functions
create smoother approximations with large positive values (with the
current implementation) while others create smoother approximations
with large negative values. It's possible that it's just a convention
as to which sign you use.
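For concreteness, here is a minimal numpy sketch of that experiment: a hand-rolled gaussian-only version of the rbf.py fit (phi(r) - smooth*I, then solve), with eps, smooth, and the test signal all picked arbitrarily here, not rbf.py's defaults:

```python
import numpy as np

def gauss_rbf_fit(x, y, eps, smooth):
    # mimic rbf.py's linear system: A = phi(r) - smooth*I, solve for coefficients
    r = np.abs(x[:, None] - x[None, :])
    A = np.exp(-(r / eps) ** 2) - smooth * np.eye(len(x))
    return np.linalg.solve(A, y)

def gauss_rbf_eval(x, coef, eps, xnew):
    # evaluate the fitted expansion sum_j coef[j] * phi(|xnew - x[j]|)
    r = np.abs(xnew[:, None] - x[None, :])
    return np.exp(-(r / eps) ** 2).dot(coef)

rng = np.random.RandomState(1)
x = np.sort(rng.uniform(0, 10, 50))
y = np.sin(x) + rng.normal(0, 0.1, x.shape)   # noisy sine test signal
xg = np.linspace(0, 10, 200)
for smooth in (-0.1, 0.1):   # smooth = -0.1 gives A + 0.1*I, the ridge-like sign
    fit = gauss_rbf_eval(x, gauss_rbf_fit(x, y, 1.0, smooth), 1.0, xg)
    print('smooth = %+.2f   max|fit| = %.2f' % (smooth, np.abs(fit).max()))
```

With the ridge-like sign the fit stays in the range of the data; with the other sign the system is indefinite and the fit may or may not blow up, depending on where the eigenvalues land.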

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
Re: scipy.interpolate.rbf sensitive to input noise ?

josef.pktd
On Wed, Mar 3, 2010 at 4:15 PM, Robert Kern <[hidden email]> wrote:

> On Tue, Feb 23, 2010 at 11:59,  <[hidden email]> wrote:
>> On Tue, Feb 23, 2010 at 12:57 PM,  <[hidden email]> wrote:
>>> On Tue, Feb 23, 2010 at 12:46 PM, Robert Kern <[hidden email]> wrote:
>>>> On Tue, Feb 23, 2010 at 11:43, denis <[hidden email]> wrote:
>>>>> Robert, Josef,
>>>>>  thanks much for taking the time to look at RBF some more.
>>>>> Summary, correct me:
>>>>>    A - smooth*I in rbf.py is a sign error (ticket ?)
>>>>
>>>> Not necessarily. It seems to work well in at least some cases. Find a
>>>> reference that says otherwise if you want it changed.
>>>
>>> chapter 2 page 16, for gaussian process. As I said I don't know about
>>> the other methods
>>>
>>
>> http://docs.google.com/viewer?a=v&q=cache:qs8AaAxO6nkJ:www.gaussianprocess.org/gpml/chapters/RW2.pdf+gaussian+process+noise+Ridge&hl=en&gl=ca&pid=bl&srcid=ADGEESj4j8osT6cOIc65r3OaeAtQO_dzgZD4YxSAEkFTeRZajBcROJpJJ9zTlMSrD2OaK1iOJYgy8QqH_Nr0rNxf41faNihCdIzWyVOYxtCFIR7H8mdQZAKFoeaRkFamQlCKhp_s1FOI&sig=AHIEtbQK35MLfnZAySw3lF-dR_mNcSaP3w
>>
>> google links are very short, missed a part
>
> I've found a couple of things on RBFs specifically that agree.
> However, the original source on which our implementation is based does
> subtract. He may have a source that uses a negative sign.
>
> http://www.mathworks.co.uk/matlabcentral/fileexchange/10056-scattered-data-interpolation-and-approximation-using-radial-base-functions
>
> Playing around, it seems to me that some of the radial functions
> create smoother approximations with large positive values (with the
> current implementation) while others create smoother approximations
> with large negative values. It's possible that it's just a convention
> as to which sign you use.

From the examples I checked, it looks like all kernels other than the
gaussian have large positive and negative eigenvalues, so adding or
subtracting smooth*I doesn't change the fact that the matrix is indefinite.
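A quick numpy check of this claim, with the scale eps and the point set picked arbitrarily (not rbf.py's defaults):

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.sort(rng.uniform(0, 10, 50))
r = np.abs(x[:, None] - x[None, :])    # pairwise distance matrix
eps = 1.0                              # scale parameter, arbitrary choice
kernels = {
    'gaussian':     np.exp(-(r / eps) ** 2),
    'multiquadric': np.sqrt((r / eps) ** 2 + 1),
    # r**2 * log(r), with the 0*log(0) -> 0 convention on the diagonal
    'thin-plate':   r ** 2 * np.log(np.where(r == 0, 1.0, r)),
}
for name, A in kernels.items():
    ev = np.linalg.eigvalsh(A)
    print('%-12s  min eig %+10.3g   max eig %+10.3g' % (name, ev.min(), ev.max()))
```

The gaussian matrix comes out positive (semi)definite, while multiquadric and thin-plate both show eigenvalues of both signs.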

For the gaussian case it is different: with smooth=0 the smallest
eigenvalues are zero and all the others are positive. Only by *adding*
the penalization does the matrix become positive definite. (In the
Ridge regression type of penalization, I've never seen a case where
eigenvalues are made negative in order to invert a matrix.)
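Concretely, for a gaussian kernel matrix (eps = 1 and the point set chosen arbitrarily here), the sign of the penalization decides definiteness:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 10, 40))
A = np.exp(-(x[:, None] - x[None, :]) ** 2)   # gaussian kernel matrix, eps = 1
s = 0.1                                       # smooth parameter, arbitrary value
for label, M in [('A      ', A),
                 ('A + s*I', A + s * np.eye(len(x))),
                 ('A - s*I', A - s * np.eye(len(x)))]:
    # adding s*I shifts all eigenvalues up; subtracting pushes the small ones negative
    print('%s  min eig %+.2e' % (label, np.linalg.eigvalsh(M).min()))
```

A itself has a smallest eigenvalue at numerical zero, A + s*I is positive definite, and A - s*I (the current rbf.py sign) is indefinite.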

So the sign may be a matter of convention for kernels that increase
with distance, but not for the gaussian kernel, which decreases.

If it's a convention, then we could flip the sign to make it
correspond to the gaussian case, and the other cases would not be
strongly affected.

I will look at the reference. I haven't found much in the way of
references that treat general RBFs directly.

Josef
