

On 2012-11-21 00:55, Charles R Harris
wrote:
On Tue, Nov 20, 2012 at 4:26 PM, Virgil
Stokes <[hidden email]>
wrote:
On 2012-11-21 00:11, Charles R Harris wrote:
On Tue, Nov 20, 2012 at 3:59
PM, Virgil Stokes <[hidden email]>
wrote:
On 2012-11-20 23:43, Charles R Harris
wrote:
On Tue, Nov 20,
2012 at 3:03 PM, Virgil Stokes <[hidden email]>
wrote:
On 2012-11-20 22:33, Daπid
wrote:
> The QR decomposition is finding two matrices with certain
> properties such that:
>
> A = Q·R
>
> But, if both Q and R are multiplied by -1, (-Q)·(-R) = Q·R = A, still
> the same matrix. If Q is orthogonal, -Q is also. The sign is,
> therefore, arbitrary.
>
> On Tue, Nov 20, 2012 at 12:01
AM, Virgil Stokes < [hidden email]>
wrote:
>> I am using the latest versions of numpy (from
>> numpy-1.7.0b2-win32-superpack-python2.7.exe) and scipy (from
>> scipy-0.11.0-win32-superpack-python2.7.exe) on a Windows 7 (32-bit)
>> platform.
>>
>> I have used
>>
>> import numpy as np
>> q,r = np.linalg.qr(A)
>>
>> and compared the results
to what I get from MATLAB (R2010B)
>>
>> [q,r] = qr(A)
>>
>> The q,r returned from numpy are both the negative of the q,r returned
>> from MATLAB for the same matrix A. I believe that the q,r returned from
>> MATLAB are correct. Why am I getting their negative from numpy?
>>
>> Note, I have tried this on several different matrices - numpy always
>> gives the negative of MATLAB's.
>>
>>
_______________________________________________
>> SciPy-User mailing list
>> [hidden email]
>> http://mail.scipy.org/mailman/listinfo/scipy-user
Thanks David,
I am well aware of this; but, I am using the QR decomposition for a
covariance (PD matrix) and the negative R is not very useful in this
case. The numpy result, IMHO, should not be the default.
What is your application?
My application is the propagation of the
factorized R matrix in the Kalman filter, where
the QR factorization is for the covariance
matrix in the KF recursions.
That is what I suspected. However, the factorized
matrices are usually U^t*D*U or U^t * U, so I
think you are doing something wrong.
No Chuck,
You are referring to Bierman's factorization, which is just
one of the factorizations possible. I am using a standard
and well-documented form of the so-called "square-root"
Kalman filters (just Google on this and be enlightened).
Again, there are many papers/books that discuss the QR
factorization implementation for both the Kalman filter and
Kalman smoother.
Yes I am familiar with square root Kalman filters, I've even
written a few.
Chuck
Sorry,
It is now 01:49 in Stockholm and I need to get a few hours of sleep.
I will try to respond to further comments/questions on this topic
tomorrow morning.
Thanks for looking at this problem. I believe that this is a serious
issue that does deserve some attention.
Best, V


On 2012-11-21 01:48, Alejandro Weinstein wrote:
> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes < [hidden email]> wrote:
>> Using np.linalg.qr(A) I get the following for R (3x3) which is
>> the "square-root" of the covariance matrix:
>>
>> array([[ -1.00124922e+03, -4.99289918e+00, 0.00000000e+00],
>> [ 0.00000000e+00, -1.00033071e+02, -5.62045938e-04],
>> [ 0.00000000e+00, 0.00000000e+00, -9.98419272e-03]])
>>
>> which is clearly not PD, since its 3 eigenvalues (diagonal
>> elements) are all negative.
> But why do you expect R to be PD?
Because R*R^T = P (a covariance matrix). One important reason for
using the QR factorization in the KF is to ensure that R is always PD
during the recursions.
> The QR decomposition [1] is
>
> A = QR with Q^T Q = I and R upper triangular.
>
> [1] http://en.wikipedia.org/wiki/QR_factorization


On Tue, Nov 20, 2012 at 7:36 PM, Virgil Stokes < [hidden email]> wrote:
> On 2012-11-21 00:29, Robert Kern wrote:
>> On Tue, Nov 20, 2012 at 11:21 PM, Virgil Stokes <[hidden email]> wrote:
>>> On 2012-11-20 23:59, Robert Kern wrote:
>>>> On Tue, Nov 20, 2012 at 10:49 PM, Virgil Stokes <[hidden email]> wrote:
>>>>
>>>>> Ok Skipper,
>>>>> Unfortunately, things are worse than I had hoped, numpy sometimes
>>>>> returns the negative of the q,r and other times the same as MATLAB!
>>>>> Thus, as someone has already mentioned in this discussion, the "sign"
>>>>> seems to depend on the matrix being decomposed. This could be a
>>>>> nightmare to track down.
>>>>>
>>>>> I hope that I can return to some older versions of numpy/scipy to work
>>>>> around this problem until this problem is fixed. Any suggestions on how
>>>>> to recover earlier versions would be appreciated.
>>>> That's not going to help you. The only thing that we guarantee (or
>>>> have *ever* guaranteed) is that the result is a valid QR
>>>> decomposition. If you need to swap signs to normalize things to your
>>>> desired convention, you will need to do that as a postprocessing step.
>>> But why do I need to normalize with numpy (at least with latest
>>> release); but not with MATLAB.
>> Because MATLAB decided to do the normalization step for you. That's a
>> valid decision. And so is ours.
>>
>>> A simple question for you.
>>>
>>> In my application MATLAB generates a sequence of QR factorizations for
>>> covariance matrices in which R is always PD - which is correct!
>> That is not part of the definition of a QR decomposition. Failing to
>> meet that property does not make the QR decomposition incorrect.
>>
>> The only thing that is incorrect is passing an arbitrary, but valid,
>> QR decomposition to something that is expecting a strict *subset* of
>> valid QR decompositions.
> Sorry but I do not understand this...
> Let me give you an example that I believe illustrates the problem in numpy
>
> I have the following matrix, A:
>
> array([[ 7.07106781e+02, 5.49702852e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104659e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97555166e+00, 7.07106781e-03],
> [ 7.07106781e+02, 6.48214647e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104226e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97560687e+00, 7.07106781e-03],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])
>
> Note, this is clearly not a covariance matrix, but, it does contain a
> covariance matrix (3x3). I refer you to the paper for how this matrix
> was generated.
>
> Using np.linalg.qr(A) I get the following for R (3x3) which is
> the "square-root" of the covariance matrix:
>
> array([[ -1.00124922e+03, -4.99289918e+00, 0.00000000e+00],
> [ 0.00000000e+00, -1.00033071e+02, -5.62045938e-04],
> [ 0.00000000e+00, 0.00000000e+00, -9.98419272e-03]])
>
> which is clearly not PD, since its 3 eigenvalues (diagonal
> elements) are all negative.
>
> Now, if I use qr(A,0) in MATLAB:
>
> I get the following for R (3x3)
>
> 1001.24922, 4.99290, 0.00000
> 0.00000, 100.03307, 0.00056
> 0.00000, 0.00000, 0.00998
>
> This is obviously PD, as it should be, and gives the correct results.
> Note, it is the negative of the R obtained with numpy.
>
> I can provide other examples in which both R's obtained are the same and
> they both lead to correct results. That is, when the R's are different,
> the R obtained with MATLAB is always PD and always gives the correct end
> result, while the R with numpy is not PD and does not give the correct
> end result.
>
> I hope that this helps you to understand my problem better. If there are
> more details that you need then let me know what, please.
To recap. No one is disputing that MATLAB does a normalization that
may make *some* use cases of a QR decomposition easier. However,
*both* are correct answers. Nothing in the definition of a QR
decomposition says R has to be positive definite. If you want R
positive definite, then, as mentioned, you'll need to check this
yourself and/or make a patch to numpy's QR function to do this to make
sure you get a PD R. I happen to get a PD R in your example using
LAPACK 3.3.1. I believe you can ensure R positive definite like so
Q, R = np.linalg.qr(A)
trace_r = np.trace(R)
Q *= trace_r > 0 or -1
R *= trace_r > 0 or -1
Skipper
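Skipper's trace trick above flips the sign of both factors at once, which helps when the factorization is off by a single global sign, as in Virgil's example. A minimal sketch (my own check, not from the thread) confirming that such a wholesale flip still yields a valid QR factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((9, 3))

q, r = np.linalg.qr(a)

# Same effect as `trace_r > 0 or -1`: flip both factors when the trace is negative.
s = 1.0 if np.trace(r) > 0 else -1.0
q2, r2 = q * s, r * s

assert np.allclose(q2 @ r2, a)            # still reproduces A
assert np.allclose(q2.T @ q2, np.eye(3))  # Q's columns stay orthonormal
```

Note this only fixes the diagonal of R when all of its rows share the same wrong sign; a per-row flip is needed in general.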


On 11/20/12 6:49 PM, Virgil Stokes wrote:
> Thanks for looking at this problem. I believe that this is a serious
> issue that does deserve some attention.
I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to
get a PD R matrix, or does it just happen in the cases you checked? If
you aren't guaranteed it, then it seems very shaky to depend on it even
from matlab. Of course, if it doesn't mention it in the docs, then who
knows if it is guaranteed - matlab is closed source, so you can't just
go check the code yourself.
The way I read the docs at http://www.mathworks.com/help/matlab/ref/qr.html:
"[Q,R] = qr(A), where A is m-by-n, produces an m-by-n upper triangular
matrix R and an m-by-m unitary matrix Q so that A = Q*R."
it sounds like you shouldn't depend on R being PD even from matlab, even
if it's worked out that way in a few of your cases.
Thanks,
Jason


On Wednesday 21 Nov 2012 01:36:08 Virgil Stokes wrote:
>
> I have the following matrix, A:
>
> array([[ 7.07106781e+02, 5.49702852e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104659e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97555166e+00, 7.07106781e-03],
> [ 7.07106781e+02, 6.48214647e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104226e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97560687e+00, 7.07106781e-03],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])
>
>
> Using np.linalg.qr(A) I get the following for R (3x3) which is
> the "square-root" of the covariance matrix:
>
> array([[ -1.00124922e+03, -4.99289918e+00, 0.00000000e+00],
> [ 0.00000000e+00, -1.00033071e+02, -5.62045938e-04],
> [ 0.00000000e+00, 0.00000000e+00, -9.98419272e-03]])
>
I thought this would be down to the Lapack library you have linked against, as numpy uses Lapack's [zd]geqrf internally. I know Matlab comes distributed with Intel's MKL, so I imagine Matlab would use Intel's MKL routines internally.
Without knowing anything about the math, here are some test cases on Linux and Mac OS X. Looks like it's dependent on platform, from these results...
>>> a = np.array( ... )
>>> q, r = np.linalg.qr( a )
>>> r
**Linux**
Intel MKL:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
ATLAS Lapack:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
**Mac OS X**:
Accelerate Framework:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
ATLAS Lapack:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
Cheers,
Alex


On Tue, Nov 20, 2012 at 5:56 PM, Virgil Stokes < [hidden email]> wrote:
> On 2012-11-21 01:48, Alejandro Weinstein wrote:
>> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>>> which is clearly not PD, since its 3 eigenvalues (diagonal
>>> elements) are all negative.
>> But why do you expect R to be PD?
> Because R*R^T = P (a covariance matrix). One important reason for
> using the QR factorization in the KF is to ensure that R is always PD
> during the recursions.
As you said, P = R * R^T, which is PD, even if R is not. Please check
the definition of QR decomposition: R is _not_ required to be PD.
Looking at the paper, they in fact use P = R * R ^ T, as in eq. (1).
They never use R alone. So the fact that R is not PD should not be an
issue. Can you show your code? I'm curious to see how the fact that R
is not PD makes a difference.
Alejandro.
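Alejandro's point - that the product the filter actually uses is unchanged by the sign of R - is easy to check numerically. A small sketch (mine, not from the thread), using the tall-matrix convention where A^T A = R^T R:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((6, 3))

q, r = np.linalg.qr(a)

# The covariance implied by R is insensitive to a global sign flip of R:
assert np.allclose(r.T @ r, (-r).T @ (-r))

# And since Q has orthonormal columns, A^T A = R^T R regardless of sign choices:
assert np.allclose(a.T @ a, r.T @ r)
```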


On 21-Nov-2012 02:17, Skipper Seabold wrote:
> On Tue, Nov 20, 2012 at 7:36 PM, Virgil Stokes < [hidden email]> wrote:
>> On 2012-11-21 00:29, Robert Kern wrote:
>>> On Tue, Nov 20, 2012 at 11:21 PM, Virgil Stokes <[hidden email]> wrote:
>>>> On 2012-11-20 23:59, Robert Kern wrote:
>>>>> On Tue, Nov 20, 2012 at 10:49 PM, Virgil Stokes <[hidden email]> wrote:
>>>>>
>>>>>> Ok Skipper,
>>>>>> Unfortunately, things are worse than I had hoped, numpy sometimes
>>>>>> returns the negative of the q,r and other times the same as MATLAB!
>>>>>> Thus, as someone has already mentioned in this discussion, the "sign"
>>>>>> seems to depend on the matrix being decomposed. This could be a
>>>>>> nightmare to track down.
>>>>>>
>>>>>> I hope that I can return to some older versions of numpy/scipy to work
>>>>>> around this problem until this problem is fixed. Any suggestions on how
>>>>>> to recover earlier versions would be appreciated.
>>>>> That's not going to help you. The only thing that we guarantee (or
>>>>> have *ever* guaranteed) is that the result is a valid QR
>>>>> decomposition. If you need to swap signs to normalize things to your
>>>>> desired convention, you will need to do that as a postprocessing step.
>>>> But why do I need to normalize with numpy (at least with latest
>>>> release); but not with MATLAB.
>>> Because MATLAB decided to do the normalization step for you. That's a
>>> valid decision. And so is ours.
>>>
>>>> A simple question for you.
>>>>
>>>> In my application MATLAB generates a sequence of QR factorizations for
>>>> covariance matrices in which R is always PD - which is correct!
>>> That is not part of the definition of a QR decomposition. Failing to
>>> meet that property does not make the QR decomposition incorrect.
>>>
>>> The only thing that is incorrect is passing an arbitrary, but valid,
>>> QR decomposition to something that is expecting a strict *subset* of
>>> valid QR decompositions.
>> Sorry but I do not understand this...
>> Let me give you an example that I believe illustrates the problem in numpy
>>
>> I have the following matrix, A:
>>
>> array([[ 7.07106781e+02, 5.49702852e-04, 1.66675481e-19],
>> [ 3.53553391e+01, 7.07104659e+01, 1.66675481e-19],
>> [ 0.00000000e+00, 3.97555166e+00, 7.07106781e-03],
>> [ 7.07106781e+02, 6.48214647e-04, 1.66675481e-19],
>> [ 3.53553391e+01, 7.07104226e+01, 1.66675481e-19],
>> [ 0.00000000e+00, 3.97560687e+00, 7.07106781e-03],
>> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
>> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
>> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])
>>
>> Note, this is clearly not a covariance matrix, but, it does contain a
>> covariance matrix (3x3). I refer you to the paper for how this matrix
>> was generated.
>>
>> Using np.linalg.qr(A) I get the following for R (3x3) which is
>> the "square-root" of the covariance matrix:
>>
>> array([[ -1.00124922e+03, -4.99289918e+00, 0.00000000e+00],
>> [ 0.00000000e+00, -1.00033071e+02, -5.62045938e-04],
>> [ 0.00000000e+00, 0.00000000e+00, -9.98419272e-03]])
>>
>> which is clearly not PD, since its 3 eigenvalues (diagonal
>> elements) are all negative.
>>
>> Now, if I use qr(A,0) in MATLAB:
>>
>> I get the following for R (3x3)
>>
>> 1001.24922, 4.99290, 0.00000
>> 0.00000, 100.03307, 0.00056
>> 0.00000, 0.00000, 0.00998
>>
>> This is obviously PD, as it should be, and gives the correct results.
>> Note, it is the negative of the R obtained with numpy.
>>
>> I can provide other examples in which both R's obtained are the same and
>> they both lead to correct results. That is, when the R's are different,
>> the R obtained with MATLAB is always PD and always gives the correct end
>> result, while the R with numpy is not PD and does not give the correct
>> end result.
>>
>> I hope that this helps you to understand my problem better. If there are
>> more details that you need then let me know what, please.
> To recap. No one is disputing that MATLAB does a normalization that
> may make *some* use cases of a QR decomposition easier. However,
> *both* are correct answers. Nothing in the definition of a QR
> decomposition says R has to be positive definite. If you want R
> positive definite, then, as mentioned, you'll need to check this
> yourself and/or make a patch to numpy's QR function to do this to make
> sure you get a PD R. I happen to get a PD R in your example using
> LAPACK 3.3.1. I believe you can ensure R positive definite like so
>
> Q, R = np.linalg.qr(A)
> trace_r = np.trace(R)
> Q *= trace_r > 0 or -1
> R *= trace_r > 0 or -1
Yes, I agree with you. And your method works fine. In my particular case (with
my current installation), it seems just checking R[0,0] > 0 can also be used.
How can I force the use of LAPACK 3.3.1?


On 21-Nov-2012 02:29, Jason Grout wrote:
> On 11/20/12 6:49 PM, Virgil Stokes wrote:
>
>> Thanks for looking at this problem. I believe that this is a serious
>> issue that does deserve some attention.
> I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to
> get a PD R matrix, or does it just happen in the cases you checked? If
> you aren't guaranteed it, then it seems very shaky to depend on it even
> from matlab. Of course, if it doesn't mention it in the docs, then who
> knows if it is guaranteed - matlab is closed source, so you can't just
> go check the code yourself.
>
> The way I read the docs at http://www.mathworks.com/help/matlab/ref/qr.html:
>
> "[Q,R] = qr(A), where A is m-by-n, produces an m-by-n upper triangular
> matrix R and an m-by-m unitary matrix Q so that A = Q*R."
>
> it sounds like you shouldn't depend on R being PD even from matlab, even
> if it's worked out that way in a few of your cases.
Of course you are correct Jason, and I have not checked every QR decomposition
in MATLAB and numpy. However, for my particular application I do know that the R
returned from MATLAB always has diagonal elements > 0, while this is not the
case for my installation of numpy.
>
> Thanks,
>
> Jason
>


On 21-Nov-2012 02:30, Alex Leach wrote:
On Wednesday 21 Nov 2012 01:36:08 Virgil Stokes wrote:
>
> I have the following matrix, A:
>
> array([[ 7.07106781e+02, 5.49702852e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104659e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97555166e+00, 7.07106781e-03],
> [ 7.07106781e+02, 6.48214647e-04, 1.66675481e-19],
> [ 3.53553391e+01, 7.07104226e+01, 1.66675481e-19],
> [ 0.00000000e+00, 3.97560687e+00, 7.07106781e-03],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
> [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])
>
>
> Using np.linalg.qr(A) I get the following for R (3x3) which is
> the "square-root" of the covariance matrix:
>
> array([[ -1.00124922e+03, -4.99289918e+00, 0.00000000e+00],
> [ 0.00000000e+00, -1.00033071e+02, -5.62045938e-04],
> [ 0.00000000e+00, 0.00000000e+00, -9.98419272e-03]])
>
>
I thought this would be down to the Lapack library you have linked
against, as numpy uses Lapack's [zd]geqrf internally. I know Matlab
comes distributed with Intel's MKL, so I imagine Matlab would use
Intel's MKL routines internally.
Without knowing anything about the math, here are some test cases on
Linux and Mac OS X. Looks like it's dependent on platform, from these
results...
>>> a = np.array( ... )
>>> q, r = np.linalg.qr( a )
>>> r
**Linux**
Intel MKL:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
ATLAS Lapack:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
**Mac OS X**:
Accelerate Framework:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
ATLAS Lapack:
array([[ 1.00124922e+03, 4.99289919e+00, 2.40741243e-35],
[ 0.00000000e+00, 1.00033071e+02, 5.62045938e-04],
[ 0.00000000e+00, 0.00000000e+00, 9.98419272e-03]])
Cheers,
Alex
Very interesting Alex - Guess I should get a Mac :)
How can I force the use of the Lapack (on my Win32 system) that is
being used by the Mac OS X?


Short answer: you shouldn't! This time it will work, next time it won't. You must never depend on an implementation detail, as is the case here. Add a post-processing step, and it will be all right on all platforms and also in all languages.
Cheers,


On 21-Nov-2012 03:47, Alejandro Weinstein wrote:
On Tue, Nov 20, 2012 at 5:56 PM, Virgil Stokes <[hidden email]> wrote:
On 2012-11-21 01:48, Alejandro Weinstein wrote:
On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
which is clearly not PD, since its 3 eigenvalues (diagonal
elements) are all negative.
But why do you expect R to be PD?
Because R*R^T = P (a covariance matrix). One important reason for
using the QR factorization in the KF is to ensure that R is always PD
during the recursions.
As you said, P = R * R^T, which is PD, even if R is not. Please check
the definition of QR decomposition: R is _not_ required to be PD.
Yes, again Alejandro, I know this - my statement was really not
correct (sloppy on my part).
Looking at the paper, they in fact use P = R * R ^ T, as in eq. (1).
They never use R alone. So the fact that R is not PD should not be an
issue.
But, again, it is an issue for the algorithm given in Table 3 (p.
2248 of the paper). Look at step 8. and equation (30). As stated in
this step, "The square-root of the filtered state-error covariance"
is returned as <tblatex1.png> from
step 5., where a QR decomposition is performed for the
triangularization. The <tblatex2.png>
matrix must have diagonal elements > 0 (I leave this to you to
think about). When I perform step 5. with MATLAB, I always
get a <tblatex1.png> that
has diagonal elements > 0 for my application (which is
satisfying). When I perform step 5. with numpy, for the same matrix
on the RHS of (27) I do not always get diagonal elements
> 0 for <tblatex1.png> -
herein lies the problem that I have been trying to explain.
Can you show your code? I'm curious to see how the fact that R
is not PD makes a difference.
You should be able to write a few lines of code to test this
yourself. If this problem does not occur on your system then please
tell me what version of Scipy/numpy you have installed and on what
platform.
_______________________________________________
SciPyUser mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipyuser


On Wed, Nov 21, 2012 at 9:34 AM, Virgil Stokes <[hidden email]> wrote:
When I perform step 5. with MATLAB, I always get a <tblatex1.png> that
has diagonal elements > 0 for my application (which is
satisfying). When I perform step 5. with numpy, for the same matrix
on the RHS of (27) I do not always get diagonal elements
> 0 for <tblatex1.png> -
herein lies the problem that I have been trying to explain.

http://www.mathworks.es/es/help/matlab/ref/qr.html

There is no reference to the positiveness of r, nor is it required by the definition of QR decomposition, therefore it is wrong to assume it is true. It may hold in your current examples, but you have no guarantee that you won't find another A matrix where it does not hold. Also, the portability of your code is tricky, as another person, with a different version of MATLAB, could get different results, even if it is not documented (because the change would not affect the API).
The same holds true for Mathematica:

http://reference.wolfram.com/mathematica/ref/QRDecomposition.html

And even, in one of the examples, the matrix r is negative.
If you want to make sure, just use a qr function that checks for the sign and reverts it, if necessary.
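A sketch of such a wrapper (my own; the name `qr_pos` is illustrative, not an existing API). Unlike a wholesale sign flip, this flips each row of R and the matching column of Q individually, so diag(R) is non-negative even when only some rows come out negative:

```python
import numpy as np

def qr_pos(a):
    """QR decomposition post-processed so that diag(R) >= 0."""
    q, r = np.linalg.qr(a)
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0      # leave exactly-zero diagonal entries alone
    q = q * signs                # flip the matching columns of Q ...
    r = signs[:, None] * r       # ... and the corresponding rows of R
    return q, r

a = np.random.default_rng(2).standard_normal((9, 3))
q, r = qr_pos(a)
assert np.allclose(q @ r, a)     # still a valid factorization
assert np.all(np.diag(r) >= 0)   # R now has a non-negative diagonal
```

Since Q*D*D*R = Q*R for any diagonal D with entries ±1, the product is unchanged and Q's columns remain orthonormal.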


But, again, it is an issue for the algorithm given in Table 3 (p.
2248 of the paper). Look at step 8. and equation (30). As stated in
this step, "The square-root of the filtered state-error covariance"
is returned as <tblatex1.png> from
step 5. where a QR decomposition is performed for the
triangularization. The <tblatex2.png>
matrix must have diagonal elements > 0 (I leave this to you to
think about). When I perform step 5. with MATLAB, I always
get a <tblatex1.png> that
has diagonal elements > 0 for my application (which is
satisfying). When I perform step 5. with numpy, for the same matrix
on the RHS of (27) I do not always get diagonal elements
> 0 for <tblatex1.png> -
herein lies the problem that I have been trying to explain.
There is nothing in the definition of QR that says R must be positive definite.
NumPy will give you a valid QR factorization computed with LAPACK, and so will Matlab. You might blame it on the LAPACK version, or Matlab might do an undocumented normalization as post-processing.
Matlab is not a standard for the "right answer" in linear algebra.
Any numerical code requiring R to be PD is erroneous, even in Matlab.
Then again, it is not rocket science to post-process Q and R so that R is PD.
If you need to decompose a covariance matrix P = R*R' with R PD, it sounds like you are using QR as a Cholesky factorization. If you want a Cholesky factor of P, you know where to find it. (Not that I recommend using it.)
Sturla
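Sturla's suggestion - if the algorithm needs a triangular square root of P with a positive diagonal, compute one directly - can be sketched like this (my example, not from the thread; `np.linalg.cholesky` returns the lower-triangular factor):

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.standard_normal((6, 3))
p = b.T @ b + 3.0 * np.eye(3)  # a positive definite, covariance-like matrix

l = np.linalg.cholesky(p)      # lower triangular, strictly positive diagonal

assert np.all(np.diag(l) > 0)
assert np.allclose(l @ l.T, p)
```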


On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
But, again, it is an issue for the algorithm given in Table 3 (p.
2248 of paper). Look at step 8. and equation (30).
I see no Table 3, nor page 2248 in the paper you attached. Which paper are you talking about?
As stated in
this step, "The square-root of the filtered state-error covariance"
is returned as <tblatex1.png> from
step 5. where a QR decomposition is performed for the
triangularization. The <tblatex2.png>
matrix must have diagonal elements > 0 (I leave this to you to
think about). When I perform step 5. with MATLAB, I always
get a <tblatex1.png> that
has diagonal elements > 0 for my application (which is
satisfying). When I perform step 5. with numpy, for the same matrix
on the RHS of (27) I do not always get diagonal elements
> 0 for <tblatex1.png> -
herein lies the problem that I have been trying to explain.
Anyway, I would say that an algorithm that depends on the sign of the entries of R is not correct.
Alejandro.


On 21.11.2012 14:22, Alejandro Weinstein wrote:
> On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
>
> But, again, it is an issue for the algorithm given in Table 3 (p.
> 2248 of paper). Look at step 8. and equation (30).
>
>
> I see no Table 3, nor page 2248 in the paper you attached. Which paper
> are you talking about?
I think what he means is that in equation 22 (page 2591) S must be PD,
and thus R = S' must be PD as well. Which (AFAIK) must be an error in
the paper. Too bad referees didn't catch it.
They should have used Cholesky of P as S. So instead of the second
equation in section IV which states that S = R', equation (6) should
Cholesky factorize P to obtain S. And thus S would be PD for equation
22. In conclusion, there is an error in the paper which was silenced by
Matlab's QR.
Sturla


On 21-Nov-2012 14:22, Alejandro Weinstein wrote:
On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]>
wrote:
But, again, it is an issue for the algorithm given in
Table 3 (p. 2248 of paper). Look at step 8. and
equation (30).
I see no Table 3, nor page 2248 in the paper you attached.
Which paper are you talking about?
As stated in this step, "The square-root of the
filtered state-error covariance" is returned as <tblatex1.png> from
step 5. where a QR decomposition is performed for the
triangularization. The <tblatex2.png>
matrix must have diagonal elements > 0 (I leave this
to you to think about). When I perform step 5. with
MATLAB, I always get a <tblatex1.png> that
has diagonal elements > 0 for my application (which
is satisfying). When I perform step 5. with numpy, for
the same matrix on the RHS of (27) I do not always
get diagonal elements > 0 for <tblatex1.png> -
herein lies the problem that I have been trying to
explain.
Anyway, I would say that an algorithm that depends on the
sign of the entries of R is not correct.
Alejandro.
I am referring to the following paper:
Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers. Automatica
47, pp. 2245-2250.
Sorry about the confusion


On 21-Nov-2012 15:15, Sturla Molden wrote:
On 21.11.2012 14:22, Alejandro Weinstein wrote:
On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
But, again, it is an issue for the algorithm given in Table 3 (p.
2248 of paper). Look at step 8. and equation (30).
I see no Table 3, nor page 2248 in the paper you attached. Which paper
are you talking about?
I think what he means is that in equation 22 (page 2591) S must be PD,
and thus R = S' must be PD as well. Which (AFAIK) must be an error in
the paper. Too bad referees didn't catch it.
They should have used Cholesky of P as S. So instead of the second
equation in section IV which states that S = R', equation (6) should
Cholesky factorize P to obtain S. And thus S would be PD for equation
22. In conclusion, there is an error in the paper which was silenced by
Matlab's QR.
Sturla
I am referring to the following paper:
Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers. Automatica
47, pp. 2245-2250.
Sorry about the confusion :[


On 21.11.2012 16:01, Virgil Stokes wrote:
> I am referring to the following paper:
> Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
> /Automatica/ *47*, pp. 2245-2250.
Not the one you attached, and sorry I don't have it.
Sturla


On 21.11.2012 16:14, Sturla Molden wrote:
> On 21.11.2012 16:01, Virgil Stokes wrote:
>
>> I am referring to the following paper:
>> Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
>> /Automatica/ *47*, pp. 2245-2250.
>
> Not the one you attached, and sorry I don't have it.
Ok, found it.
It basically has the same error as I pointed out.
It needs a PD lower triangular matrix, which is not returned by QR but
Cholesky.
Sturla


Virgil Stokes <vs <at> it.uu.se> writes:
[clip]
> Now, if I use qr(A,0) in MATLAB:
>
> I get the following for R (3x3)
>
> 1001.24922, 4.99290, 0.00000
> 0.00000, 100.03307, 0.00056
> 0.00000, 0.00000, 0.00998
>
> This is obviously PD, as it should be, and gives the correct results.
> Note, it is the negative of the R obtained with numpy.
Case in point:
MATLAB R2011b, Linux x86_64
>> A=[707.106781, 0.000549702852, 1.66675481e-19;35.3553391, 70.7104659,
1.66675481e-19;0.0, 3.97555166, 0.00707106781;707.106781, 0.000648214647,
1.66675481e-19;35.3553391, 70.7104226, 1.66675481e-19;0.0, 3.97560687,
0.00707106781;0.0, 0.0, 0.0;0.0, 0.0, 0.0;0.0, 0.0, 0.0]
>> [Q,R] = qr(A, 0);
>> R
R =
1.0e+03 *
-1.001249219464423 -0.004992899186307 0.000000000000000
0 -0.100033070889029 -0.000000562045938
0 0 -0.000009984192722
The statements
"MATLAB gives always positive definite R"
"MATLAB does postprocessing in QR"
are both false. Indeed, any "normalization" of the result is
not mentioned in the documentation.
You simply cannot assume that the returned R is PD, because
what you actually get ultimately depends on the underlying
LAPACK library.

Pauli Virtanen

