# qr decomposition gives negative q, r?
## Re: qr decomposition gives negative q, r?

On 2012-11-20 22:33, Daπid wrote:
> The QR decomposition finds two matrices with certain properties such
> that:
>
>     A = Q·R
>
> But if both Q and R are multiplied by -1, then (-Q)·(-R) = Q·R = A is
> still the same matrix. If Q is orthogonal, -Q is also orthogonal. The
> sign is, therefore, arbitrary.
>
> On Tue, Nov 20, 2012 at 12:01 AM, Virgil Stokes <[hidden email]> wrote:
>> I am using the latest versions of numpy (from
>> numpy-1.7.0b2-win32-superpack-python2.7.exe) and scipy (from
>> scipy-0.11.0-win32-superpack-python2.7.exe) on a Windows 7 (32-bit)
>> platform.
>>
>> I have used
>>
>>     import numpy as np
>>     q, r = np.linalg.qr(A)
>>
>> and compared the results to what I get from MATLAB (R2010B):
>>
>>     [q, r] = qr(A)
>>
>> The q, r returned from numpy are both the negative of the q, r
>> returned from MATLAB for the same matrix A. I believe that the q, r
>> returned from MATLAB are correct. Why am I getting their negative
>> from numpy?
>>
>> Note, I have tried this on several different matrices --- numpy
>> always gives the negative of MATLAB's.

On Tue, Nov 20, 2012 at 3:03 PM, Virgil Stokes replied:

Thanks David, I am well aware of this; but I am using the QR decomposition
for a covariance (PD) matrix, and the negative R is not very useful in this
case. The numpy result, IMHO, should not be the default.

On 2012-11-20 23:43, Charles R Harris replied:

What is your application?
[Virgil Stokes:]
My application is the propagation of the factorized R matrix in the Kalman
filter, where the QR factorization is of the covariance matrix in the KF
recursions.

[Charles R Harris:]
That is what I suspected. However, the factorized matrices are usually
U^T*D*U or U^T*U, so I think you are doing something wrong.

[Virgil Stokes:]
No Chuck, you are referring to Bierman's factorization, which is just one
of the factorizations possible. I am using a standard and well-documented
form of the so-called "square-root" Kalman filters (just Google this and be
enlightened). Again, there are many papers/books that discuss the QR
factorization implementation for both the Kalman filter and Kalman
smoother.

[Charles R Harris:]
Yes, I am familiar with square-root Kalman filters; I've even written a
few.

Chuck

[Virgil Stokes:]
Sorry, it is now 01:49 in Stockholm and I need to get a few hours of sleep.
I will try to respond to further comments/questions on this topic tomorrow
morning. Thanks for looking at this problem. I believe that this is a
serious issue that does deserve some attention.

Best,
V

_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
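Daπid's point about the arbitrary sign can be checked directly in numpy. A
minimal sketch, using a made-up 2x2 matrix (not one from the thread):

```python
import numpy as np

# If A = Q @ R, then A = (-Q) @ (-R) as well, and -Q is still
# orthogonal, so both are equally valid QR factorizations of A.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

q, r = np.linalg.qr(A)

# Both sign choices reconstruct A to machine precision.
assert np.allclose(q @ r, A)
assert np.allclose((-q) @ (-r), A)

# -Q is orthogonal too: (-Q)^T (-Q) = Q^T Q = I.
assert np.allclose((-q).T @ (-q), np.eye(2))
```

Nothing in the definition of the factorization prefers one sign over the
other; which one a given LAPACK routine returns is an implementation
detail.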
## Re: qr decomposition gives negative q, r?

In reply to this post by Alejandro Weinstein-3

On 2012-11-21 01:48, Alejandro Weinstein wrote:
> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>> Using np.linalg.qr(A) I get the following for R (3x3), which is the
>> "square-root" of the covariance matrix:
>>
>>     array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>>            [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>>            [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])
>>
>> which is clearly not PD, since its 3 eigenvalues (the diagonal
>> elements) are all negative.
>
> But why do you expect R to be PD? The QR decomposition is
>
>     A = QR with Q^T Q = I and R upper triangular.
>
> http://en.wikipedia.org/wiki/QR_factorization

Because R*R^T = P (a covariance matrix). One important reason for using the
QR factorization in the KF is to ensure that R is always PD during the
recursions.
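Whichever sign the backend picks, the covariance product is unaffected: for
any QR factorization A = QR, we have A^T A = R^T Q^T Q R = R^T R, and
flipping the sign of a row of R (together with the matching column of Q)
leaves that product unchanged. A short sketch, with a random matrix for
illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((9, 3))

q, r = np.linalg.qr(A)

# The covariance-like product depends only on A, not on the sign
# convention of the factorization.
assert np.allclose(A.T @ A, r.T @ r)

# Flip some rows of R and the matching columns of Q: the factorization
# is still valid and the product R^T R is identical.
D = np.diag([-1.0, 1.0, -1.0])
q_flipped = q @ D
r_flipped = D @ r
assert np.allclose(q_flipped @ r_flipped, A)
assert np.allclose(r_flipped.T @ r_flipped, r.T @ r)
```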
## Re: qr decomposition gives negative q, r?

In reply to this post by Virgil Stokes

On 11/20/12 6:49 PM, Virgil Stokes wrote:
> Thanks for looking at this problem. I believe that this is a serious
> issue that does deserve some attention.

I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to get
a PD R matrix, or does it just happen in the cases you checked? If you
aren't guaranteed it, then it seems very shaky to depend on it even from
matlab. Of course, if it doesn't mention it in the docs, then who knows if
it is guaranteed---matlab is closed source, so you can't just go check the
code yourself.

The way I read the docs at http://www.mathworks.com/help/matlab/ref/qr.html:

"[Q,R] = qr(A), where A is m-by-n, produces an m-by-n upper triangular
matrix R and an m-by-m unitary matrix Q so that A = Q*R."

it sounds like you shouldn't depend on R being PD even from matlab, even if
it's worked out that way in a few of your cases.

Thanks,

Jason
## Re: qr decomposition gives negative q, r?

In reply to this post by Virgil Stokes

On Wednesday 21 Nov 2012 01:36:08 Virgil Stokes wrote:
> I have the following matrix, A:
>
>     array([[  7.07106781e+02,   5.49702852e-04,   1.66675481e-19],
>            [ -3.53553391e+01,   7.07104659e+01,   1.66675481e-19],
>            [  0.00000000e+00,  -3.97555166e+00,   7.07106781e-03],
>            [ -7.07106781e+02,  -6.48214647e-04,   1.66675481e-19],
>            [  3.53553391e+01,  -7.07104226e+01,   1.66675481e-19],
>            [  0.00000000e+00,   3.97560687e+00,  -7.07106781e-03],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])
>
> Using np.linalg.qr(A) I get the following for R (3x3), which is the
> "square-root" of the covariance matrix:
>
>     array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>            [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>            [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

I thought this would be down to the LAPACK library you have linked against,
as numpy uses LAPACK's [zd]geqrf internally. I know Matlab comes
distributed with Intel's MKL, so I imagine Matlab would use Intel's MKL
routines internally.

Without knowing anything about the math, here are some test cases on Linux
and Mac OS X. Looks like it's dependent on platform, from these results...

    >>> a = np.array(...)  # the matrix A above
    >>> q, r = np.linalg.qr(a)
    >>> r

**Linux**

Intel MKL:

    array([[ -1.00124922e+03,   4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

ATLAS LAPACK:

    array([[ -1.00124922e+03,   4.99289919e+00,  -2.40741243e-35],
           [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

**Mac OS X**

Accelerate Framework:

    array([[  1.00124922e+03,  -4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,   1.00033071e+02,  -5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,   9.98419272e-03]])

ATLAS LAPACK:

    array([[  1.00124922e+03,  -4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,   1.00033071e+02,  -5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,   9.98419272e-03]])

Cheers,
Alex
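As a quick check that the platform-dependent results agree up to an overall
sign, one can compare the two R matrices numerically (values copied from
the outputs above; the tiny third-column entries are noise at the 1e-35
level):

```python
import numpy as np

# R as returned on Linux (MKL) and on Mac OS X (Accelerate), from the
# benchmark outputs quoted above.
r_linux = np.array([[-1.00124922e+03,  4.99289919e+00,  2.40741243e-35],
                    [ 0.00000000e+00, -1.00033071e+02,  5.62045938e-04],
                    [ 0.00000000e+00,  0.00000000e+00, -9.98419272e-03]])
r_mac = np.array([[ 1.00124922e+03, -4.99289919e+00,  2.40741243e-35],
                  [ 0.00000000e+00,  1.00033071e+02, -5.62045938e-04],
                  [ 0.00000000e+00,  0.00000000e+00,  9.98419272e-03]])

# The two backends differ only by an overall sign flip, so both are
# valid QR factors of the same matrix.
assert np.allclose(-r_linux, r_mac)
```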
## Re: qr decomposition gives negative q, r?

In reply to this post by Virgil Stokes

On Tue, Nov 20, 2012 at 5:56 PM, Virgil Stokes <[hidden email]> wrote:
> On 2012-11-21 01:48, Alejandro Weinstein wrote:
>> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>>> which is clearly not PD, since its 3 eigenvalues (the diagonal
>>> elements) are all negative.
>> But why do you expect R to be PD?
> Because R*R^T = P (a covariance matrix). One important reason for
> using the QR factorization in the KF is to ensure that R is always PD
> during the recursions.

As you said, P = R * R^T, which is PD, even if R is not. Please check the
definition of the QR decomposition: R is _not_ required to be PD.

Looking at the paper, they in fact use P = R * R^T, as in eq. (1). They
never use R alone. So the fact that R is not PD should not be an issue.

Can you show your code? I'm curious to see how the fact that R is not PD
makes a difference.

Alejandro.
## Re: qr decomposition gives negative q, r?

In reply to this post by Jason Grout-5

On 21-Nov-2012 02:29, Jason Grout wrote:
> I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to
> get a PD R matrix, or does it just happen in the cases you checked? [...]
> it sounds like you shouldn't depend on R being PD even from matlab,
> even if it's worked out that way in a few of your cases.

Of course you are correct, Jason, and I have not checked every QR
decomposition in MATLAB and numpy. However, for my particular application
I do know that the R returned from MATLAB always has diagonal elements > 0,
while this is not the case for my installation of numpy.
## Re: qr decomposition gives negative q, r?

In reply to this post by Alex Leach

On 21-Nov-2012 02:30, Alex Leach wrote:
> I thought this would be down to the LAPACK library you have linked
> against, as numpy uses LAPACK's [zd]geqrf internally. [...]
> Looks like it's dependent on platform, from these results...
> [Linux (MKL, ATLAS): negative diagonal; Mac OS X (Accelerate, ATLAS):
> positive diagonal]

Very interesting, Alex --- guess I should get a Mac :-)

How can I force the use of the LAPACK (on my Win32 system) that is being
used on Mac OS X?
## Re: qr decomposition gives negative q, r?

Short answer: you shouldn't! This time it will work; next time it won't.
You must never depend on an implementation detail, as is the case here. Add
a post-processing step, and it will be all right on all platforms and also
in all languages.

Cheers,

2012/11/21 Virgil Stokes:
> Very interesting, Alex --- guess I should get a Mac :-)
>
> How can I force the use of the LAPACK (on my Win32 system) that is
> being used on Mac OS X?

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
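The post-processing step suggested here can be sketched in a few lines: fix
the sign convention so that diag(R) >= 0 by flipping rows of R and the
matching columns of Q. The function name `qr_positive_diag` is made up for
illustration; it is not a numpy API.

```python
import numpy as np

def qr_positive_diag(a):
    """QR decomposition post-processed so that diag(R) >= 0.

    Flips the sign of each row of R (and the matching column of Q)
    wherever the diagonal entry of R is negative. Q stays orthogonal
    and Q @ R still equals a, but the result no longer depends on the
    LAPACK backend's sign choice.
    """
    q, r = np.linalg.qr(a)
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0        # leave exact-zero pivots untouched
    q = q * signs                  # flip columns of Q
    r = r * signs[:, None]         # flip matching rows of R
    return q, r

# Usage: a random matrix stands in for the covariance pre-array.
rng = np.random.default_rng(42)
A = rng.standard_normal((9, 3))
q, r = qr_positive_diag(A)

assert np.allclose(q @ r, A)           # still a factorization of A
assert np.all(np.diag(r) >= 0)         # sign convention now fixed
assert np.allclose(q.T @ q, np.eye(3)) # Q still orthonormal
```

Because the flips come in matched pairs, (Q D)(D R) = Q R for the diagonal
sign matrix D with D² = I, so correctness is unaffected.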
## Re: qr decomposition gives negative q, r?

In reply to this post by Alejandro Weinstein-3

On 21-Nov-2012 03:47, Alejandro Weinstein wrote:
> As you said, P = R * R^T, which is PD, even if R is not. Please check
> the definition of the QR decomposition: R is _not_ required to be PD.

Yes, again, Alejandro, I know this --- my statement was really not correct
(sloppy on my part).

> Looking at the paper, they in fact use P = R * R^T, as in eq. (1).
> They never use R alone. So the fact that R is not PD should not be an
> issue.

But, again, it is an issue for the algorithm given in Table 3 (p. 2248 of
the paper). Look at step 8 and equation (30). As stated in this step, "the
square-root of the filtered state-error covariance" is returned from step
5, where a QR decomposition is performed for the triangularization. The
matrix must have diagonal elements > 0 (I leave this to you to think
about).

When I perform step 5 with MATLAB, I always get a result that has diagonal
elements > 0 for my application (which is satisfying). When I perform step
5 with numpy, for the same matrix on the RHS of (27), I do not always get
diagonal elements > 0 --- herein lies the problem that I have been trying
to explain.

> Can you show your code? I'm curious to see how the fact that R is not
> PD makes a difference.

You should be able to write a few lines of code to test this yourself. If
this problem does not occur on your system, then please tell me what
version of Scipy/numpy you have installed and on what platform.
## Re: qr decomposition gives negative q, r?

On Wed, Nov 21, 2012 at 9:34 AM, Virgil Stokes wrote:
> When I perform step 5 with MATLAB, I always get a result that has
> diagonal elements > 0 for my application (which is satisfying). When I
> perform step 5 with numpy, for the same matrix on the RHS of (27), I
> do not always get diagonal elements > 0 --- herein lies the problem
> that I have been trying to explain.

http://www.mathworks.es/es/help/matlab/ref/qr.html

There is no reference to the positiveness of r, nor is it required by the
definition of the QR decomposition; therefore it is wrong to assume it is
true. It may hold in your current examples, but you have no guarantee that
you won't find another matrix A for which it fails. Also, the portability
of your code is tricky, as another person with a different version of
MATLAB could get different results, even though this is not documented
(because the change would not affect the API).

The same holds true for Mathematica:

http://reference.wolfram.com/mathematica/ref/QRDecomposition.html

And even there, in one of the examples, the matrix r is negative. If you
want to make sure, just use a qr function that checks the sign and reverts
it, if necessary.
## Re: qr decomposition gives negative q, r?

In reply to this post by Virgil Stokes

On 21 Nov 2012 at 09:34, Virgil Stokes wrote:
> When I perform step 5 with MATLAB, I always get a result that has
> diagonal elements > 0 for my application (which is satisfying). When I
> perform step 5 with numpy, for the same matrix on the RHS of (27), I
> do not always get diagonal elements > 0 --- herein lies the problem
> that I have been trying to explain.

There is nothing in the definition of QR that says R must be positive
definite. NumPy will give you a valid QR factorization computed with
LAPACK, and so will Matlab. You might blame it on the LAPACK version, or
Matlab might do an undocumented normalization as post-processing.

Matlab is not a standard for the "right answer" in linear algebra. Any
numerical code requiring R to be PD is erroneous, even in Matlab. Then
again, it is not rocket science to post-process Q and R so that R is PD.

If you need to decompose a covariance matrix P = R*R' with R PD, it sounds
like you are using QR as a Cholesky factorization. If you want a Cholesky
factor of P, you know where to find it. (Not that I recommend using it.)

Sturla
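Sturla's alternative is worth spelling out: if what the filter actually
needs is a triangular factor of the covariance P with a strictly positive
diagonal, the Cholesky factorization provides exactly that by construction,
with no sign ambiguity. A sketch, assuming P is PD (the random matrix below
is illustrative, not from the thread):

```python
import numpy as np

# Build a PD covariance-like matrix P = A^T A from a full-column-rank A.
rng = np.random.default_rng(1)
A = rng.standard_normal((9, 3))
P = A.T @ A

# Cholesky gives a lower-triangular L with P = L @ L.T and diag(L) > 0,
# by construction; no backend-dependent sign choice is involved.
L = np.linalg.cholesky(P)

assert np.allclose(L @ L.T, P)
assert np.all(np.diag(L) > 0)
```

Note that forming P explicitly squares the condition number, which is
precisely what QR-based square-root filters avoid; Sturla's parenthetical
"not that I recommend using it" points at this trade-off.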
## Re: qr decomposition gives negative q, r?

In reply to this post by Virgil Stokes

On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes wrote:
> But, again, it is an issue for the algorithm given in Table 3 (p. 2248
> of the paper). Look at step 8 and equation (30).

I see no Table 3, nor page 2248, in the paper you attached. Which paper are
you talking about?

> As stated in this step, "the square-root of the filtered state-error
> covariance" is returned from step 5, where a QR decomposition is
> performed for the triangularization. The matrix must have diagonal
> elements > 0 (I leave this to you to think about). When I perform step
> 5 with MATLAB, I always get a result that has diagonal elements > 0
> for my application (which is satisfying). When I perform step 5 with
> numpy, for the same matrix on the RHS of (27), I do not always get
> diagonal elements > 0 --- herein lies the problem that I have been
> trying to explain.

Anyway, I would say that an algorithm that depends on the sign of the
entries of R is not correct.

Alejandro.
## Re: qr decomposition gives negative q, r?

On 21.11.2012 14:22, Alejandro Weinstein wrote:
> On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
>> But, again, it is an issue for the algorithm given in Table 3 (p.
>> 2248 of the paper). Look at step 8 and equation (30).
>
> I see no Table 3, nor page 2248, in the paper you attached. Which
> paper are you talking about?

I think what he means is that in equation 22 (page 2591) S must be PD, and
thus R = S' must be PD as well. Which (AFAIK) must be an error in the
paper. Too bad the referees didn't catch it.

They should have used the Cholesky factor of P as S. So instead of the
second equation in section IV, which states that S = R', equation (6)
should Cholesky factorize P to obtain S. And thus S would be PD for
equation 22.

In conclusion, there is an error in the paper which was silenced by
Matlab's QR.

Sturla
## Re: qr decomposition gives negative q, r?

In reply to this post by Alejandro Weinstein-3

On 21-Nov-2012 14:22, Alejandro Weinstein wrote:
> I see no Table 3, nor page 2248, in the paper you attached. Which
> paper are you talking about?

I am referring to the following paper:

Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
Automatica 47, pp. 2245-2250.

Sorry about the confusion.
## Re: qr decomposition gives negative q, r?

In reply to this post by Sturla Molden-2

On 21-Nov-2012 15:15, Sturla Molden wrote:
> I think what he means is that in equation 22 (page 2591) S must be PD,
> and thus R = S' must be PD as well. Which (AFAIK) must be an error in
> the paper. [...]
> In conclusion, there is an error in the paper which was silenced by
> Matlab's QR.

I am referring to the following paper:

Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
Automatica 47, pp. 2245-2250.

Sorry about the confusion :-[
## Re: qr decomposition gives negative q, r?

On 21.11.2012 16:01, Virgil Stokes wrote:
> I am referring to the following paper:
> Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
> Automatica 47, pp. 2245-2250.

Not the one you attached, and sorry, I don't have it.

Sturla
## Re: qr decomposition gives negative q, r?

On 21.11.2012 16:14, Sturla Molden wrote:
> On 21.11.2012 16:01, Virgil Stokes wrote:
>> I am referring to the following paper:
>> Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
>> Automatica 47, pp. 2245-2250.
>
> Not the one you attached, and sorry, I don't have it.

Ok, found it. It basically has the same error as I pointed out. It needs a
PD lower triangular matrix, which is not returned by QR but by Cholesky.

Sturla