I am using the latest versions of numpy (from numpy-1.7.0b2-win32-superpack-python2.7.exe) and scipy (from scipy-0.11.0-win32-superpack-python2.7.exe) on a Windows 7 (32-bit) platform.

I have used

    import numpy as np
    q, r = np.linalg.qr(A)

and compared the results to what I get from MATLAB (R2010B):

    [q, r] = qr(A)

The q, r returned from numpy are both the negative of the q, r returned from MATLAB for the same matrix A. I believe that the q, r returned from MATLAB are correct. Why am I getting their negative from numpy?

Note, I have tried this on several different matrices --- numpy always gives the negative of MATLAB's.

_______________________________________________
SciPy-User mailing list
[hidden email]
http://mail.scipy.org/mailman/listinfo/scipy-user
The QR decomposition finds two matrices with certain properties such that:

    A = Q·R

But if both Q and R are multiplied by -1, then (-Q)·(-R) = Q·R = A, still the same matrix. If Q is orthogonal, -Q is also. The sign is, therefore, arbitrary.

On Tue, Nov 20, 2012 at 12:01 AM, Virgil Stokes <[hidden email]> wrote:
<snip>
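This ambiguity is easy to check numerically; a minimal sketch (the matrix A here is arbitrary, chosen only for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Q, R = np.linalg.qr(A)

# (Q, R) and (-Q, -R) are both valid QR decompositions of A:
assert np.allclose(Q @ R, A)
assert np.allclose((-Q) @ (-R), A)
# -Q is orthogonal whenever Q is:
assert np.allclose((-Q).T @ (-Q), np.eye(2))
```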
On 2012-11-20 22:33, Daπid wrote:
> The QR decomposition finds two matrices with certain properties such that:
>
>     A = Q·R
>
> But if both Q and R are multiplied by -1, then (-Q)·(-R) = Q·R = A, still
> the same matrix. If Q is orthogonal, -Q is also. The sign is, therefore,
> arbitrary.
<snip>

Thanks David,
I am well aware of this; but I am using the QR decomposition for a covariance (PD) matrix, and the negative R is not very useful in this case; the numpy result, IMHO, should not be the default.

Why is numpy/Python different from MATLAB and MATHEMATICA? This makes translations rather tricky, and one begins to wonder if there are other differences.
On Tue, Nov 20, 2012 at 5:03 PM, Virgil Stokes <[hidden email]> wrote:
<snip>
> I am well aware of this; but I am using the QR decomposition for a
> covariance (PD) matrix, and the negative R is not very useful in this
> case; the numpy result, IMHO, should not be the default.

Can't you guard against this in your code?

> Why is numpy/Python different from MATLAB and MATHEMATICA? This makes
> translations rather tricky, and one begins to wonder if there are
> other differences.

It can often depend on the version of the underlying LAPACK functions used (or maybe even where/how it was compiled). In my experience, I've seen linear algebra functions in MATLAB give different results up to an arbitrary sign when I know for a fact they were using the same underlying LAPACK routine. I later upgraded the LAPACK that I used to build scipy, and the signs agreed. I do not know whether MATLAB does any kind of normalization after the fact, but you could file an issue, or better yet provide a PR for the sign check in scipy, if it's something you don't want to check for in your own code in the future. The beauty of scipy is that you can look at the code to see why you're getting the results you're getting. You can find out the LAPACK version and then look at the helper functions that call these routines to see what's going on. Good luck figuring that out with MATLAB, etc.

Skipper
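As a starting point for that kind of digging, numpy can report how it was built; a sketch (the exact output depends entirely on your installation):

```python
import numpy as np

# Show which BLAS/LAPACK numpy was built against; differing LAPACK
# builds are one possible source of the sign differences discussed here.
print(np.__version__)
np.show_config()
```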
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 3:03 PM, Virgil Stokes <[hidden email]> wrote:
What is your application? I don't see that it should matter for most things; you are just using a slightly different set of basis vectors in the q. Is orientation something you are concerned about?

<snip>

Chuck
In reply to this post by jseabold
On 2012-11-20 23:13, Skipper Seabold wrote:
<snip>
> It can often depend on the version of the underlying LAPACK functions
> used (or maybe even where/how it was compiled).
<snip>

Ok Skipper,
Unfortunately, things are worse than I had hoped: numpy sometimes returns the negative of the q, r and other times the same as MATLAB! Thus, as someone has already mentioned in this discussion, the "sign" seems to depend on the matrix being decomposed. This could be a nightmare to track down.

I hope that I can return to some older versions of numpy/scipy to work around this problem until it is fixed. Any suggestions on how to recover earlier versions would be appreciated.
On Tue, Nov 20, 2012 at 3:49 PM, Virgil Stokes <[hidden email]> wrote:
But why is it a problem? Why is MATLAB "right"? What is the property that you need to have in the decomposition?

Chuck
In reply to this post by Charles R Harris
On 2012-11-20 23:43, Charles R Harris wrote:
<snip>
My application is the propagation of the factorized R matrix in the Kalman filter, where the QR factorization is of the covariance matrix in the KF recursions. And it does make a lot of difference! I have now found that sometimes the "sign" of the factorization switches (with respect to the MATLAB version). The MATLAB QR factorization (however it may differ from numpy's) is consistent in the sense that there is no sign switching, and the results obtained from the KF are correct (this I have verified). On the other hand, it is very unlikely that one can obtain the correct results with the current implementation of the QR factorization in Python.
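For what it's worth, the square-root covariance recursion itself is insensitive to these signs: in the QR-based time update, the predicted covariance is recovered as R^T*R, which is identical for R and -R. A minimal sketch with made-up F, P, and Qn (illustrative values only, not taken from this thread):

```python
import numpy as np

# Hypothetical 2-state system (all values are illustrative).
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition matrix
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # current state covariance
Qn = np.diag([0.01, 0.02])   # process noise covariance

S = np.linalg.cholesky(P).T  # upper-triangular factor: P = S.T @ S
Sq = np.sqrt(Qn)             # factor of Qn (diagonal, so elementwise sqrt)

# Stack and take QR; the R factor is the new square-root covariance:
_, R = np.linalg.qr(np.vstack([S @ F.T, Sq]))

# Whatever signs R comes back with, R.T @ R is the predicted covariance:
assert np.allclose(R.T @ R, F @ P @ F.T + Qn)
```

Whether a given square-root filter formulation tolerates a sign-indefinite R, as this recursion does, or requires a non-negative diagonal, depends on the particular algorithm; if the latter, the diagonal can be re-signed as a postprocessing step.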
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 10:49 PM, Virgil Stokes <[hidden email]> wrote:
> Ok Skipper,
> Unfortunately, things are worse than I had hoped: numpy sometimes
> returns the negative of the q, r and other times the same as MATLAB!
> Thus, as someone has already mentioned in this discussion, the "sign"
> seems to depend on the matrix being decomposed.
<snip>

That's not going to help you. The only thing that we guarantee (or have *ever* guaranteed) is that the result is a valid QR decomposition. If you need to swap signs to normalize things to your desired convention, you will need to do that as a postprocessing step.

--
Robert Kern
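A postprocessing step of the kind described above might look like this (a sketch; qr_positive_diagonal is a hypothetical helper, not part of numpy or scipy):

```python
import numpy as np

def qr_positive_diagonal(A):
    """QR decomposition post-processed so that diag(R) is non-negative.

    Flipping the sign of row i of R together with column i of Q leaves
    Q @ R unchanged and keeps Q's columns orthonormal, so the result is
    still a valid QR decomposition of A.
    """
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0               # leave exact-zero pivots alone
    return Q * s, s[:, None] * R  # scale columns of Q, rows of R

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
Q, R = qr_positive_diagonal(A)
assert np.allclose(Q @ R, A)
assert np.all(np.diag(R) >= 0)
```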
In reply to this post by Charles R Harris
On 2012-11-20 23:57, Charles R Harris wrote:
<snip>
I have already posted an answer to your first question, Chuck. MATLAB is correct because the results of my application (a Kalman filter using the QR factorization of the covariance matrix) are correct -- I have verified this. The QR factorization is used to propagate the R matrix, and clearly, if the "sign" of R changes in an unpredictable manner (at least I have been unable to predict the "sign" changes that occur in numpy), then the answer is unlikely to be correct.

I believe that I have answered your questions. If not, then you might look at a paper/tutorial/book that discusses the "square-root Kalman filter". Then, if you have any additional questions on my application, I will try my best to answer them for you.
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 3:59 PM, Virgil Stokes <[hidden email]> wrote:
That is what I suspected. However, the factorized matrices are usually U^T*D*U or U^T*U, so I think you are doing something wrong.

Chuck
In reply to this post by Robert Kern-2
On 2012-11-20 23:59, Robert Kern wrote:
> That's not going to help you. The only thing that we guarantee (or
> have *ever* guaranteed) is that the result is a valid QR
> decomposition. If you need to swap signs to normalize things to your
> desired convention, you will need to do that as a postprocessing step.

But why do I need to normalize with numpy (at least with the latest release), but not with MATLAB?

A simple question for you:

In my application, MATLAB generates a sequence of QR factorizations for covariance matrices in which R is always PD --- which is correct! For the same application, numpy generates a sequence of QR factorizations for covariance matrices in which R is not always PD. How can I predict when I will get an R that is not PD?
In reply to this post by Charles R Harris
On 2012-11-21 00:11, Charles R Harris wrote:
<snip>
No Chuck,
You are referring to Bierman's factorization, which is just one of the factorizations possible. I am using a standard and well-documented form of the so-called "square-root" Kalman filters (just Google this and be enlightened). Again, there are many papers/books that discuss the QR factorization implementation for both the Kalman filter and the Kalman smoother.
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 11:21 PM, Virgil Stokes <[hidden email]> wrote:
> But why do I need to normalize with numpy (at least with the latest
> release), but not with MATLAB?

Because MATLAB decided to do the normalization step for you. That's a valid decision. And so is ours.

> A simple question for you:
>
> In my application, MATLAB generates a sequence of QR factorizations for
> covariance matrices in which R is always PD --- which is correct!

That is not part of the definition of a QR decomposition. Failing to meet that property does not make the QR decomposition incorrect.

The only thing that is incorrect is passing an arbitrary, but valid, QR decomposition to something that is expecting a strict *subset* of valid QR decompositions.

--
Robert Kern
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 4:26 PM, Virgil Stokes <[hidden email]> wrote:
> No Chuck,
> You are referring to Bierman's factorization, which is just one of the
> factorizations possible. I am using a standard and well-documented form
> of the so-called "square-root" Kalman filters.
<snip>

Can you show the particular implementation you're using? According to Wikipedia [1] there are a few alternatives that can be classified as "square root" KF.

Alejandro.

[1] http://en.wikipedia.org/wiki/Kalman_filter#Square_root_form
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 4:26 PM, Virgil Stokes <[hidden email]> wrote:
Yes, I am familiar with square root Kalman filters; I've even written a few.

Chuck
In reply to this post by Robert Kern-2
On 2012-11-21 00:29, Robert Kern wrote:
> That is not part of the definition of a QR decomposition. Failing to
> meet that property does not make the QR decomposition incorrect.
>
> The only thing that is incorrect is passing an arbitrary, but valid,
> QR decomposition to something that is expecting a strict *subset* of
> valid QR decompositions.

Sorry, but I do not understand this...
Let me give you an example that I believe illustrates the problem in numpy.

I have the following matrix, A:

array([[  7.07106781e+02,   5.49702852e-04,   1.66675481e-19],
       [ -3.53553391e+01,   7.07104659e+01,   1.66675481e-19],
       [  0.00000000e+00,  -3.97555166e+00,   7.07106781e-03],
       [ -7.07106781e+02,  -6.48214647e-04,   1.66675481e-19],
       [  3.53553391e+01,  -7.07104226e+01,   1.66675481e-19],
       [  0.00000000e+00,   3.97560687e+00,  -7.07106781e-03],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])

Note, this is clearly not a covariance matrix, but it does contain a covariance matrix (3x3). I refer you to the paper for how this matrix was generated.

Using np.linalg.qr(A), I get the following for R (3x3), which is the "square root" of the covariance matrix:

array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
       [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
       [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

which is clearly not PD, since its 3 eigenvalues (the diagonal elements) are all negative.

Now, if I use qr(A,0) in MATLAB, I get the following for R (3x3):

    1001.24922,   -4.99290,    0.00000
       0.00000,   100.03307,  -0.00056
      -0.00000,    0.00000,    0.00998

This is obviously PD, as it should be, and gives the correct results. Note, it is the negative of the R obtained with numpy.

I can provide other examples in which both R's obtained are the same, and both lead to correct results. That is, when the R's are different, the R obtained with MATLAB is always PD and always gives the correct end result, while the R from numpy is not PD and does not give the correct end result.

I hope that this helps you to understand my problem better. If there are more details that you need, then let me know, please.
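For reference, whatever signs a particular LAPACK build returns for this matrix, the factorization can be re-signed after the fact; a sketch using the matrix above (only the reconstruction and the re-signed diagonal are asserted, since the raw signs vary between builds):

```python
import numpy as np

A = np.array([
    [ 7.07106781e+02,  5.49702852e-04,  1.66675481e-19],
    [-3.53553391e+01,  7.07104659e+01,  1.66675481e-19],
    [ 0.00000000e+00, -3.97555166e+00,  7.07106781e-03],
    [-7.07106781e+02, -6.48214647e-04,  1.66675481e-19],
    [ 3.53553391e+01, -7.07104226e+01,  1.66675481e-19],
    [ 0.00000000e+00,  3.97560687e+00, -7.07106781e-03],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
    [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00]])

Q, R = np.linalg.qr(A)        # reduced mode: Q is 9x3, R is 3x3
assert np.allclose(Q @ R, A)  # valid regardless of the signs chosen

# Re-sign so that diag(R) >= 0, matching MATLAB's convention here:
s = np.sign(np.diag(R))
s[s == 0] = 1.0
Q, R = Q * s, s[:, None] * R
assert np.allclose(Q @ R, A)
assert np.all(np.diag(R) >= 0)
```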
In reply to this post by Alejandro Weinstein-3
On 2012-11-21 00:37, Alejandro Weinstein wrote:
> Can you show the particular implementation you're using? According to
> Wikipedia [1] there are a few alternatives that can be classified as
> "square root" KF.
>
> [1] http://en.wikipedia.org/wiki/Kalman_filter#Square_root_form

I have just sent an email with a paper attached that shows the method that I have implemented. Note, again, I have no trouble with the MATLAB code --- it works correctly (according to the author of the paper). And I was able to isolate the problem with the numpy/Python implementation to the QR factorization obtained from numpy.linalg.qr.
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
> Using np.linalg.qr(A), I get the following for R (3x3), which is the
> "square root" of the covariance matrix:
>
> array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>        [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>        [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])
>
> which is clearly not PD, since its 3 eigenvalues (the diagonal
> elements) are all negative.

But why do you expect R to be PD? The QR decomposition [1] is A = QR with Q^T Q = I and R upper triangular.

[1] http://en.wikipedia.org/wiki/QR_factorization
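That contract, and nothing more, can be checked directly; a short sketch with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q, R = np.linalg.qr(A)          # reduced QR: Q is 5x3, R is 3x3

assert np.allclose(Q.T @ Q, np.eye(3))  # Q has orthonormal columns
assert np.allclose(R, np.triu(R))       # R is upper triangular
assert np.allclose(Q @ R, A)            # A = QR
# Nothing in this contract fixes the signs on the diagonal of R.
```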