On 2012-11-21 00:55, Charles R Harris wrote:
Sorry, it is now 01:49 in Stockholm and I need to get a few hours of sleep. I will try to respond to further comments/questions on this topic tomorrow morning. Thanks for looking at this problem. I believe that this is a serious issue that does deserve some attention.

Best,
V
In reply to this post by Alejandro Weinstein-3
On 2012-11-21 01:48, Alejandro Weinstein wrote:
> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>> Using np.linalg.qr(A) I get the following for R (3x3), which is the
>> "square-root" of the covariance matrix:
>>
>>     array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>>            [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>>            [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])
>>
>> which is clearly not PD, since its 3 eigenvalues (the diagonal
>> elements) are all negative.
> But why do you expect R to be PD?

Because R*R^T = P (a covariance matrix). One important reason for using the QR factorization in the KF is to ensure that R is always PD during the recursions.

> The QR decomposition [1] is
>
>     A = QR with Q^T Q = I and R upper triangular.
>
> [1] http://en.wikipedia.org/wiki/QR_factorization
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 7:36 PM, Virgil Stokes <[hidden email]> wrote:
> On 2012-11-21 00:29, Robert Kern wrote:
>> On Tue, Nov 20, 2012 at 11:21 PM, Virgil Stokes <[hidden email]> wrote:
>>> On 2012-11-20 23:59, Robert Kern wrote:
>>>> On Tue, Nov 20, 2012 at 10:49 PM, Virgil Stokes <[hidden email]> wrote:
>>>>
>>>>> Ok Skipper,
>>>>> Unfortunately, things are worse than I had hoped: numpy sometimes
>>>>> returns the negative of the q, r and other times the same as MATLAB!
>>>>> Thus, as someone has already mentioned in this discussion, the "sign"
>>>>> seems to depend on the matrix being decomposed. This could be a
>>>>> nightmare to track down.
>>>>>
>>>>> I hope that I can return to some older versions of numpy/scipy to work
>>>>> around this problem until this problem is fixed. Any suggestions on how
>>>>> to recover earlier versions would be appreciated.
>>>> That's not going to help you. The only thing that we guarantee (or
>>>> have *ever* guaranteed) is that the result is a valid QR
>>>> decomposition. If you need to swap signs to normalize things to your
>>>> desired convention, you will need to do that as a postprocessing step.
>>> But why do I need to normalize with numpy (at least with the latest
>>> release), but not with MATLAB?
>> Because MATLAB decided to do the normalization step for you. That's a
>> valid decision. And so is ours.
>>
>>> A simple question for you.
>>>
>>> In my application MATLAB generates a sequence of QR factorizations for
>>> covariance matrices in which R is always PD --- which is correct!
>> That is not part of the definition of a QR decomposition. Failing to
>> meet that property does not make the QR decomposition incorrect.
>>
>> The only thing that is incorrect is passing an arbitrary, but valid,
>> QR decomposition to something that is expecting a strict *subset* of
>> valid QR decompositions.
> Sorry, but I do not understand this...
> Let me give you an example that I believe illustrates the problem in numpy.
>
> I have the following matrix, A:
>
>     array([[  7.07106781e+02,   5.49702852e-04,   1.66675481e-19],
>            [ -3.53553391e+01,   7.07104659e+01,   1.66675481e-19],
>            [  0.00000000e+00,  -3.97555166e+00,   7.07106781e-03],
>            [ -7.07106781e+02,  -6.48214647e-04,   1.66675481e-19],
>            [  3.53553391e+01,  -7.07104226e+01,   1.66675481e-19],
>            [  0.00000000e+00,   3.97560687e+00,  -7.07106781e-03],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])
>
> Note, this is clearly not a covariance matrix, but it does contain a
> covariance matrix (3x3). I refer you to the paper for how this matrix
> was generated.
>
> Using np.linalg.qr(A) I get the following for R (3x3), which is the
> "square-root" of the covariance matrix:
>
>     array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>            [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>            [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])
>
> which is clearly not PD, since its 3 eigenvalues (the diagonal
> elements) are all negative.
>
> Now, if I use qr(A,0) in MATLAB, I get the following for R (3x3):
>
>     1001.24922,   -4.99290,    0.00000
>        0.00000,   100.03307,  -0.00056
>       -0.00000,     0.00000,   0.00998
>
> This is obviously PD, as it should be, and gives the correct results.
> Note, it is the negative of the R obtained with numpy.
>
> I can provide other examples in which both R's obtained are the same and
> they both lead to correct results. That is, when the R's are different,
> the R obtained with MATLAB is always PD and always gives the correct end
> result, while the R with numpy is not PD and does not give the correct
> end result.
>
> I hope that this helps you to understand my problem better. If there are
> more details that you need then let me know, please.

To recap: no one is disputing that MATLAB does a normalization that may make *some* use cases of a QR decomposition easier. However, *both* are correct answers. Nothing in the definition of a QR decomposition says R has to be positive definite. If you want R positive definite, then, as mentioned, you'll need to check this yourself and/or make a patch to numpy's QR function to make sure you get a PD R. I happen to get a PD R in your example using LAPACK 3.3.1.

I believe you can ensure R positive definite like so:

    Q, R = np.linalg.qr(A)
    trace_r = np.trace(R)
    Q *= trace_r > 0 or -1
    R *= trace_r > 0 or -1

Skipper
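The trace test above flips the overall sign, which is enough when all diagonal entries of R share the same sign. When the signs are mixed, the usual post-processing is to flip each offending row of R together with the corresponding column of Q, which leaves the product Q @ R unchanged. A minimal sketch of that idea (the small matrix A here is only an illustration, not taken from the thread):

    import numpy as np

    # Hypothetical small example; any full-column-rank matrix will do.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0],
                  [4.0, 5.0]])

    Q, R = np.linalg.qr(A)

    # Flip the sign of each row of R (and the matching column of Q) whose
    # diagonal entry is negative; Q @ R is unchanged, but diag(R) becomes > 0.
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    Q = Q * signs
    R = signs[:, np.newaxis] * R

    assert np.allclose(Q @ R, A)
    assert np.all(np.diag(R) > 0)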
In reply to this post by Virgil Stokes
On 11/20/12 6:49 PM, Virgil Stokes wrote:
> Thanks for looking at this problem. I believe that this is a serious
> issue that does deserve some attention.

I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to get a PD R matrix, or does it just happen in the cases you checked? If you aren't guaranteed it, then it seems very shaky to depend on it even from matlab. Of course, if it doesn't mention it in the docs, then who knows if it is guaranteed --- matlab is closed source, so you can't just go check the code yourself.

The way I read the docs at http://www.mathworks.com/help/matlab/ref/qr.html:

"[Q,R] = qr(A), where A is m-by-n, produces an m-by-n upper triangular matrix R and an m-by-m unitary matrix Q so that A = Q*R."

it sounds like you shouldn't depend on R being PD even from matlab, even if it's worked out that way in a few of your cases.

Thanks,

Jason
In reply to this post by Virgil Stokes
On Wednesday 21 Nov 2012 01:36:08 Virgil Stokes wrote:
>
> I have the following matrix, A:
>
>     array([[  7.07106781e+02,   5.49702852e-04,   1.66675481e-19],
>            [ -3.53553391e+01,   7.07104659e+01,   1.66675481e-19],
>            [  0.00000000e+00,  -3.97555166e+00,   7.07106781e-03],
>            [ -7.07106781e+02,  -6.48214647e-04,   1.66675481e-19],
>            [  3.53553391e+01,  -7.07104226e+01,   1.66675481e-19],
>            [  0.00000000e+00,   3.97560687e+00,  -7.07106781e-03],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00],
>            [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])
>
> Using np.linalg.qr(A) I get the following for R (3x3), which is the
> "square-root" of the covariance matrix:
>
>     array([[ -1.00124922e+03,   4.99289918e+00,   0.00000000e+00],
>            [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
>            [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])
>
I thought this would be down to the Lapack library you have linked against, as numpy uses Lapack's [zd]geqrf internally. I know Matlab comes distributed with Intel's MKL, so I imagine Matlab would use Intel's MKL routines internally. Without knowing anything about the math, here are some test cases on Linux and Mac OS X. Looks like it's dependent on platform, from these results...
    >>> a = np.array( ... )
    >>> q, r = np.linalg.qr( a )
    >>> r
**Linux**

Intel MKL:

    array([[ -1.00124922e+03,   4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

ATLAS Lapack:

    array([[ -1.00124922e+03,   4.99289919e+00,  -2.40741243e-35],
           [  0.00000000e+00,  -1.00033071e+02,   5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,  -9.98419272e-03]])

**Mac OS X**

Accelerate Framework:

    array([[  1.00124922e+03,  -4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,   1.00033071e+02,  -5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,   9.98419272e-03]])

ATLAS Lapack:

    array([[  1.00124922e+03,  -4.99289919e+00,   2.40741243e-35],
           [  0.00000000e+00,   1.00033071e+02,  -5.62045938e-04],
           [  0.00000000e+00,   0.00000000e+00,   9.98419272e-03]])
Cheers,
Alex
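Which BLAS/LAPACK a given NumPy build is actually linked against can be checked from Python, which helps explain this kind of platform-to-platform difference. A quick check (output varies by installation):

    import numpy as np

    # Prints the BLAS/LAPACK libraries this NumPy build was compiled against
    # (MKL, ATLAS, Accelerate, ...); the exact fields depend on the build.
    np.show_config()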
In reply to this post by Virgil Stokes
On Tue, Nov 20, 2012 at 5:56 PM, Virgil Stokes <[hidden email]> wrote:
> On 2012-11-21 01:48, Alejandro Weinstein wrote:
>> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>>> which is clearly not PD, since its 3 eigenvalues (the diagonal
>>> elements) are all negative.
>> But why do you expect R to be PD?
> Because R*R^T = P (a covariance matrix). One important reason for
> using the QR factorization in the KF is to ensure that R is always PD
> during the recursions.

As you said, P = R * R^T, which is PD, even if R is not. Please check the definition of QR decomposition: R is _not_ required to be PD.

Looking at the paper, they in fact use P = R * R^T, as in eq. (1). They never use R alone. So the fact that R is not PD should not be an issue.

Can you show your code? I'm curious to see how the fact that R is not PD makes a difference.

Alejandro.
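Alejandro's point is easy to verify numerically: R and -R give exactly the same product R R^T, so anything computed from that product cannot depend on the sign convention. A quick check (not from the original posts), using the R printed earlier in the thread:

    import numpy as np

    # R exactly as printed by np.linalg.qr in the example above.
    R = np.array([[-1.00124922e+03,  4.99289918e+00,  0.00000000e+00],
                  [ 0.00000000e+00, -1.00033071e+02,  5.62045938e-04],
                  [ 0.00000000e+00,  0.00000000e+00, -9.98419272e-03]])

    # The reconstructed covariance is identical for either sign of R,
    # and it is PD even though the diagonal of R is negative.
    print(np.allclose(R @ R.T, (-R) @ (-R).T))       # True
    print(np.all(np.linalg.eigvalsh(R @ R.T) > 0))   # True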
In reply to this post by jseabold
On 21-Nov-2012 02:17, Skipper Seabold wrote:
> [...]
>
> To recap: no one is disputing that MATLAB does a normalization that
> may make *some* use cases of a QR decomposition easier. However,
> *both* are correct answers. Nothing in the definition of a QR
> decomposition says R has to be positive definite. If you want R
> positive definite, then, as mentioned, you'll need to check this
> yourself and/or make a patch to numpy's QR function to make sure you
> get a PD R. I happen to get a PD R in your example using LAPACK 3.3.1.
> I believe you can ensure R positive definite like so:
>
>     Q, R = np.linalg.qr(A)
>     trace_r = np.trace(R)
>     Q *= trace_r > 0 or -1
>     R *= trace_r > 0 or -1

For my application (at least with my current installation), it seems just checking R[0,0] > 0 can also be used. How can I force the use of LAPACK 3.3.1?
In reply to this post by Jason Grout-5
On 21-Nov-2012 02:29, Jason Grout wrote:
> On 11/20/12 6:49 PM, Virgil Stokes wrote:
>
>> Thanks for looking at this problem. I believe that this is a serious
>> issue that does deserve some attention.
> I'm curious: are you *guaranteed* by matlab (e.g., it's documented) to
> get a PD R matrix, or does it just happen in the cases you checked? If
> you aren't guaranteed it, then it seems very shaky to depend on it even
> from matlab. Of course, if it doesn't mention it in the docs, then who
> knows if it is guaranteed --- matlab is closed source, so you can't just
> go check the code yourself.
>
> The way I read the docs at http://www.mathworks.com/help/matlab/ref/qr.html:
>
> "[Q,R] = qr(A), where A is m-by-n, produces an m-by-n upper triangular
> matrix R and an m-by-m unitary matrix Q so that A = Q*R."
>
> it sounds like you shouldn't depend on R being PD even from matlab, even
> if it's worked out that way in a few of your cases.

No, I am not guaranteed this in MATLAB and numpy. However, for my particular application I do know that the R returned from MATLAB always has diagonal elements > 0, while this is not the case for my installation of numpy.

> Thanks,
>
> Jason
In reply to this post by Alex Leach
On 21-Nov-2012 02:30, Alex Leach wrote:
Very interesting, Alex --- guess I should get a Mac :-)

How can I force the use of the Lapack (on my Win32 system) that is being used on Mac OS X?
Short answer: you shouldn't! This time it will work; next time it won't. You must never depend on an implementation detail, as is the case here. Add a post-processing step, and it will be all right on all platforms and also in all languages.
Cheers,

2012/11/21 Virgil Stokes <[hidden email]>
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
In reply to this post by Alejandro Weinstein-3
On 21-Nov-2012 03:47, Alejandro Weinstein wrote:
> On Tue, Nov 20, 2012 at 5:56 PM, Virgil Stokes <[hidden email]> wrote:
>> On 2012-11-21 01:48, Alejandro Weinstein wrote:
>>> On Tue, Nov 20, 2012 at 5:36 PM, Virgil Stokes <[hidden email]> wrote:
>>>> which is clearly not PD, since its 3 eigenvalues (the diagonal
>>>> elements) are all negative.
>>> But why do you expect R to be PD?
>> Because R*R^T = P (a covariance matrix). One important reason for
>> using the QR factorization in the KF is to ensure that R is always PD
>> during the recursions.
> As you said, P = R * R^T, which is PD, even if R is not. Please check
> the definition of QR decomposition: R is _not_ required to be PD.

Yes, again Alejandro, I know this --- my statement was really not correct (sloppy on my part).

> Looking at the paper, they in fact use P = R * R^T, as in eq. (1).
> They never use R alone. So the fact that R is not PD should not be an
> issue.

But, again, it is an issue for the algorithm given in Table 3 (p. 2248 of the paper). Look at step 8 and equation (30). As stated in this step, "the square-root of the filtered state-error covariance" is returned as shown there [equation images not preserved in the archive].

> Can you show your code? I'm curious to see how the fact that R is not
> PD makes a difference.

You should be able to write a few lines of code to test this yourself. If this problem does not occur on your system then please tell me what version of scipy/numpy you have installed and on what platform.
On Wed, Nov 21, 2012 at 9:34 AM, Virgil Stokes <[hidden email]> wrote:
http://www.mathworks.es/es/help/matlab/ref/qr.html

There is no reference to the positiveness of R, nor is it required by the definition of the QR decomposition, so it is wrong to assume it is true. It may hold in your current examples, but you have no guarantee that you will not run into another A matrix for which it does not hold. Also, the portability of your code is tricky, as another person, with a different version of MATLAB, could get different results, even if it is not documented (because the change would not affect the API).

The same holds true for Mathematica:

http://reference.wolfram.com/mathematica/ref/QRDecomposition.html

And even there, in one of the examples, the matrix r is negative.

If you want to make sure, just use a qr function that checks for the sign and reverts it, if necessary.
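A sketch of such a wrapper (the helper name and the diag(R) >= 0 convention are choices made here, not anything provided by NumPy or MATLAB):

    import numpy as np

    def qr_pos(A):
        """QR decomposition of A normalized so that diag(R) >= 0.

        Flipping the sign of row i of R together with column i of Q leaves
        the product Q @ R unchanged, so the result is still a valid QR
        decomposition of A.
        """
        Q, R = np.linalg.qr(A)
        signs = np.sign(np.diag(R))
        signs[signs == 0] = 1.0
        return Q * signs, signs[:, np.newaxis] * R

Calling this in place of np.linalg.qr should then give the same R, up to rounding, regardless of platform or LAPACK build.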
In reply to this post by Virgil Stokes
There is nothing in the definition of QR that says R must be positive definite. NumPy will give you a valid QR factorization computed with LAPACK, and so will Matlab. You might blame it on the LAPACK version, or Matlab might do an undocumented normalization as post-processing. Matlab is not a standard for the "right answer" in linear algebra. Any numerical code requiring R to be PD is erroneous, even in Matlab. Then again, it is not rocket science to post-process Q and R so that R is PD.

If you need to decompose a covariance matrix P = R*R' with R PD, it sounds like you are using QR as a Cholesky factorization. If you want a Cholesky factor of P, you know where to find it. (Not that I recommend using it.)

Sturla
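To make the last remark concrete: if what is really wanted is a triangular factor of P with a positive diagonal, the Cholesky factorization provides it directly. A minimal sketch (the matrix P is just an arbitrary SPD example, not from the thread):

    import numpy as np

    # Any symmetric positive definite matrix serves as an example here.
    P = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

    # np.linalg.cholesky returns the lower-triangular factor S with a
    # positive diagonal such that P = S @ S.T.
    S = np.linalg.cholesky(P)

    assert np.allclose(S @ S.T, P)
    assert np.all(np.diag(S) > 0)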
In reply to this post by Virgil Stokes
On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
I see no Table 3, nor page 2248 in the paper you attached. Which paper are you talking about?
Anyway, I would say that an algorithm that depends on the sign of the entries of R is not correct.
Alejandro.
On 21.11.2012 14:22, Alejandro Weinstein wrote:
> On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:
>> But, again, it is an issue for the algorithm given in Table 3 (p.
>> 2248 of paper). Look at step 8 and equation (30).
>
> I see no Table 3, nor page 2248 in the paper you attached. Which paper
> are you talking about?

I think what he means is that in equation 22 (page 2591) S must be PD, and thus R = S' must be PD as well. Which (AFAIK) must be an error in the paper. Too bad referees didn't catch it.

They should have used Cholesky of P as S. So instead of the second equation in section IV, which states that S = R', equation (6) should Cholesky factorize P to obtain S. And thus S would be PD for equation 22.

In conclusion, there is an error in the paper which was silenced by Matlab's QR.

Sturla
In reply to this post by Alejandro Weinstein-3
On 21-Nov-2012 14:22, Alejandro Weinstein wrote:
> On Wed, Nov 21, 2012 at 1:34 AM, Virgil Stokes <[hidden email]> wrote:

I am referring to the following paper:

Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers. Automatica 47, pp. 2245-2250.

Sorry about the confusion.
In reply to this post by Sturla Molden-2
On 21-Nov-2012 15:15, Sturla Molden wrote:
> On 21.11.2012 14:22, Alejandro Weinstein wrote:
>> I see no Table 3, nor page 2248 in the paper you attached. Which paper
>> are you talking about?
>
> [...]

I am referring to the following paper:

Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers. Automatica 47, pp. 2245-2250.

Sorry about the confusion :-[
On 21.11.2012 16:01, Virgil Stokes wrote:
> I am referring to the following paper: > Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers. > /Automatica/ *47*, pp. 2245-2250. Not the one you attached, and sorry I don't have it. Sturla _______________________________________________ SciPy-User mailing list [hidden email] http://mail.scipy.org/mailman/listinfo/scipy-user |
On 21.11.2012 16:14, Sturla Molden wrote:
> On 21.11.2012 16:01, Virgil Stokes wrote:
>
>> I am referring to the following paper:
>> Arasaratnam, I. and Haykin, S. (2011) Cubature Kalman Smoothers.
>> Automatica 47, pp. 2245-2250.
>
> Not the one you attached, and sorry I don't have it.

Ok, found it. It basically has the same error as I pointed out: it needs a PD lower triangular matrix, which is returned not by QR but by Cholesky.

Sturla
In reply to this post by Virgil Stokes
Virgil Stokes <vs <at> it.uu.se> writes:
[clip]
> Now, if I use qr(A,0) in MATLAB:
>
> I get the following for R (3x3)
>
>     1001.24922,   -4.99290,    0.00000
>        0.00000,   100.03307,  -0.00056
>       -0.00000,     0.00000,   0.00998
>
> This is obviously PD, as it should be, and gives the correct results.
> Note, it is the negative of the R obtained with numpy.

Case in point: MATLAB R2011b, Linux x86_64

    >> A=[707.106781, 0.000549702852, 1.66675481e-19;
          -35.3553391, 70.7104659, 1.66675481e-19;
          0.0, -3.97555166, 0.00707106781;
          -707.106781, -0.000648214647, 1.66675481e-19;
          35.3553391, -70.7104226, 1.66675481e-19;
          0.0, 3.97560687, -0.00707106781;
          0.0, 0.0, 0.0;
          0.0, 0.0, 0.0;
          0.0, 0.0, 0.0]
    >> [Q,R] = qr(A, 0);
    >> R

    R =

       1.0e+03 *

      -1.001249219464423   0.004992899186307   0.000000000000000
                       0  -0.100033070889029   0.000000562045938
                       0                   0  -0.000009984192722

The statements

    "MATLAB always gives a positive definite R"
    "MATLAB does post-processing in QR"

are both false. Indeed, any "normalization" of the result is not mentioned in the documentation. You simply cannot assume that the returned R is PD, because what you actually get ultimately depends on the underlying LAPACK library.

--
Pauli Virtanen
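This can be confirmed mechanically: whichever sign convention the underlying LAPACK returns, the result satisfies the definition of a QR decomposition. A quick check (not part of the original post) using the 9x3 matrix A from earlier in the thread:

    import numpy as np

    A = np.array([[ 7.07106781e+02,  5.49702852e-04,  1.66675481e-19],
                  [-3.53553391e+01,  7.07104659e+01,  1.66675481e-19],
                  [ 0.00000000e+00, -3.97555166e+00,  7.07106781e-03],
                  [-7.07106781e+02, -6.48214647e-04,  1.66675481e-19],
                  [ 3.53553391e+01, -7.07104226e+01,  1.66675481e-19],
                  [ 0.00000000e+00,  3.97560687e+00, -7.07106781e-03],
                  [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
                  [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
                  [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00]])

    Q, R = np.linalg.qr(A)

    # Both the factorization property and the orthonormality of Q hold
    # regardless of which sign convention the underlying LAPACK chose.
    print(np.allclose(Q @ R, A))              # True
    print(np.allclose(Q.T @ Q, np.eye(3)))    # True
    print(np.allclose((-Q) @ (-R), A))        # True: the flipped pair is valid too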