Hello, thank you so much for all the input. I'm afraid I'm fairly new to this, and some of the material is above my understanding for the time being; I'm not looking at anything very complex, as you will see from the program below. I have been talking to a colleague I am working with, and we have come down to this. Your input has been invaluable in helping me learn more about R, thank you!

1. As an example, I do not wish to find a near-pd matrix; I want to test whether it is pd or not, so I know whether I want it. As you will see in the program, I want two outputs depending on whether it is pd and symmetric. I have written this for one symmetric matrix, and it works, and one for a non-symmetric one, slightly changed, but that one doesn't work. If you could point out why this is going wrong, that would be helpful. It can't compare the complex eigenvalues (for a non-symmetric matrix, eigen() can return complex values, and comparing those with "> 0" fails).

# symmetric matrix
elements <- c(1,0,0.5,-0.3,0.2, 0,1,0.1,0,0, 0.5,0.1,1,0.3,0.7,
              -0.3,0,0.3,1,0.4, 0.2,0,0.7,3,1)
m <- matrix(elements, nrow=5, byrow=FALSE)
sym <- isSymmetric(m)
eigen(m)
posdef1 <- all(eigen(m)$values > 0)
posdef1
m1 <- m[c(1,2), c(1,2)]
m2 <- m[c(1,2), c(3,4,5)]
m3 <- m[c(3,4,5), c(1,2)]
m4 <- m[c(3,4,5), c(3,4,5)]
im4 <- solve(m4)
(m1 - m2 %*% im4 %*% m3) * sym * posdef1
#

# not symmetric
elements <- c(1,0,0.5,-0.3,0.2, 0,1,0.1,0,0, 0.5,0.1,1,0.3,0.7,
              -0.3,0,0.3,1,0.4, 0.2,0,0.7,3,1)
m <- matrix(elements, nrow=5, byrow=FALSE)
sym <- isSymmetric(m)
sym
eigen(m)
posdef1 <- all(eigen(m)$values > 0)
posdef1
m1 <- m[c(1,2), c(1,2)]
m2 <- m[c(1,2), c(3,4,5)]
m3 <- m[c(3,4,5), c(1,2)]
m4 <- m[c(3,4,5), c(3,4,5)]
im4 <- solve(m4)
(m1 - m2 %*% im4 %*% m3) * sym * posdef1
#
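One way around the complex-eigenvalue problem is a sketch like the following (my own illustration, not code from the thread; the function name isSymPosDef and the tol argument are made up). The idea is to test symmetry first, and only then compute eigenvalues with symmetric = TRUE, which guarantees a real spectrum, so the comparison with zero never sees complex numbers:

```r
## Sketch only: hypothetical helper, not from the thread.
isSymPosDef <- function(M, tol = 1e-8) {
  if (!isSymmetric(M)) return(FALSE)   # stop before eigen() can go complex
  ## symmetric = TRUE forces real eigenvalues, so "> tol" is always valid
  all(eigen(M, symmetric = TRUE, only.values = TRUE)$values > tol)
}

## e.g. isSymPosDef(diag(3))  # TRUE: eigenvalues are 1, 1, 1
```

Using a small positive tol instead of 0 also addresses the round-off concern raised later in the thread.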
2. My colleague suggested using Sylvester's criterion and testing the leading principal minors. Would something like that work?

On Mon, Jan 31, 2011 at 9:50 AM, Spencer Graves <spencer.gra...@structuremonitoring.com> wrote:

> Hi, Martin: Thank you! (not only for your responses in this email thread
> but in helping create R generally and many of these functions in
> particular.) Spencer
>
> On 1/31/2011 12:10 AM, Martin Maechler wrote:
>
>> > I think the bottom line can be summarized as follows:
>> >
>> > 1. Give up on Cholesky factors unless you have a matrix you know must
>> > be symmetric and strictly positive definite. (I seem to recall having
>> > had problems with chol even with matrices that were theoretically
>> > positive or nonnegative definite but were not because of round-off.
>> > However, I can't produce an example right now, so I'm not sure of
>> > that.)
>>
>> (other respondents to this thread mentioned such scenarios; they
>> are not at all uncommon)
>>
>> > 2. If you must test whether a matrix is symmetric, try
>> > all.equal(A, t(A)). From the discussion, I had the impression that
>> > this might not always do what you want, but it should be better than
>> > all(A == t(A)). It is better still to decide from theory whether the
>> > matrix should be symmetric.
>>
>> Hmm, yes: exactly for this reason, R has had a *generic* function
>>
>>     isSymmetric()
>>     -------------
>>
>> for quite a while: in "base R" it uses all.equal() {but with tightened
>> default tolerance}, but e.g. in the Matrix package it "decides from
>> theory" --- the Matrix package contains quite a few Matrix classes that
>> are symmetric "by definition".
>>
>> So my recommendation really is to use isSymmetric().
>>
>> > 3. Work with Ae = eigen(A, symmetric=TRUE) or
>> > eigen((A+t(A))/2, symmetric=TRUE). From here, Ae$values <-
>> > pmax(Ae$values, 0) ensures that A will be positive semidefinite (aka
>> > nonnegative definite).
>> > If it must be positive definite, use Ae$values <- pmax(Ae$values, eps),
>> > with eps > 0 chosen to make it as positive definite as you want.
>>
>> hmm, almost: the above trick has been the origin and basic
>> building block of posdefify() from the sfsmisc package,
>> mentioned earlier in this thread.
>> But I have mentioned that there are much better algorithms
>> nowadays, and Matrix::nearPD() uses one of them .. actually
>> with variations on the theme, aka optional arguments.
>>
>> > 4. To the maximum extent feasible, work with Ae, not A. Prof. Ripley
>> > noted, "You can then work with [this] factorization to ensure that
>> > (for example) variances are always non-negative because they are
>> > always computed as sums of squares. This sort of thing is done in
>> > many of the multivariate analysis calculations in R (e.g. cmdscale)
>> > and in well-designed packages."
>>
>> yes, or---as mentioned by Prof Ripley as well---compute a
>> square root of the matrix {e.g. via the eigen() decomposition
>> with modified eigenvalues} and work with that.
>> Unfortunately, in quite a few situations you just need a
>> pos.def. matrix to be passed to another R function as a
>> cov / cor matrix, and there nearPD() comes in very handy.
>>
>> > Hope this helps. Spencer
>>
>> It did, thank you,
>> Martin
>>
>> > On 1/30/2011 3:02 AM, Alex Smith wrote:
>> >> Thank you for all your input, but I'm afraid I don't know what the
>> >> final conclusion is. I will have to check the eigenvalues to see if
>> >> any are negative. Why would setting them to zero make a difference?
>> >> Sorry to drag this on.
>> >> Thanks
>> >>
>> >> On Sat, Jan 29, 2011 at 9:00 PM, Prof Brian Ripley <rip...@stats.ox.ac.uk> wrote:
>> >>
>> >>> On Sat, 29 Jan 2011, David Winsemius wrote:
>> >>>
>> >>>> On Jan 29, 2011, at 12:17 PM, Prof Brian Ripley wrote:
>> >>>>
>> >>>>> On Sat, 29 Jan 2011, David Winsemius wrote:
>> >>>>>
>> >>>>>> On Jan 29, 2011, at 10:11 AM, David Winsemius wrote:
>> >>>>>>
>> >>>>>>> On Jan 29, 2011, at 9:59 AM, John Fox wrote:
>> >>>>>>>
>> >>>>>>>> Dear David and Alex, I'd be a little careful about testing
>> >>>>>>>> exact equality as in all(M == t(M)), and careful as well about
>> >>>>>>>> a test such as all(eigen(M)$values > 0), since real arithmetic
>> >>>>>>>> on a computer can't be counted on to be exact.
>> >>>>>>>
>> >>>>>>> Which was why I pointed to that thread from 2005 and the
>> >>>>>>> existing work that had been put into packages. If you want to
>> >>>>>>> substitute all.equal for all, there might be fewer numerical
>> >>>>>>> false alarms, but I would think there could be other potential
>> >>>>>>> problems that might deserve warnings.
>> >>>>>>>
>> >>>>>>> In addition to the two "is." functions cited earlier, there is
>> >>>>>>> also a "posdefify" function by Maechler in the sfsmisc package:
>> >>>>>>> "Description: From a matrix m, construct a 'close' positive
>> >>>>>>> definite one."
>> >>>>>>
>> >>>>> But again, that is not usually what you want. There is no
>> >>>>> guarantee that the result is positive-definite enough that the
>> >>>>> Cholesky decomposition will work.
>> >>>>>
>> >>>> I don't see a Cholesky decomposition method being used in that
>> >>>> function. It appears to my reading to be following what would be
>> >>>> called an eigendecomposition.
>> >>>>
>> >>> Correct, but my point is that one does not usually want a
>> >>> "close" positive definite one but a 'square root'.
>> >>> --
>> >>> David.
>> >>>
>> >>>>>> Give up on Cholesky factors unless you have a matrix you know
>> >>>>>> must be symmetric and strictly positive definite, and use the
>> >>>>>> eigendecomposition instead (setting negative eigenvalues to
>> >>>>>> zero). You can then work with the factorization to ensure that
>> >>>>>> (for example) variances are always non-negative because they are
>> >>>>>> always computed as sums of squares.
>> >>>>>>
>> >>>>>> This sort of thing is done in many of the multivariate analysis
>> >>>>>> calculations in R (e.g. cmdscale) and in well-designed packages.
>> >>>>>>
>> >>>>>>> --
>> >>>>>>> David.
>> >>>>>>>
>> >>>>>>>> On Jan 29, 2011, at 7:58 AM, David Winsemius wrote:
>> >>>>>>>>
>> >>>>>>>>> On Jan 29, 2011, at 7:22 AM, Alex Smith wrote:
>> >>>>>>>>>
>> >>>>>>>>>> Hello, I am trying to determine whether a given matrix is
>> >>>>>>>>>> symmetric and positive definite. The matrix has real-valued
>> >>>>>>>>>> elements. I have been reading about the Cholesky method, and
>> >>>>>>>>>> another method is to find the eigenvalues. I can't understand
>> >>>>>>>>>> how to implement either of the two. Can someone point me in
>> >>>>>>>>>> the right direction? I have used ?chol to see the help, but
>> >>>>>>>>>> if the matrix is not positive definite it comes up as an
>> >>>>>>>>>> error. I know how to get the eigenvalues, but how can I then
>> >>>>>>>>>> put this into a program to check them, as they just come up
>> >>>>>>>>>> as $values. Is checking that the eigenvalues are positive
>> >>>>>>>>>> enough to determine whether the matrix is positive definite?
>> >>>>>>>>>
>> >>>>>>>>> That is a fairly simple linear algebra fact that googling or
>> >>>>>>>>> pulling out a standard reference should have confirmed.
>> >>>>>>>>>
>> >>>>>>>>> Just to be clear (since on the basis of some off-line
>> >>>>>>>>> communications it did not seem to be clear): a real, symmetric
>> >>>>>>>>> matrix is Hermitian (and therefore all of its eigenvalues are
>> >>>>>>>>> real). Further, it is positive-definite if and only if its
>> >>>>>>>>> eigenvalues are all positive.
>> >>>>>>>>>
>> >>>>>>>>> qwe <- c(2,-1,0,-1,2,-1,0,1,2)
>> >>>>>>>>> q <- matrix(qwe, nrow=3)
>> >>>>>>>>> isPosDef <- function(M) {
>> >>>>>>>>>   if ( all(M == t(M)) ) {                 # first test symmetric-ity
>> >>>>>>>>>     if ( all(eigen(M)$values > 0) ) {TRUE}
>> >>>>>>>>>     else {FALSE}
>> >>>>>>>>>   }
>> >>>>>>>>>   else {FALSE}                            # not symmetric
>> >>>>>>>>> }
>> >>>>>>>>>
>> >>>>>>>>> isPosDef(q)
>> >>>>>>>>> [1] FALSE
>> >>>>>>>>>
>> >>>>>>>>>> m
>> >>>>>>>>>>      [,1] [,2] [,3] [,4] [,5]
>> >>>>>>>>>> [1,]  1.0  0.0  0.5 -0.3  0.2
>> >>>>>>>>>> [2,]  0.0  1.0  0.1  0.0  0.0
>> >>>>>>>>>> [3,]  0.5  0.1  1.0  0.3  0.7
>> >>>>>>>>>> [4,] -0.3  0.0  0.3  1.0  0.4
>> >>>>>>>>>> [5,]  0.2  0.0  0.7  0.4  1.0
>> >>>>>>>>>
>> >>>>>>>>> isPosDef(m)
>> >>>>>>>>> [1] TRUE
>> >>>>>>>>>
>> >>>>>>>>> You might want to look at prior postings by people more
>> >>>>>>>>> knowledgeable than me:
>> >>>>>>>>> http://finzi.psych.upenn.edu/R/Rhelp02/archive/57794.html
>> >>>>>>>>> Or look at what are probably better solutions in available
>> >>>>>>>>> packages:
>> >>>>>>>>> http://finzi.psych.upenn.edu/R/library/corpcor/html/rank.condition.html
>> >>>>>>>>> http://finzi.psych.upenn.edu/R/library/matrixcalc/html/is.positive.definite.html
>> >>>>>>>>> --
>> >>>>>>>>> David.
>> >>>>>>>>>
>> >>>>>>>>>> this is the matrix that I know is positive definite.
>> >>>>>>>>>> eigen(m)
>> >>>>>>>>>> $values
>> >>>>>>>>>> [1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
>> >>>>>>>>>> $vectors
>> >>>>>>>>>>             [,1]        [,2]         [,3]        [,4]        [,5]
>> >>>>>>>>>> [1,] -0.32843233  0.69840166  0.080549876  0.44379474  0.44824689
>> >>>>>>>>>> [2,] -0.06080335  0.03564769 -0.993062427 -0.01474690  0.09296096
>> >>>>>>>>>> [3,] -0.64780034  0.12089168 -0.027187620  0.08912912 -0.74636235
>> >>>>>>>>>> [4,] -0.31765040 -0.68827876  0.007856812  0.60775962  0.23651023
>> >>>>>>>>>> [5,] -0.60653780 -0.15040584  0.080856897 -0.65231358  0.42123526
>> >>>>>>>>>>
>> >>>>>>>>>> and these are the eigenvalues and eigenvectors. I thought of
>> >>>>>>>>>> using
>> >>>>>>>>>> eigen(m, only.values=TRUE)
>> >>>>>>>>>> $values
>> >>>>>>>>>> [1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
>> >>>>>>>>>> $vectors
>> >>>>>>>>>> NULL
>> >>>>>>>>>
>> >>>>>>>>> m <- matrix(scan(textConnection("
>> >>>>>>>>>  1.0 0.0 0.5 -0.3 0.2
>> >>>>>>>>>  0.0 1.0 0.1  0.0 0.0
>> >>>>>>>>>  0.5 0.1 1.0  0.3 0.7
>> >>>>>>>>> -0.3 0.0 0.3  1.0 0.4
>> >>>>>>>>>  0.2 0.0 0.7  0.4 1.0
>> >>>>>>>>> ")), 5, byrow=TRUE)
>> >>>>>>>>> # Read 25 items
>> >>>>>>>>> m
>> >>>>>>>>>      [,1] [,2] [,3] [,4] [,5]
>> >>>>>>>>> [1,]  1.0  0.0  0.5 -0.3  0.2
>> >>>>>>>>> [2,]  0.0  1.0  0.1  0.0  0.0
>> >>>>>>>>> [3,]  0.5  0.1  1.0  0.3  0.7
>> >>>>>>>>> [4,] -0.3  0.0  0.3  1.0  0.4
>> >>>>>>>>> [5,]  0.2  0.0  0.7  0.4  1.0
>> >>>>>>>>> all( eigen(m)$values > 0 )
>> >>>>>>>>> #[1] TRUE
>> >>>>>>>>>
>> >>>>>>>>>> Then I thought of using a logical expression to determine if
>> >>>>>>>>>> there are negative eigenvalues, but it didn't work. I don't
>> >>>>>>>>>> know what error this is:
>> >>>>>>>>>> b <- (a < 0)
>> >>>>>>>>>> Error: (list) object cannot be coerced to type 'double'
>> >>>>>>>>>
>> >>>>>>>>> ??? where did "a" and "b" come from?
>> >>>>>>>>>
>>>>>> --
>>>>>> Brian D.
Ripley, rip...@stats.ox.ac.uk
>>>>>> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
>>>>>> University of Oxford,            Tel: +44 1865 272861 (self)
>>>>>> 1 South Parks Road,                   +44 1865 272866 (PA)
>>>>>> Oxford OX1 3TG, UK               Fax: +44 1865 272595
>>>>>
>>>>> David Winsemius, MD
>>>>> West Hartford, CT
>>>
>
> --
> Spencer Graves, PE, PhD
> President and Chief Operating Officer
> Structure Inspection and Monitoring, Inc.
> 751 Emerson Ct.
> San José, CA 95126
> ph: 408-655-4567
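To make point 2 above concrete, Sylvester's criterion says a symmetric matrix is positive definite if and only if all of its leading principal minors are strictly positive, so it can be coded directly. A rough sketch (my own, with a hypothetical sylvesterPosDef name; note that determinants of nested submatrices are generally less numerically stable than eigen() or chol() for this purpose):

```r
## Sketch of Sylvester's criterion: M (symmetric) is positive definite
## iff det(M[1:k, 1:k]) > 0 for every k = 1, ..., n.
sylvesterPosDef <- function(M) {
  if (!isSymmetric(M)) return(FALSE)   # the criterion assumes symmetry
  minors <- vapply(seq_len(nrow(M)),
                   function(k) det(M[1:k, 1:k, drop = FALSE]),
                   numeric(1))
  all(minors > 0)
}

## sylvesterPosDef(matrix(c(2, -1, -1, 2), 2))  # TRUE: minors are 2 and 3
```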
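For reference, the eigenvalue-clamping trick in Martin's point 3 (the idea behind sfsmisc::posdefify(), not its actual code) can be sketched as follows; makePosDef is a hypothetical name, eps a chosen tolerance, and for serious use Matrix::nearPD() is the better tool, as he says:

```r
## Sketch of the clamping trick: symmetrize, raise eigenvalues to >= eps,
## and rebuild A = V diag(lambda) t(V).
makePosDef <- function(A, eps = 1e-8) {
  Ae  <- eigen((A + t(A)) / 2, symmetric = TRUE)
  lam <- pmax(Ae$values, eps)          # force every eigenvalue up to eps
  Ae$vectors %*% diag(lam, nrow = length(lam)) %*% t(Ae$vectors)
}
```

The result is symmetric with all eigenvalues at least eps, so for example chol() applied to it should no longer fail.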
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.