On Mar 18, 2009, at 7:11, Ted Byers wrote:
of thing I did when writing code to run on a supercomputer supporting vector algebra decades ago). With ITT, if Lapack were rewritten to take advantage of it, much of the code would look quite different from what it does today. Of course, if you're already using it, I might as well shut up and go away. ;-) I am just learning to use it, as I am just learning to use R, so I'm afraid I can't offer much more information than this.
For matrix computations with BLAS/Lapack, use an optimized, multithreaded BLAS implementation. That way you use the multiple cores for "free" whenever you do matrix algebra, which is often (but not always) the heavy part of a computation. People have been doing this for a long time on OS X, where the default is to use an implementation provided by Apple. I don't know exactly how to set this up for Linux or Windows.
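A rough way to check what you have (a sketch, not from the original post; the matrix size is arbitrary) is to time a large dense matrix product, which R hands off to the BLAS. With an optimized, multithreaded BLAS this should finish noticeably faster and show activity on several cores in a process monitor:

  n <- 2000
  a <- matrix(rnorm(n * n), n, n)
  b <- matrix(rnorm(n * n), n, n)
  system.time(a %*% b)       # dense matrix multiply, dispatched to the BLAS
  system.time(crossprod(a))  # t(a) %*% a, also handled by the BLAS

With a reference (single-threaded) BLAS the same calls run on one core only.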
Kasper

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel