On Nov 9, 11:59 am, CuppoJava <[EMAIL PROTECTED]> wrote:
> You seem to know a lot about this, so I can be more specific. I'm
> using a conjugate gradient method for solving (and caching) systems of
> equations, so there's no need to explicitly form a matrix (therefore
> object allocation isn't a problem).

I'm guessing you are using the finite element method with unassembled
matrices to solve PDEs, because I can't think of another reason why
you would need to worry about the floating-point instruction count if
you are using CG iterations to solve sparse linear systems.  (The
performance of CG is normally bound by memory bandwidth unless the
sparse matrices are small enough to fit in cache.)  Object allocation
would still be a problem in that case because you still have to
multiply the element matrices by vectors (and therefore might end up
creating O(E) output vectors where E is the number of element
matrices).
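
To make the allocation point concrete, here is a rough Clojure sketch
(the names apply-unassembled!, :dofs, and :ke are made up, and I'm
assuming flat row-major element matrices and hinted primitive arrays)
of applying an unassembled operator y = A x by scattering each
element's contribution into one preallocated output array instead of
building a fresh result vector per element:

  (defn apply-unassembled! [elements ^doubles x ^doubles y]
    ;; zero the shared output vector, then scatter each element's
    ;; local matrix-vector product into it in place
    (java.util.Arrays/fill y 0.0)
    (doseq [elem elements]
      (let [^ints dofs   (:dofs elem)   ; global indices for this element
            ^doubles ke  (:ke elem)     ; flat, row-major local matrix
            m            (alength dofs)]
        (dotimes [i m]
          (let [gi (aget dofs i)]
            (dotimes [j m]
              (aset y gi (+ (aget y gi)
                            (* (aget ke (+ (* i m) j))
                               (aget x (aget dofs j))))))))))
    y)

Since y is reused across CG iterations, the element loop itself
allocates nothing.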

> The second most processor-intensive operation is my linear
> complementarity solver, which has also had all its matrix equations
> expanded to minimize instruction count.

When you say "instruction count" do you mean after the JIT compiler
does its work (as in "x86 instructions") or before (as in "JVM
instructions")?

> Anyway, my Java code makes heavy use of mutable primitives and loops
> to do this. What's the appropriate approach in Clojure for translating
> this sort of code?

For optimizing operations on primitive arithmetic types, check out
http://clojure.org/news (in particular the 02 June 2008 entry).  In
pretty much any dynamically typed language, if your code is
floating-point bound, the most important thing is to include the
right type declarations.
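
As a rough illustration (not a tuned implementation, and the exact
hint syntax depends on the Clojure version), a primitive dot product
might look like this; the array hints and the 0.0 initial accumulator
are what keep the arithmetic unboxed:

  (defn dot ^double [^doubles xs ^doubles ys]
    ;; loop/recur with a primitive accumulator: no boxing, no allocation
    (let [n (alength xs)]
      (loop [i 0, acc 0.0]
        (if (< i n)
          (recur (unchecked-inc i)
                 (+ acc (* (aget xs i) (aget ys i))))
          acc))))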

Do you need to rewrite all your Java (that seems to work well for you)
or are you mainly just curious how a "native Clojure" programmer would
do it?  I've never tried writing performance-oriented numerical code
in a language like Clojure; I use Clojure for other things.  But if I
wanted to, I would first spend some time studying SISAL, as it was one
of the few functional-ish (single-assignment, at least) programming
languages intended for high-performance computing.  (There was also
Parallel Haskell, among others.)  Then I would look at NESL, which is
oriented more around nested data-parallel operations on collections.
Of course these languages were implicitly parallel, but the general
idiom for sequential code should be similar.
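
For instance, here is a sketch of the same sort of kernel written in
a more single-assignment style, using areduce over primitive arrays,
which is roughly the idiom I would expect to carry over from a
language like SISAL:

  (defn dot-sa ^double [^doubles xs ^doubles ys]
    ;; areduce threads a single accumulator through the index loop,
    ;; so there is no visible mutable local
    (areduce xs i acc 0.0
             (+ acc (* (aget xs i) (aget ys i)))))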

mfh

