Yes, that's true: if the problem could be formulated as monolithic
operations on large matrices, then Matlab would be as fast as
anything else. My current Matlab implementation, however, generates
hundreds of 'small' matrices (roughly 32 x 32, complex) and performs nasty
cubic-order operations on them inside a bunch of nested for loops. The
matrices are generated on the fly as a function of some rather large
parameter tables. Unfortunately, the number of these matrices grows
quadratically with the number of nodes in the network I am simulating.
In Matlab, I believe, all structures are dynamic hash tables, and for
loops are not particularly efficient.
My experience has been that Matlab simulations of this type can be made
4-5 times faster when reimplemented (rather painfully) in C.
The thought is that I could get some of the higher-order language
capabilities of Matlab, but in a more powerful setting such as Haskell,
with all of its meta-language constructs, without having to sacrifice
too much performance relative to a raw C implementation.
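As a rough illustration of the workload described above, here is a minimal Haskell sketch: small complex matrices generated on the fly from a node pair, a naive O(n^3) product as the cubic inner step, and a quadratic sweep over node pairs standing in for the nested loops. The generator function and the pairwise combination are placeholders (the real code would read the parameter tables), and it uses only Data.Complex and Data.Array from base; a real implementation would likely use an optimized linear-algebra binding instead of this naive product.

```haskell
import Data.Complex (Complex(..), magnitude)
import Data.Array (Array, listArray, (!))

type CMatrix = Array (Int, Int) (Complex Double)

n :: Int
n = 32  -- matrix dimension (placeholder for the ~32x32 case)

-- Build a matrix on the fly from a node pair; this stands in for a
-- lookup into the large parameter tables (hypothetical generator).
genMatrix :: Int -> Int -> CMatrix
genMatrix i j =
  listArray ((0, 0), (n - 1, n - 1))
    [ fromIntegral (i + r) :+ fromIntegral (j + c)
    | r <- [0 .. n - 1], c <- [0 .. n - 1] ]

-- Naive O(n^3) complex matrix product: the cubic-order inner step.
matMul :: CMatrix -> CMatrix -> CMatrix
matMul a b =
  listArray ((0, 0), (n - 1, n - 1))
    [ sum [ a ! (r, k) * b ! (k, c) | k <- [0 .. n - 1] ]
    | r <- [0 .. n - 1], c <- [0 .. n - 1] ]

-- One sweep over all ordered node pairs: the number of matrices
-- handled grows quadratically with the node count, as described.
sweep :: Int -> Double
sweep nodes =
  sum [ magnitude (matMul (genMatrix i j) (genMatrix j i) ! (0, 0))
      | i <- [0 .. nodes - 1], j <- [0 .. nodes - 1], i /= j ]

main :: IO ()
main = print (sweep 4)
```

Even in this naive form, the structure of the computation (generate, combine, accumulate) is explicit and pure, which is what makes the Haskell formulation attractive compared with hand-rolled C loops.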
Alberto Ruiz wrote:
If matrix operations with very large matrices are the most expensive part of
your application, it is not easy to be much faster than Matlab or Octave,
which already use the optimized ATLAS BLAS and LAPACK. But you can obtain
similar performance with a much better language :)
Which matrix computations do you need?
Alberto
_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe