On Friday, 14 June 2013 18:15:34 UTC+1, Jason Wolfe wrote:

> Hey Mikera, 
>
> I did look at core.matrix awhile ago, but I'll take another look. 
>
> Right now, flop is just trying to make it easy to write *arbitrary* 
> array operations compactly, while minimizing the chance of getting 
> worse-than-Java performance.  This used to be very tricky to get right 
> when flop was developed (against Clojure 1.2); the situation has 
> clearly improved since then, but there still seem to be some 
> subtleties in going fast with arrays in 1.5.1 that we are trying to 
> understand and then automate. 
>
> As I understand it, core.matrix has a much more ambitious goal of 
> abstracting over all matrix types.  This is a great goal, but I'm not 
> sure if the protocol-based implementation can give users any help 
> writing new core operations efficiently (say, making a new array with 
> c[i] = a[i] + b[i]^2 / 2) -- unless there's some clever way of 
> combining protocols with macros (hmmm). 
>

A longer-term objective for core.matrix could be to allow compiling such 
expressions. Our GSoC student Maik Schünemann is exploring how to represent 
and optimise mathematical expressions in Clojure, and in theory these could 
be compiled down to efficient low-level operations. The API could look 
something like this:

;; define an expression
(def my-expression (expression [a b] (+ a (/ (* b b) 2))))

;; compile the expression for the specified matrix implementation A
(def func (compile-expression A my-expression))

;; now computation can be run using the pre-compiled, optimised function
(func A B)

If A is a Java double array, perhaps the flop macros could be the engine 
for generating the compiled function?
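Purely as an illustrative sketch (the API above is hypothetical), the 
compiled function for the double-array case might expand into a primitive 
loop along these lines:

;; hypothetical expansion for double[] inputs: apply
;; (+ a (/ (* b b) 2)) elementwise into a new result array
(defn compiled-fn ^doubles [^doubles a ^doubles b]
  (let [n (alength a)
        c (double-array n)]
    (loop [i 0]
      (if (< i n)
        (let [bi (aget b i)]
          (aset c i (+ (aget a i) (/ (* bi bi) 2.0)))
          (recur (unchecked-inc i)))
        c))))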

> I just benchmarked core.matrix/esum, and on my machine in Clojure 
> 1.5.1 it's 2.69x slower than the Java version above, and 1.62x slower 
> than our current best Clojure version.
>

Great - happy to steal your implementation :-) 
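For reference, this is roughly the shape I'd expect for a fast pure-Clojure 
esum (a sketch, assuming the hot path is a plain double[]; areduce keeps 
the accumulation primitive):

;; sum a double[] without boxing: areduce expands to a primitive loop
(defn fast-esum ^double [^doubles xs]
  (areduce xs i sum 0.0 (+ sum (aget xs i))))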

Other core.matrix implementations are probably faster, BTW: vectorz-clj is 
pure Java and implements esum for the general-purpose Vector type in 
exactly the same way as your fast Java example, and Clatrix executes many 
operations via native BLAS code.
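For example (a sketch, assuming vectorz-clj is on the classpath):

(require '[clojure.core.matrix :as m])

;; make esum dispatch to vectorz-clj's pure-Java implementation
(m/set-current-implementation :vectorz)

(m/esum (m/array [1.0 2.0 3.0]))
;; => 6.0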

> A collaboration sounds interesting -- although right now it seems like 
> our goals are somewhat orthogonal.  What do you think? 
>

Seems like the goals are relatively complementary. I think there are three 
kinds of potential collaboration:
1) Leveraging flop to get the fastest possible implementations for 
core.matrix
2) Using core.matrix to provide higher-level abstractions / operations on 
top of basic arrays
3) Sharing tools / standards / approaches where it makes sense

Clearly a significant area of overlap is that we both want the fastest 
possible operations on double arrays. I'd be delighted if we could leverage 
flop to get the best possible implementations for core.matrix. 

Another potential area of collaboration is the planned NDArray 
implementation: an efficient, pure-Clojure, NumPy-style N-dimensional 
array. Under the hood, this will need to run on Java arrays. Dmitry 
Groshev is our GSoC student who will be working on this.

