Neanderthal seems very cool.  You've clearly put a *lot* of work into 
this.  I, for one, am thankful that you've made available what is, as far 
as I can see, a very nice tool.

I don't think there's necessarily a conflict concerning core.matrix, 
though.  You may not want to write a core.matrix wrapper for Neanderthal.  
There's no reason that you must.  Someone else might want to do that; maybe 
I would be able to help with that project.  In that case, those who wanted 
to use Neanderthal via core.matrix could do so, knowing that they lose out 
on any potential advantages of writing directly to Neanderthal, and those 
who want to use Neanderthal in its original form can still do so.  I don't 
see a conflict.

In my case, I have existing code that uses core.matrix.  I wrote to 
core.matrix in part because I didn't want to have to worry about which 
implementation to write to.  I would love to try my code on Neanderthal, 
but I'm not willing to port it.  That's my problem, not yours, though.
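
To make concrete what "writing to core.matrix" buys you, here is a minimal 
sketch using the public core.matrix API (the matrix values are made-up 
examples).  The point is that the implementation is chosen once, and the 
rest of the code is unchanged:

```clojure
;; A minimal sketch of implementation-agnostic core.matrix code.
;; The matrices here are hypothetical example data.
(require '[clojure.core.matrix :as m])

;; Pick the backing implementation once; swapping it is "just one
;; configuration change" -- for better or worse, as discussed below.
(m/set-current-implementation :vectorz)   ; or :clatrix, etc.

(def a (m/matrix [[1 2] [3 4]]))
(def b (m/matrix [[5 6] [7 8]]))

;; Matrix multiplication, dispatched to whichever implementation
;; is currently selected.
(m/mmul a b)
```

Of course, as Dragan points out, the same property cuts both ways: code 
that is fast on one backing implementation can be pathologically slow on 
another without a single line changing.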

For future projects, I could write to Neanderthal, but I also have to 
consider the possibility that there might be situations in which another 
implementation would be better for my code.  Neanderthal looks great, but 
is it always fastest for every application that uses non-tiny matrices?  
Maybe it would be for anything I would write.  I'd rather not have to 
figure that out.  I'm granting that there could be advantages to using 
Neanderthal in its native form rather than via core.matrix, but for me, 
personally, it would be simpler to use it via core.matrix if that was an 
option.  It's not your responsibility to enable that unless you wanted to 
do so, though.  What you've done is already more than enough, Dragan.


On Friday, June 19, 2015 at 3:57:32 AM UTC-5, Dragan Djuric wrote:
>
> I understand the concept core.matrix tries to achieve, and would be 
> extremely happy if I thought it would be possible, since I would be able to 
> use it and spend time on some other stuff instead of writing C, JNI, OpenCL 
> and such low-level code. 
> Thanks for the pointer to your neural networks experience and benchmark. I 
> have taken a look at the thread you started about that issue, and it 
> clearly shows what (in my opinion) is wrong with core.matrix: it is 
> extremely easy to shoot yourself in the foot with it by (unintentionally) 
> using the backing implementation in the wrong way. And when you need to be 
> specific and exploit the strengths of the implementation, core.matrix gets 
> in your way, making it more difficult in the best case and impossible in 
> the worst. Moreover, the optimizations that you manage to achieve with one 
> implementation often turn out to be hogs with another, with "just one 
> configuration change". 
> For example, if you look at the benchmark on the neanderthal's web site, 
> you'd see that for 512x512 matrices, matrix multiplication is 5x faster 
> with clatrix (jblas) than vectorz. Yet, in your implementation, you managed 
> to turn that 5x speedup into a 1000x slowdown (in the 500x500 case) without 
> even one change in code. Quite impressive on the core.matrix side ;) 
> I do not even claim that a unified API is not possible. I think that to 
> some extent it is. I just doubt core.matrix's eligibility as THE API for 
> numerical computing. For it makes easy things easy and hard things 
> impossible.

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en