> There is a set of BLAS-like API functions in core.matrix already. See: 
> https://github.com/mikera/core.matrix/blob/develop/src/main/clojure/clojure/core/matrix/blas.cljc
>

GitHub history says they were added 7 days ago. Never mind that they just 
delegate, so the only BLAS-y thing about them is the 4 method names taken 
out of Neanderthal (BLAS has a bit more stuff than that), but why did you 
reinvent the wheel instead of just creating a core.matrix (or Vectorz) 
implementation of Neanderthal's API? 
 

> Having said that, I don't personally think the BLAS API is a particularly 
> good fit for Clojure (it depends on mutability, and I think it is a pretty 
> clumsy design by modern API standards). But if you simply want to copy the 
> syntax, it's certainly trivial to do in core.matrix.
>

If you look at Neanderthal's API you'll see that I took great care to make 
it fit Clojure, and I think I succeeded. 
Regarding mutability:
1) Neanderthal provides both mutable and pure functions (see the sketch 
after this list)
2) Trying to do numeric computing without mutability (and primitives) for 
anything other than toy problems is... well, sometimes it is better to plant 
a Sequoia seed, wait for the tree to grow, cut it down, make an abacus, and 
compute with that... 
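
To illustrate 1), here is a minimal sketch of the pure/mutable pairing, 
assuming Neanderthal's uncomplicate.neanderthal.core and 
uncomplicate.neanderthal.native namespaces:

(require '[uncomplicate.neanderthal.core :refer [axpy axpy!]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; Pure: axpy returns a new vector and leaves its arguments untouched.
(def x (dv 1 2 3))
(def y (dv 10 20 30))
(axpy 2.0 x y)   ;=> a new vector (12.0 24.0 36.0)
y                ;=> still (10.0 20.0 30.0)

;; Mutable: axpy! accumulates into y in place, BLAS-style,
;; avoiding allocation in tight numeric loops.
(axpy! 2.0 x y)  ;=> y itself, now (12.0 24.0 36.0)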


> An important point to note is that they don't do the same thing at all: 
> core.matrix is an API providing a general-purpose array programming 
> abstraction with pluggable implementation support. Neanderthal is a 
> specific implementation tied to native BLAS/ATLAS. They should ideally work 
> in harmony, not be seen as alternatives.
>

*Neanderthal has an agnostic API and is not in any way tied to BLAS/ATLAS.*
Neanderthal also has pluggable implementation support - and it already 
provides two high-performance implementations that elegantly unify two very 
different *hardware* platforms: CPU and GPU. And it does so quite 
transparently, as the sketch below illustrates (more about that can be read 
here: http://neanderthal.uncomplicate.org/articles/tutorial_opencl.html)
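
Here is a rough sketch of what I mean by agnostic and transparent; the 
computation code only touches uncomplicate.neanderthal.core, and the 
backend is chosen by whichever factory created the data structures:

(require '[uncomplicate.neanderthal.core :refer [mv!]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; Written only against the agnostic core API -- no mention of
;; BLAS, ATLAS, or any particular backend.
(defn step! [a x y]
  (mv! 1.0 a x 0.0 y))       ; y <- A * x

;; CPU: plug in native structures.
(def a (dge 2 2 [1 2 3 4]))  ; 2x2, column-major
(def x (dv 1 1))
(def y (dv 0 0))
(step! a x y)                ;=> y, now (4.0 6.0)

;; GPU: per the OpenCL tutorial linked above, the same step! can be
;; handed structures created by the OpenCL factory instead, without
;; changing a line of the computation code.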

>
> Neanderthal is more closely comparable to Vectorz, which *is* a matrix 
> implementation (and I think it matches or beats Neanderthal in performance 
> for virtually every operation *apart* from large matrix multiplication for 
> which ATLAS is obviously fantastic). 
>
 
You say that without having tried it. I have tried it, and *Neanderthal is 
faster for virtually ALL operations, even 1D ones*. Yesterday I did a quick 
measurement of asum (a 1D vector operation), for example, and Neanderthal 
was, if I remember correctly, *9x faster than Vectorz at that simple 
summation*.

I already pointed out to you that Neanderthal is faster in all those cases 
the last time you raised that argument, but you seem to have ignored it.
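
For reference, this is the kind of quick measurement I mean - a sketch 
using Criterium (assuming vectorz-clj is on the classpath and that dv 
accepts a sequence source); the exact ratio will of course depend on the 
machine and the vector size:

(require '[criterium.core :refer [quick-bench]]
         '[clojure.core.matrix :as m]
         '[uncomplicate.neanderthal.core :refer [asum]]
         '[uncomplicate.neanderthal.native :refer [dv]])

(m/set-current-implementation :vectorz)

(def data (repeatedly 100000 rand))  ; non-negative, so plain sum == asum

(def nv (dv data))        ; Neanderthal double vector
(def vv (m/array data))   ; Vectorz-backed array

;; Neanderthal: asum over a primitive double buffer.
(quick-bench (asum nv))

;; Vectorz via core.matrix: esum (element sum; the data is
;; non-negative, so it is comparable to asum here).
(quick-bench (m/esum vv))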
 

> If anyone has other ideas / a better strategy I'd love to hear, and I 
> welcome a good constructive debate. I'm not precious about any of my own 
> contributions. But I do genuinely think this is the best way forward for 
> Clojure data science overall, based on where we are right now.
>

I would like to propose a strategy where more love is given to the actual 
libraries that solve actual problems (Incanter is rather indisposed and 
stagnant, IMO), instead of trying to unify what does not exist (yet!). Then 
people will use what works best, and what does not work will not matter. 
That's how things go in open source...
 
