Using the GPU requires native code that runs on the device.  Standard linear
algebra libraries exist for this, so if the API can express a standard
linear algebra routine concisely, then the GPU can be used.  General Java
code usually can't be executed on a GPU.

There is some late-breaking news on this front, but the way to get
performance is generally to recognize standard idioms that have accelerated
implementations.

In Mahout, for instance, we can recognize many linear algebra operations
from idiomatic use of the visitor pattern.  In this code,

      Vector u, v;
      v.assign(u, Functions.PLUS);   // element-wise v = v + u

Mahout will recognize the call to assign as a vector addition.  This is easy
for vector operations, but it is much harder to recognize matrix operations
expressed with simple visitor patterns.
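
To make that concrete, here is a minimal, self-contained sketch of what the
recognition step might look like.  Only Vector, DenseVector, Functions.PLUS
and assign are real Mahout API; the class name, acceleratedAdd and
assignRecognized are purely illustrative stand-ins for whatever accelerated
kernel and dispatch logic are actually available, not Mahout's actual
implementation.

    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.Vector;
    import org.apache.mahout.math.function.DoubleDoubleFunction;
    import org.apache.mahout.math.function.Functions;

    public class IdiomRecognitionSketch {

      // Stand-in for an accelerated (BLAS/GPU) kernel; hypothetical, written
      // as a plain loop here so the example runs on its own.
      static void acceleratedAdd(Vector v, Vector u) {
        for (int i = 0; i < v.size(); i++) {
          v.setQuick(i, v.getQuick(i) + u.getQuick(i));
        }
      }

      // The recognition step: a known idiom (Functions.PLUS) is dispatched
      // to the fast kernel; anything else falls back to element-wise assign.
      static Vector assignRecognized(Vector v, Vector u, DoubleDoubleFunction f) {
        if (f == Functions.PLUS) {
          acceleratedAdd(v, u);
          return v;
        }
        return v.assign(u, f);
      }

      public static void main(String[] args) {
        Vector u = new DenseVector(new double[] {1, 2, 3});
        Vector v = new DenseVector(new double[] {4, 5, 6});
        assignRecognized(v, u, Functions.PLUS);  // recognized as v = v + u
        System.out.println(v);
      }
    }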



On Sun, Dec 30, 2012 at 11:26 AM, Sébastien Brisard <
sebastien.bris...@m4x.org> wrote:

> > and hence preclude vector based process operations, such as you would
> > find on a GPU. So if the user wanted to speed up the computation using a
> > GPU they would not be able to do it, if we base it on a single element
> > at a time visitor pattern.
> >
> >
> I fail to see how the GPU could not be used. I am no expert on GPU
> programming, but I can easily imagine a new implementation of RealVector,
> say GpuBasedRealVector, where the walkInDefaultOrder method would send
> multiple values at a time to the GPU. I've already done that for multi-core
> machines (using fork/join), and the visitor pattern was certainly not a
> limitation.
>
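
As a concrete version of what Sébastien describes, a minimal sketch might
look like the class below.  BlockWalkRealVector and BLOCK_SIZE are
illustrative names only (no such class exists in [math]); ArrayRealVector,
RealVectorChangingVisitor and walkInDefaultOrder are the real API.  The only
point is that the vector implementation owns the traversal, so it can stage
whole blocks of entries (in a device buffer, or in a fork/join task) before
answering the per-element visits.

    import org.apache.commons.math3.linear.ArrayRealVector;
    import org.apache.commons.math3.linear.RealVectorChangingVisitor;

    // Illustrative only: the implementation controls the walk, so it can
    // stage blocks of entries (here in a plain array, on a GPU in the
    // scenario above) while still honouring the visitor contract.
    public class BlockWalkRealVector extends ArrayRealVector {

      private static final int BLOCK_SIZE = 1024;  // illustrative batch size

      public BlockWalkRealVector(double[] data) {
        super(data);
      }

      @Override
      public double walkInDefaultOrder(RealVectorChangingVisitor visitor) {
        int n = getDimension();
        visitor.start(n, 0, n - 1);
        double[] block = new double[BLOCK_SIZE];
        for (int base = 0; base < n; base += BLOCK_SIZE) {
          int len = Math.min(BLOCK_SIZE, n - base);
          for (int i = 0; i < len; i++) {
            block[i] = getEntry(base + i);                 // upload a block here
          }
          for (int i = 0; i < len; i++) {
            block[i] = visitor.visit(base + i, block[i]);  // per-element visits
          }
          for (int i = 0; i < len; i++) {
            setEntry(base + i, block[i]);                  // download results here
          }
        }
        return visitor.end();
      }
    }

A fork/join variant would hand each staged block to a worker task in the
same spirit, subject to the visitor being safe to call from multiple threads.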
