I created a number of JIRAs that would make sense to implement
initially and attached them to
https://issues.apache.org/jira/browse/ARROW-1055. The first goal is
to make it simple to transmit a sequence of Arrow record batches that
live on the GPU from one process to another with zero-copy IPC.
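For concreteness, a minimal sketch of that flow, assuming a Python API
along the lines of pyarrow.cuda (Context, export_for_ipc,
open_ipc_buffer); producer and consumer are meant to run in separate
processes:

    import pyarrow as pa
    from pyarrow import cuda

    def producer():
        # Copy a small record batch into device memory on GPU 0 and
        # export a CUDA IPC handle for it.
        ctx = cuda.Context(0)
        batch = pa.record_batch([pa.array([1, 2, 3])], names=["x"])
        dev_buf = cuda.serialize_record_batch(batch, ctx)
        handle = dev_buf.export_for_ipc()
        # The serialized handle is tiny; ship it over any socket/pipe.
        return handle.serialize().to_pybytes(), batch.schema

    def consumer(payload, schema):
        # Reopen the same device memory in another process: only the
        # small handle crossed the process boundary, not the data.
        ctx = cuda.Context(0)
        handle = cuda.IpcMemHandle.from_buffer(pa.py_buffer(payload))
        dev_buf = ctx.open_ipc_buffer(handle)
        return cuda.read_record_batch(dev_buf, schema)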
To motivate the use case: the folks in GOAI are building applications
with multiple components that interact with the GPU.
For example, MapD (a GPU database) allocates GPU memory and hands it
off to Python; Python can then decref the object and cudaFree the
device memory. Perhaps Python then uses cudaMalloc and wishes ...
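As a sketch of what that hand-off could look like (the numba
allocation below stands in for memory MapD would have allocated;
foreign_buffer and its base= keep-alive argument are pyarrow.cuda
calls, but treat the exact API as an assumption):

    from pyarrow import cuda
    import numba.cuda

    ctx = cuda.Context(0)
    # Stand-in for an allocation handed to us by MapD: all we really
    # need is a device pointer, a size, and an owner to keep alive.
    arr = numba.cuda.device_array(1024, dtype="u1")
    ptr, size = arr.device_ctypes_pointer.value, arr.nbytes
    # foreign_buffer views the memory without copying; base=arr ties
    # the allocation's lifetime to Python refcounting, so dropping the
    # last reference is the "decref then cudaFree" described above.
    buf = ctx.foreign_buffer(ptr, size, base=arr)
    del buf, arr  # once all references drop, the device memory is freed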
That makes a lot of sense. In some contexts it could be useful to run
multiple Plasma stores per machine (possibly one per device or per
NUMA zone), though that could make it slightly harder to take
advantage of faster GPU-to-GPU communication.
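Something like the following, assuming one plasma_store process per
device or NUMA zone (socket paths and sizes here are made up):

    # Started out-of-band, one store per device/NUMA zone, e.g.:
    #   plasma_store -m 1000000000 -s /tmp/plasma-gpu0
    #   plasma_store -m 1000000000 -s /tmp/plasma-gpu1
    import pyarrow.plasma as plasma

    clients = {
        0: plasma.connect("/tmp/plasma-gpu0"),
        1: plasma.connect("/tmp/plasma-gpu1"),
    }
    oid = plasma.ObjectID(20 * b"\x00")
    buf = clients[0].create(oid, 1 << 20)  # reserve 1 MiB in store 0
    clients[0].seal(oid)  # now visible to that store's other clients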
On Wed, Aug 16, 2017 at 2:01 PM Philip ...
One observation here is that, as far as I know, shared memory is not
typically used between multiple GPUs, and on a single GPU there is
already a unified shared address space that each CUDA thread can
access. One reasonable extension of the APIs and facilities, given
these limitations, would be the following ...
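To make the single- vs multi-GPU distinction concrete, here is a small
check of which device pairs can address each other's memory directly
(peer access); it uses NVIDIA's cuda-python runtime bindings, which is
my assumption for the simplest way to reach the runtime API from
Python:

    from cuda import cudart

    err, n = cudart.cudaGetDeviceCount()
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Peer access is what enables direct GPU-to-GPU transfers
            # without staging through host memory.
            err, ok = cudart.cudaDeviceCanAccessPeer(a, b)
            print(f"GPU {a} -> GPU {b}: peer access {'yes' if ok else 'no'}")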
One idea is whether the Plasma object store could be extended to
support memory other than POSIX shared memory, such as GPU device
memory (or multiple GPUs on a single host).
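Purely as a strawman, the client API might barely change: a
hypothetical device_num argument (not an existing pyarrow.plasma
signature) could ask the store to place the object in device memory
and broker CUDA IPC handles to readers instead of mmapped file
descriptors:

    import pyarrow.plasma as plasma

    client = plasma.connect("/tmp/plasma")
    oid = plasma.ObjectID(b"\x01" * 20)
    # Hypothetical: device_num=1 asks the store to allocate the object
    # on GPU 1; readers would receive a CUDA IPC handle on get().
    gpu_buf = client.create(oid, 1 << 20, device_num=1)
    client.seal(oid)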
Philipp or Robert, or any of the people who know the Plasma code
best: any idea how this might be approached? It would have to ...
hi all,
A group of companies have created a project called the GPU Open
Analytics Initiative (GOAI), with the purpose of creating open source
software and specifications for analytics on GPUs.
So far, they have focused on building a "GPU Data Frame", which is
effectively putting Arrow data on the GPU.