Hi Junchao,
First, for your second question, the answer is here:
https://www.mail-archive.com/users@lists.open-mpi.org/msg33279.html. I know
this because I also asked it earlier 😊 It'd be nice to have this documented in
the Q&A though.
As for your first question, I am also interested. It'd [...]cess to both. Moreover, as you mention, your code will not be portable anymore.
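For what it's worth, with CUDA-aware Open MPI only the data buffers (sendbuf/recvbuf) are documented to accept device pointers; the counts and displs arrays are read by the library on the CPU. A conservative sketch (untested here; it assumes a CUDA-aware Open MPI build, CUDA headers, and is compiled with something like `mpicc ... -lcudart`) therefore keeps counts/displs in ordinary host memory:

```c
/* Hedged sketch: MPI_Scatterv with device-resident data buffers but
 * host-resident counts/displs. Assumes a CUDA-aware Open MPI build. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 4;  /* elements per rank (uniform, for simplicity) */

    /* counts/displs stay in host memory: the MPI library dereferences
     * them on the CPU, so device pointers here would not be valid. */
    int *counts = NULL, *displs = NULL;
    double *d_sendbuf = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        for (int i = 0; i < size; ++i) {
            counts[i] = chunk;
            displs[i] = i * chunk;
        }
        cudaMalloc((void **)&d_sendbuf, (size_t)size * chunk * sizeof(double));
        /* ... fill d_sendbuf on the device ... */
    }

    /* Only the data buffers may be device pointers. */
    double *d_recvbuf;
    cudaMalloc((void **)&d_recvbuf, chunk * sizeof(double));

    MPI_Scatterv(d_sendbuf, counts, displs, MPI_DOUBLE,
                 d_recvbuf, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    cudaFree(d_recvbuf);
    if (rank == 0) {
        cudaFree(d_sendbuf);
        free(counts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}
```

Note this is exactly the portability concern above: a plain (non-CUDA-aware) MPI would require staging the data buffers through host memory as well.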
George.
On Tue, Jun 11, 2019 at 11:27 AM Fang, Leo via users
<users@lists.open-mpi.org> wrote:
Hello,
I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can be
pointers to GPU memory. I wonder whether or not the "displs" argument of the
collective calls on variable data (Scatterv/Gatherv/etc.) can also live on the
GPU. CUDA awareness isn't part of the MPI standard (yet), [...]