Leo,

In a UMA system, having the displacements and/or recvcounts arrays in managed
GPU memory should work, but it will incur overhead for at least two reasons:
1. the MPI API arguments are checked for correctness (here recvcounts), and
2. the part of the collective algorithm that executes on the CPU uses the
displacements and recvcounts to issue and manage communications, so it
needs access to both.

Moreover, as you mention, your code will no longer be portable.
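
For what it's worth, here is a minimal sketch of the pattern I would suggest
instead (my own illustration, not taken from your code): with a CUDA-aware
Open MPI build, the data buffers are device pointers while recvcounts and
displs stay in ordinary host memory. Buffer names and sizes are arbitrary.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 4;                       /* elements contributed per rank */
    double *d_send = NULL, *d_recv = NULL; /* device (GPU) buffers */
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));
    if (rank == 0)
        cudaMalloc((void **)&d_recv, n * size * sizeof(double));

    /* recvcounts and displs are plain host arrays: the CPU-side part of
     * the collective reads them directly, with no device access needed. */
    int *recvcounts = NULL, *displs = NULL;
    if (rank == 0) {
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = n;
            displs[i]     = i * n;
        }
    }

    /* sendbuf/recvbuf are GPU pointers (handled by the CUDA-aware path);
     * recvcounts/displs are host pointers. */
    MPI_Gatherv(d_send, n, MPI_DOUBLE,
                d_recv, recvcounts, displs, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    cudaFree(d_send);
    if (rank == 0) {
        cudaFree(d_recv);
        free(recvcounts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}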

  George.


On Tue, Jun 11, 2019 at 11:27 AM Fang, Leo via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
>
> I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can
> be pointers to GPU memory. I wonder whether or not the “displs” argument of
> the collective calls on variable data (Scatterv/Gatherv/etc.) can also live
> on GPU. CUDA awareness isn’t part of the MPI standard (yet), so I suppose
> it’s worth asking or even documenting.
>
> Thank you.
>
>
> Sincerely,
> Leo
>
> ---
> Yao-Lung Leo Fang
> Assistant Computational Scientist
> Computational Science Initiative
> Brookhaven National Laboratory
> Bldg. 725, Room 2-169
> P.O. Box 5000, Upton, NY 11973-5000
> Office: (631) 344-3265
> Email: leof...@bnl.gov
> Website: https://leofang.github.io/
>