Hello,

I understand that once Open MPI is built with CUDA support, the sendbuf/recvbuf
arguments can be pointers to GPU memory. I am wondering whether the "displs"
argument of the variable-data collectives (MPI_Scatterv, MPI_Gatherv, etc.) can
also live in GPU memory. Since CUDA awareness is not (yet) part of the MPI
standard, I suppose this is worth asking, or even documenting.
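For concreteness, below is a minimal sketch of the situation I have in mind
(assuming a CUDA-aware Open MPI build and one GPU per rank; error checking
omitted). The data buffers are device pointers, while counts/displs are kept
on the host as usual; whether displs could instead be a device pointer is
exactly what I am asking.

/* Minimal sketch: device data buffers with host-resident counts/displs.
 * Assumes a CUDA-aware Open MPI build; error checking omitted. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1024;

    /* counts/displs live on the host (the conventional, standard usage);
     * they are only significant at the root. */
    int *counts = NULL, *displs = NULL;
    double *d_sendbuf = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        for (int i = 0; i < size; ++i) {
            counts[i] = chunk;
            displs[i] = i * chunk;
        }
        cudaMalloc((void **)&d_sendbuf, (size_t)size * chunk * sizeof(double));
    }

    double *d_recvbuf;
    cudaMalloc((void **)&d_recvbuf, chunk * sizeof(double));

    /* The device buffers are accepted by CUDA-aware Open MPI; the open
     * question is whether displs could also point to GPU memory. */
    MPI_Scatterv(d_sendbuf, counts, displs, MPI_DOUBLE,
                 d_recvbuf, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    cudaFree(d_recvbuf);
    if (rank == 0) {
        cudaFree(d_sendbuf);
        free(counts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}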

Thank you.


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/
