Dear George,

Thank you very much for your quick and clear explanation. I will take your 
advice as performance guidance :)


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/

On June 11, 2019, at 11:49 AM, George Bosilca <bosi...@icl.utk.edu> wrote:

Leo,

On a UMA system, having the displacements and/or recvcounts arrays in managed 
GPU memory should work, but it will incur overhead for at least two reasons:
1. the MPI API arguments are checked for correctness (here, recvcounts);
2. the part of the collective algorithm that executes on the CPU uses the 
displacements and recvcounts to issue and manage communications, and 
therefore needs access to both.

Moreover, as you mention, your code would no longer be portable.
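
To make that concrete, here is a minimal sketch of the portable pattern 
(hypothetical code, assuming a CUDA-aware Open MPI build): the data buffers 
live on the device, while recvcounts/displs stay in ordinary host memory, 
where the CPU-side part of the algorithm reads them:

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mycount = rank + 1;                 /* variable count per rank */

    /* Data buffers on the GPU: fine with a CUDA-aware build. */
    double *d_send, *d_recv = NULL;
    cudaMalloc((void **)&d_send, mycount * sizeof(double));
    cudaMemset(d_send, 0, mycount * sizeof(double));

    /* Counts and displacements in plain host memory. */
    int *recvcounts = NULL, *displs = NULL, total = 0;
    if (rank == 0) {
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = i + 1;
            displs[i]     = total;
            total        += i + 1;
        }
        cudaMalloc((void **)&d_recv, total * sizeof(double));
    }

    MPI_Gatherv(d_send, mycount, MPI_DOUBLE,
                d_recv, recvcounts, displs, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    if (rank == 0) { free(recvcounts); free(displs); cudaFree(d_recv); }
    cudaFree(d_send);
    MPI_Finalize();
    return 0;
}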

  George.


On Tue, Jun 11, 2019 at 11:27 AM Fang, Leo via users 
<users@lists.open-mpi.org> wrote:
Hello,


I understand that once Open MPI is built against CUDA, sendbuf/recvbuf can be 
pointers to GPU memory. I wonder whether the "displs" argument of the 
variable-data collective calls (Scatterv/Gatherv/etc.) can also live on the 
GPU. CUDA awareness isn't part of the MPI standard (yet), so I suppose it's 
worth asking or even documenting.
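
For concreteness, here is a minimal sketch of the scenario I have in mind 
(hypothetical code, assuming a CUDA-aware build): the data buffers are on 
the device, and the counts/displacements come from cudaMallocManaged, so 
the same pointers are dereferenceable from both host and device:

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Managed allocations: valid on both host and device. */
    int *recvcounts, *displs, total = 0;
    cudaMallocManaged((void **)&recvcounts, size * sizeof(int),
                      cudaMemAttachGlobal);
    cudaMallocManaged((void **)&displs, size * sizeof(int),
                      cudaMemAttachGlobal);
    for (int i = 0; i < size; i++) {
        recvcounts[i] = i + 1;
        displs[i]     = total;
        total        += i + 1;
    }

    double *d_send, *d_recv;
    cudaMalloc((void **)&d_send, (rank + 1) * sizeof(double));
    cudaMemset(d_send, 0, (rank + 1) * sizeof(double));
    cudaMalloc((void **)&d_recv, total * sizeof(double));

    /* Is it legal for recvcounts/displs to be managed (GPU-visible)
     * memory here? That is the question. */
    MPI_Allgatherv(d_send, rank + 1, MPI_DOUBLE,
                   d_recv, recvcounts, displs, MPI_DOUBLE,
                   MPI_COMM_WORLD);

    cudaFree(d_send); cudaFree(d_recv);
    cudaFree(recvcounts); cudaFree(displs);
    MPI_Finalize();
    return 0;
}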

Thank you.


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
