Can you please provide more details on your config, how the tests are performed, and the results?

To be fair, you should only compare cases in which the MPI tasks are bound to the same sockets.

For example, if socket0 has cores [0-7] and socket1 has cores [8-15],

it is fair to compare {task0, task1} bound on

{0,8}, {[0-1],[8-9]}, or {[0-7],[8-15]},

but it is unfair to compare

{0,1} with {0,8} or {[0-7],[8-15]},

since {0,1} does not involve any traffic over the QPI link, whereas {0,8} does.
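
If it helps, here is a minimal sketch (assuming Linux, so sched_getaffinity is available) that makes each rank print the cores it is actually bound to. mpirun's --report-bindings reports the same thing, but this shows what the process itself sees, so you can confirm whether a given pair of ranks shares a socket or crosses the QPI:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Query the affinity mask of the calling process (pid 0 = self). */
    cpu_set_t mask;
    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);

    /* Build a space-separated list of the cores in the mask. */
    char cores[1024] = "";
    for (int c = 0; c < CPU_SETSIZE; c++) {
        if (CPU_ISSET(c, &mask)) {
            char tmp[16];
            snprintf(tmp, sizeof(tmp), "%d ", c);
            strncat(cores, tmp, sizeof(cores) - strlen(cores) - 1);
        }
    }

    printf("rank %d is bound to cores: %s\n", rank, cores);

    MPI_Finalize();
    return 0;
}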

Also, depending on the BTL you are using, communication might or might not involve an additional "helper" thread. If your task is bound to a single core, and assuming there is no SMT, the task and the helper have to time-share that core; but if the task is bound to more than one core, the task and the helper can run in parallel.
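
For what it is worth, a simple round-trip microbenchmark like the sketch below, run once per binding (e.g. mpirun -np 2 --bind-to core ... versus a wider binding via --cpu-set), should make the effect easy to measure. The message size and iteration count here are arbitrary choices, not taken from your setup:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    const int count = 1 << 20;          /* 1 MiB per message (arbitrary) */
    char *buf = malloc(count);

    /* Rank 0 and rank 1 exchange a message back and forth and time it. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip: %.2f us\n", (t1 - t0) / iters * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}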


Cheers,

Gilles
On 6/23/2016 1:21 PM, Saliya Ekanayake wrote:
Hi,

I am trying to understand this peculiar behavior where the communication time in Open MPI changes depending on the number of processing elements (cores) a process is bound to.

Is this expected?

Thank you,
saliya

--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington


