Thanks and sorry, had a feeling I was missing something obvious!


On 08/22/2013 02:56 PM, Ralph Castain wrote:
You need to tell mpirun that your system doesn't have homogeneous nodes:

    --hetero-nodes        Nodes in cluster may differ in topology, so send
                          the topology back from each node [Default = false]
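
For example, a launch along these lines should then report full-socket bindings
on both node types (the process count and executable name here are only
placeholders, and the exact option spelling can vary a bit across 1.7.x
releases):

    mpirun --hetero-nodes --bind-to-socket --report-bindings -np 8 ./your_app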


On Aug 22, 2013, at 2:48 PM, Noah Knowles <nknow...@usgs.gov> wrote:

Hi, newbie here, so sorry if this is a dumb question, but I haven't found an
answer. I am running Open MPI 1.7.2 on a small Rocks 6.1 BladeCenter H cluster,
and I am using the bind-to-socket option on nodes that have different numbers
of cores per socket. In the sample output below, compute-0-2 has two 6-core
sockets and compute-0-3 has two 8-core sockets.
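
The launch looks roughly like this (the process count and executable name are
simplified here):

    mpirun -np 8 --bind-to-socket --report-bindings ./my_model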

[1,4]<stderr>:[compute-0-2.local:03268] MCW rank 4 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]], socket 0[core 4[hwt 0]], socket 0[core 5[hwt 0]]: [B/B/B/B/B/B][./././././.]
[1,5]<stderr>:[compute-0-2.local:03268] MCW rank 5 bound to socket 1[core 6[hwt 0]], socket 1[core 7[hwt 0]], socket 1[core 8[hwt 0]], socket 1[core 9[hwt 0]], socket 1[core 10[hwt 0]], socket 1[core 11[hwt 0]]: [./././././.][B/B/B/B/B/B]
[1,6]<stderr>:[compute-0-3.local:03816] MCW rank 6 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]], socket 0[core 4[hwt 0]], socket 0[core 5[hwt 0]]: [B/B/B/B/B/B/./.][./././././././.]
[1,7]<stderr>:[compute-0-3.local:03816] MCW rank 7 bound to socket 0[core 6[hwt 0]], socket 0[core 7[hwt 0]], socket 1[core 8[hwt 0]], socket 1[core 9[hwt 0]], socket 1[core 10[hwt 0]], socket 1[core 11[hwt 0]]: [././././././B/B][B/B/B/B/./././.]

Is this behavior intended? Is there any way to make bind-to-socket use all the
cores on a socket for both the 6-core and the 8-core nodes? Or at least to keep
that last binding from being spread across cores on two sockets?
I've tried a rankfile too, but ran into errors -- that should probably be a
separate thread, though.
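
For reference, the rankfile I was attempting looked roughly like this (hostnames
are the two nodes above, rank numbers simplified, one rank per socket using the
socket:core-range form; the syntax is my best reading of the mpirun man page and
may well be where my errors come from), launched with mpirun -rf myrankfile:

    rank 0=compute-0-2 slot=0:0-5
    rank 1=compute-0-2 slot=1:0-5
    rank 2=compute-0-3 slot=0:0-7
    rank 3=compute-0-3 slot=1:0-7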

Thanks,
Noah
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users