On 10/20/2015 04:14 PM, Ralph Castain wrote:
On Oct 20, 2015, at 5:47 AM, Daniel Letai <d...@letai.org.il> wrote:
Thanks for the reply,
On 10/13/2015 04:04 PM, Ralph Castain wrote:
On Oct 12, 2015, at 6:10 AM, Daniel Letai <d...@letai.org.il> wrote:
Hi,
After upgrading to 1.8.8 I can no longer see the map. Looking at the
man page for mpirun, --display-map is no longer listed. Is there a way
to show the map in 1.8.8?
I don’t know why/how it got dropped from the man page, but the
display-map option certainly still exists - do “mpirun -h” to see
the full list of options, and you’ll see it is there. I’ll ensure it
gets restored to the man page in the 1.10 series as the 1.8 series
is complete.
Thanks for clarifying,
Another issue - I'd like to map 2 processes per node, 1 to each socket.
What is the current "correct" syntax? --map-by ppr:2:node doesn't
guarantee 1 per socket, and --map-by ppr:1:socket doesn't guarantee 2
per node. I assume it's something obvious, but the documentation is
somewhat lacking.
I'd like to know the general syntax - even if I have 4-socket nodes
I'd still like to map only 2 procs per node.
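For example, the two forms I tried look like this (a sketch; ./a.out
is just a placeholder executable):

    mpirun --map-by ppr:2:node ./a.out    # 2 per node, but not necessarily 1 per socket
    mpirun --map-by ppr:1:socket ./a.out  # 1 per socket, so 4 per node on a 4-socket machine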
That’s a tough one. I’m not sure there is a way to do that right
now. Probably something we’d have to add. Out of curiosity, if you
have 4 sockets and only 2 procs, would you want each proc bound to 2
of the 4 sockets? Or are you expecting them to be bound to only 1
socket (thus leaving 2 sockets idle), or simply left unbound?
I have 2 PCI devices (GPUs) per node. I need 1 proc per socket, bound
to that socket, to "talk" to its respective GPU, so no matter how many
sockets I have, I must distribute the procs 2 per node, each bound to
its own socket (actually, each to its own NUMA domain).
So I expect them to be "bound to only 1 socket (thus leaving 2
sockets idle)".
Are the GPUs always near the same sockets on every node? If so, you
might be able to use the cpu-set option to restrict us to those
sockets, and then just "--map-by ppr:2:node --bind-to socket"
-cpu-set|--cpu-set <arg0>
Comma-separated list of ranges specifying logical
cpus allocated to this job [default: none]
I believe this should solve the issue. So the cmdline should be
something like:
mpirun --map-by ppr:2:node --bind-to socket --cpu-set 0,2
BTW, --cpu-set is also absent from the man page.
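(A sketch only, assuming logical cpus 0 and 2 sit on the two sockets
in question; adding --report-bindings and --display-map should confirm
the resulting placement, with ./a.out as a placeholder executable:)

    mpirun --map-by ppr:2:node --bind-to socket --cpu-set 0,2 \
           --report-bindings --display-map ./a.out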
I might run other jobs on the idle sockets (depending on mem
utilization) but that's not an immediate concern at this time.
Combining this with NUMA / distance-to-HCA / distance-to-GPU mapping would be very helpful too.
Definitely no way to do this one today.
Thanks,