You want "-bind-to socket -slot-list=0,2,4,6". Or, if you want each process bound to a single core on the socket, change "socket" to "core" in the above.
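For your case (8 ranks, 4 per node, socket 0 only), the full command line might look something like the sketch below. This is untested and assumes the 1.8-series option names; ./a.out just stands in for your actual application:

   mpirun -n 8 -npernode 4 -bind-to socket -slot-list=0,2,4,6 -report-bindings ./a.out

The -report-bindings flag is only there so you can confirm that every rank really landed on socket 0.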
As for the host/rankfile question: we do indeed support just asking for "the next empty node" in the syntax, though it depends on the OMPI version you are using (it's in the 1.8 series, but not 1.6, IIRC). There is a short rankfile sketch at the bottom of this message.

> On Dec 6, 2015, at 9:07 AM, Carl Ponder <cpon...@nvidia.com> wrote:
>
> I'm trying to run a multi-node job, but I want to map all of the processes to cores on socket #0 only.
> I'm having a hard time figuring out how to do this; the obvious combinations
>
> mpirun -n 8 -npernode 4 -report-bindings ...
> mpirun -n 8 -npernode 4 --map-by core -report-bindings ...
> mpirun -n 8 -npernode 4 -cpu-set S0 -report-bindings ...
> mpirun -n 8 --map-by ppr:4:node,ppr:4:socket -report-bindings ...
> mpirun -n 8 -npernode 4 -bind-to slot=0:0,2,4,6 -report-bindings ...
> mpirun -n 8 -npernode 4 -bind-to slot=0:0,0:2,0:4,0:6 -report-bindings ...
> mpirun -n 8 -npernode 4 --map-by core:PE=2 -bind-to core -report-bindings ...
>
> all are reported as having conflicting resource requirements.
> Is there a way to specify this on the command line?
>
> I've looked at the docs on hostfiles & rankfiles, and it looks like they require me to hard-code the names of all the nodes I'm using.
> To me, this doesn't make sense on modern clusters; why don't they just associate an index with each assigned node?
> That way the mapping files would be agnostic of the actual node names.
>
> Thanks,
>
> Carl
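Here is a rough sketch of what that relative indexing looks like in a rankfile, for the 8-rank / 4-per-node / socket-0 case above. This is from memory and untested; it assumes the "+n<index>" relative-node syntax and the "slot=<socket>:<core>" form described in the 1.8-series mpirun man page, and "myrankfile" / "./a.out" are just placeholder names:

   rank 0=+n0 slot=0:0
   rank 1=+n0 slot=0:2
   rank 2=+n0 slot=0:4
   rank 3=+n0 slot=0:6
   rank 4=+n1 slot=0:0
   rank 5=+n1 slot=0:2
   rank 6=+n1 slot=0:4
   rank 7=+n1 slot=0:6

launched with something like:

   mpirun -n 8 -rf myrankfile -report-bindings ./a.out

Here "+n0" and "+n1" mean "the first node" and "the second node" in whatever allocation you happen to get, so the file never needs actual hostnames, and "slot=0:2" means socket 0, core 2.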