> On Mar 25, 2016, at 9:40 AM, Ronald Cohen <recoh...@gmail.com> wrote:
> 
> Thank you! I will try it!
> 
> 
> What would
> -cpus-per-proc  4 -n 16
> do?

This would bind each process to 4 cores, filling each node with procs until the
cores on that node were exhausted, for a total of 16 processes within the
allocation.
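To make the two placements concrete, here is a minimal Python sketch (plain Python, not Open MPI code). The node count, cores-per-node, and the two orderings shown are illustrative assumptions; your cluster's values will differ.

```python
# Assumed allocation for illustration only.
cores_per_node = 16          # assumption; adjust for your hardware
cpus_per_proc = 4
nnodes = 4
nprocs = 16

# "-cpus-per-proc 4 -n 16": each proc is bound to 4 cores, so a node
# fills up after cores_per_node // cpus_per_proc procs, and only then
# does mapping move on to the next node.
procs_per_node = cores_per_node // cpus_per_proc
fill_by_node = [rank // procs_per_node for rank in range(nprocs)]

# Round-robin ordering (the behavior the original ppr:4:node run
# produced): consecutive ranks land on successive nodes.
round_robin = [rank % nnodes for rank in range(nprocs)]

print(fill_by_node)   # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
print(round_robin)    # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]
```

Each list gives the node index hosting each rank; the first keeps consecutive ranks together on a node, which is why a communication-heavy code can run noticeably faster with it.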

> 
> Ron
> ---
> Ron Cohen
> recoh...@gmail.com
> skypename: ronaldcohen
> twitter: @recohen3
> 
> 
> On Fri, Mar 25, 2016 at 12:38 PM, Ralph Castain <r...@open-mpi.org> wrote:
>> Add -rank-by node to your cmd line. You’ll still get 4 procs/node, but they 
>> will be ranked by node instead of consecutively within a node.
>> 
>> 
>> 
>>> On Mar 25, 2016, at 9:30 AM, Ronald Cohen <recoh...@gmail.com> wrote:
>>> 
>>> I am using
>>> 
>>> mpirun --map-by ppr:4:node -n 16
>>> 
>>> and this loads the processes in round robin fashion. This seems to be
>>> twice as slow for my code as loading them node by node, 4 processes
>>> per node.
>>> 
>>> How can I load them node by node instead of round robin?
>>> 
>>> Thanks!
>>> 
>>> Ron
>>> 
>>> 
>>> ---
>>> Ronald Cohen
>>> Geophysical Laboratory
>>> Carnegie Institution
>>> 5251 Broad Branch Rd., N.W.
>>> Washington, D.C. 20015
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post: 
>>> http://www.open-mpi.org/community/lists/users/2016/03/28828.php
>> 
