It is very strange, but my program runs slower with any of these
choices than if I simply use:

mpirun  -n 16
with
#PBS -l 
nodes=n013.cluster.com:ppn=4+n014.cluster.com:ppn=4+n015.cluster.com:ppn=4+n016.cluster.com:ppn=4
for example.

The timing for the explicit host-list version above is 165 seconds, whereas for
#PBS -l nodes=4:ppn=16,pmem=1gb
mpirun  --map-by ppr:4:node -n 16
it is 368 seconds.
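
For concreteness, the two complete setups look roughly like this
(sketch only; a.out stands in for my actual executable):

# faster run (~165 s): explicit host list, 4 slots per node
#PBS -l nodes=n013.cluster.com:ppn=4+n014.cluster.com:ppn=4+n015.cluster.com:ppn=4+n016.cluster.com:ppn=4
mpirun -n 16 ./a.out

# slower run (~368 s): generic node request, mapped 4 per node
#PBS -l nodes=4:ppn=16,pmem=1gb
mpirun --map-by ppr:4:node -n 16 ./a.out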

Ron

---
Ron Cohen
recoh...@gmail.com
skypename: ronaldcohen
twitter: @recohen3


On Fri, Mar 25, 2016 at 12:43 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
>> On Mar 25, 2016, at 9:40 AM, Ronald Cohen <recoh...@gmail.com> wrote:
>>
>> Thank you! I will try it!
>>
>>
>> What would
>> -cpus-per-proc  4 -n 16
>> do?
>
> This would bind each process to 4 cores, filling each node with procs until 
> the cores on that node were exhausted, to a total of 16 processes within the 
> allocation.
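
[If I follow, then on these 16-core nodes the placement would look roughly
like this; this is my reading of the above, not verified output, and a.out
is a stand-in for the real binary:

mpirun -cpus-per-proc 4 -n 16 ./a.out
# node 1: ranks 0-3,  each bound to 4 cores (16 cores filled)
# node 2: ranks 4-7,  and so on, until all 16 ranks are placed]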
>
>>
>> Ron
>>
>>
>> On Fri, Mar 25, 2016 at 12:38 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>> Add -rank-by node to your cmd line. You’ll still get 4 procs/node, but they 
>>> will be ranked by node instead of consecutively within a node.
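>>>
[So the full command line would presumably become, with a.out standing in
for the real binary:

mpirun --map-by ppr:4:node -rank-by node -n 16 ./a.out]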
>>>
>>>
>>>
>>>> On Mar 25, 2016, at 9:30 AM, Ronald Cohen <recoh...@gmail.com> wrote:
>>>>
>>>> I am using
>>>>
>>>> mpirun  --map-by ppr:4:node -n 16
>>>>
>>>> and this assigns the ranks in round-robin fashion across the nodes. This
>>>> seems to be twice as slow for my code as placing them node by node, 4
>>>> processes per node.
>>>>
>>>> How can I place them node by node instead of round robin?
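
[By the two orderings I mean, on the four nodes from my job script; this is
my understanding of the placement, not verified output:

round robin:   rank 0 -> n013, rank 1 -> n014, rank 2 -> n015, rank 3 -> n016, rank 4 -> n013, ...
node by node:  ranks 0-3 -> n013, ranks 4-7 -> n014, ranks 8-11 -> n015, ranks 12-15 -> n016]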
>>>>
>>>> Thanks!
>>>>
>>>> Ron
>>>>
>>>>
>>>>
>>>> ---
>>>> Ronald Cohen
>>>> Geophysical Laboratory
>>>> Carnegie Institution
>>>> 5251 Broad Branch Rd., N.W.
>>>> Washington, D.C. 20015
>>>
>
