Probably yes, but is there a more systematic way of doing it?
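
To make it concrete, here is roughly the kind of thing I have in mind (an
untested sketch for the 2x2 example; the grid size and the mapping rule are
just illustrative): impose my own ordering with MPI_Comm_split and then call
MPI_Cart_create with reorder = 0, so that the grid follows my own
coordinate-to-rank rule instead of the default row-major one.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[2]    = {2, 2};   /* 2x2 grid, as in the example below */
    int periods[2] = {0, 0};
    int world_rank, cart_rank, coords[2];
    MPI_Comm reordered, cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Illustrative mapping only: the (row, col) this world rank should own,
     * i.e. the inverse of the desired coords -> world-rank map
     * (0,0)->0, (0,1)->2, (1,0)->1, (1,1)->3. */
    int row = world_rank % dims[0];
    int col = world_rank / dims[0];

    /* Key = the rank that MPI's row-major Cartesian convention gives to
     * (row, col); sorting by this key imposes the desired order. */
    int key = row * dims[1] + col;
    MPI_Comm_split(MPI_COMM_WORLD, 0, key, &reordered);

    /* reorder = 0 so MPI_Cart_create keeps the order we just imposed. */
    MPI_Cart_create(reordered, 2, dims, periods, 0, &cart);

    MPI_Comm_rank(cart, &cart_rank);
    MPI_Cart_coords(cart, cart_rank, 2, coords);
    printf("world rank %d sits at (%d,%d) in the grid\n",
           world_rank, coords[0], coords[1]);

    MPI_Comm_free(&cart);
    MPI_Comm_free(&reordered);
    MPI_Finalize();
    return 0;
}

I guess this only fixes the logical numbering, though; whether ranks that are
neighbours in the grid actually share a node still depends on how mpirun
places the processes on the hosts, which I understand is what the --host list
in your command is meant to control.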
Thanks
Claudio

2012/3/1, Jingcha Joba <pukkimon...@gmail.com>:
> mpirun -np 4 --host node1,node2,node1,node2 ./app
>
> Is this what you want?
>
> On Thu, Mar 1, 2012 at 10:57 AM, Claudio Pastorino <
> claudio.pastor...@gmail.com> wrote:
>
>> Dear all,
>> I apologize in advance if this is not the right list for this post. I am
>> a newcomer, so please let me know if I should be sending this to another
>> list.
>>
>> I program in MPI, trying to write HPC parallel programs. In particular, I
>> wrote a parallel code for molecular dynamics simulations. The program
>> splits the work over a matrix of processes, and I send messages along
>> rows and columns on an equal basis. I learnt that the typical arrangement
>> of the Cartesian topology is usually not the best option, because in a
>> matrix, say of 4x4 processes with quad-core processors, the processes are
>> arranged so that along a column one stays inside the same quad-core
>> processor, while along a row one always goes out to the network. This
>> means the processes are arranged as one quad per row.
>>
>> Let me try to explain this for a 2x2 case. The Cartesian topology
>> typically makes this assignment:
>> cartesian    mpi_comm_world
>> 0,0 -->  0
>> 0,1 -->  1
>> 1,0 -->  2
>> 1,1 -->  3
>> The question is: how do I get a "user-defined" assignment such as:
>> 0,0 -->  0
>> 0,1 -->  2
>> 1,0 -->  1
>> 1,1 -->  3
>>
>> ?
>>
>> Thanks in advance, and I hope I have made this more or less
>> understandable.
>> Claudio
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
