On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> By using slots=4 you are telling Open MPI to put the first 4  
> processes on the "bench1" host.
> Open MPI will therefore use shared memory to communicate between the  
> processes not Infiniband.

Well, actually not, unless I'm mistaken about that. In my
mca-params.conf I have:

rmaps_base_schedule_policy = node

That would spread processes over nodes, right?
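For what it's worth, here is a minimal sketch of the difference the policy makes (the machinefile contents and slot counts here are hypothetical, just for illustration):

```shell
# Hypothetical machinefile with two 4-slot hosts:
#   bench1 slots=4
#   bench2 slots=4
#
# Default (byslot) mapping fills bench1 first, so ranks 0-3 all land
# on bench1 and communicate over shared memory.
# With the bynode policy, ranks alternate across hosts, so ranks 0
# and 1 end up on different nodes and must use the interconnect:
mpirun -mca rmaps_base_schedule_policy node -machinefile /root/machines -np 4 ./PMB-MPI1
```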

> You might try:
> 
> 
> mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np  
> 2 -d xterm -e gdb PMB-MPI1

Thanks for the tip. The last time I tried this it took quite a few
attempts before getting it right. As I did not remember the magic trick,
I was somewhat reluctant to go in that direction. Since you just handed
me the recipe on a silver platter, I'll do it.

J-C

