If you don’t specify a slot count, then we will automatically detect the number 
of cores on the machine and set the slot count to that number.

You can add --map-by node to your command line and that will give you the prior 
behavior, as we’ll round-robin, placing one proc on each node at a time.
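
For example, taking the invocation from your report below and adding the flag
(just a sketch; the oarsh agent and $OAR_NODEFILE are specific to your OAR setup):

  mpirun -np 2 --mca plm_rsh_agent oarsh -hostfile $OAR_NODEFILE --map-by node ./NPmpi

With two procs and two listed nodes, that should place rank 0 on nef097 and
rank 1 on nef098 again.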


> On Aug 25, 2015, at 6:54 AM, Nicolas Niclausse <nicolas.niclau...@inria.fr> 
> wrote:
> 
> 
> Hello,
> 
> I'm trying to use Open MPI 1.8.8 on a cluster managed by OAR, but I'm having
> some trouble with the default slot count.
> 
> I have reserved one core on two nodes (each has 12 cores):
> 
> # cat $OAR_NODEFILE
> nef097.inria.fr
> nef098.inria.fr
> 
> but:
> mpirun -np 2 --mca plm_rsh_agent oarsh -hostfile $OAR_NODEFILE ./NPmpi
> 
> runs only on the first node:
> 0: nef097
> 1: nef097
> Now starting the main loop
>  0:       1 bytes      7 times -->      0.00 Mbps in   12571.35 usec
> [skip]
> 
> 
> If I use a nodefile like this, it works:
> nef097.inria.fr slots=1
> nef098.inria.fr slots=1
> 
> The doc says, however, that the default value is 1, and Open MPI 1.6.4 works
> fine (the OS is CentOS 7, btw).
> 
> Am I missing something?
> 
> -- 
> Nicolas NICLAUSSE                          Service DREAM
> INRIA Sophia Antipolis                     http://www-sop.inria.fr/
> 2004 route des lucioles - BP 93            Tel: (33/0) 4 92 38 76 93
> 06902  SOPHIA-ANTIPOLIS cedex (France)     Fax: (33/0) 4 92 38 76 02
