Hi,
Just an idea here: do you use cpusets within Torque? Did you request enough cores from Torque?
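A quick way to check both of these from inside the job might look like the following. This is only a sketch; the cpuset mount point assumes a typical Torque-with-cpusets install and may differ on your cluster:

```shell
# How many slots did Torque actually grant? For your case this
# should print 64 and match what mpirun is allowed to launch.
wc -l < "$PBS_NODEFILE"

# If Torque cpusets are enabled, this shows which cores the job may
# use on this node (path is site-dependent; adjust as needed).
cat /dev/cpuset/torque/$PBS_JOBID/cpus
```

If the cpuset only exposes a subset of the node's cores, Open MPI's binding logic will see fewer CPUs than processes and refuse to bind, which would match the error below.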

Maxime Boissonneault

On 2014-09-23 13:53, Brock Palen wrote:
I found a fun head-scratcher with Open MPI 1.8.2 built with TM support against
Torque 5. On heterogeneous core layouts I get the fun thing:
mpirun -report-bindings hostname        <-------- Works
mpirun -report-bindings -np 64 hostname   <--------- Wat?
--------------------------------------------------------------------------
A request was made to bind to that would result in binding more
processes than cpus on a resource:

    Bind to:     CORE
    Node:        nyx5518
    #processes:  2
    #cpus:       1

You can override this protection by adding the "overload-allowed"
option to your binding directive.
--------------------------------------------------------------------------


I ran with --oversubscribe and got the expected host list, which matched
$PBS_NODEFILE and was 64 entries long:

mpirun -overload-allowed -report-bindings -np 64 --oversubscribe hostname

What did I do wrong? I'm stumped why one works and the other doesn't, but the
one that doesn't appears to map correctly if you force it.
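One way to narrow this down might be to ask mpirun to print the allocation it read back from Torque (via TM) and the process map it plans, and compare that against $PBS_NODEFILE. These display options exist in the 1.8 series; the -np value here just mirrors the failing case:

```shell
# Show the node/slot allocation Open MPI thinks it has, plus the
# planned process map and bindings, using a harmless payload:
mpirun -np 64 -display-allocation -display-map -report-bindings hostname
```

If the displayed allocation shows fewer slots per node than $PBS_NODEFILE grants, the mismatch is between TM and Open MPI's slot accounting rather than in the binding step itself.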


Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985

_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: http://www.open-mpi.org/community/lists/users/2014/09/25375.php


--
---------------------------------
Maxime Boissonneault
Computing Analyst - Calcul Québec, Université Laval
Ph.D. in Physics
