I get the following error if I remove the --mca btl tcp,self option from the
mpirun command line.
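For reference, the run that does not show this warning is the one with an
explicit BTL list, e.g.

    mpirun --mca btl tcp,self -np 2 su3imp_base.solaris

and, if I read the MCA selection syntax right, excluding only the uDAPL BTL
should have the same effect (I am assuming the udapl component is simply not
usable on this machine rather than misconfigured):

    mpirun --mca btl ^udapl -np 2 su3imp_base.solaris

Without either of these, the run looks like this: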

kishore@cache-aware[23]; mpirun -np 2 su3imp_base.solaris
--------------------------------------------------------------------------
[[16283,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: uDAPL
  Host: cache-aware

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
SU3 with improved KS action
Microcanonical simulation with refreshing
MIMD version 6
Machine =
R algorithm
type 0 for no prompts  or 1 for prompts
nflavors 2
nx 30
ny 30
nz 56
nt 84
iseed 1234
LAYOUT = Hypercubes, options = EVENFIRST,
[cache-aware:00758] 1 more process has sent help message
help-mpi-btl-base.txt / btl:no-nics
[cache-aware:00758] Set MCA parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
NODE 1: no room for t_longlink
Termination: node 1, status = 1
NODE 0: no room for t_longlink
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 0.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
termination: Wed Apr 28 10:23:32 2010

Termination: node 0, status = 1
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 759 on
node cache-aware exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
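
Following up on the "no room for lattice" hint below: if I read these
messages right, "no room for t_longlink" and "no room for lattice" are
printed when a malloc inside the benchmark fails, so this looks like the job
simply running out of memory. The lattice in this run is

    nx * ny * nz * nt = 30 * 30 * 56 * 84 = 4,233,600 sites

and if each site carries on the order of a kilobyte of data (a rough guess on
my part for the su3imp data layout), the two ranks together would need
several gigabytes, which may be more than this machine has.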

Best,
Kishore Kumar Pusukuri
http://www.cs.ucr.edu/~kishore



On 28 April 2010 06:32, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> I don't know much about specmpi, but it seems like it is choosing to abort.
> Maybe the "no room for lattice" has some meaning...?
>
> -jms
> Sent from my PDA. No type good.
>
> ------------------------------
> From: users-boun...@open-mpi.org
> To: us...@open-mpi.org
> Sent: Wed Apr 28 01:47:01 2010
> Subject: [OMPI users] MPI_ABORT was invoked on rank 0 in communicator
> MPI_COMM_WORLD with errorcode 0.
>
> Hi,
> I am trying to run the SPEC MPI 2007 workload on a quad-core machine, but I
> keep getting the error message below. I also tried the hostfile option,
> specifying localhost slots=4, and I still get the following error. Please
> help me.
>
> $ mpirun --mca btl tcp,sm,self -np 4 su3imp_base.solaris
> SU3 with improved KS action
> Microcanonical simulation with refreshing
> MIMD version 6
> Machine =
> R algorithm
> type 0 for no prompts  or 1 for prompts
> nflavors 2
> nx 30
> ny 30
> nz 56
> nt 84
> iseed 1234
> LAYOUT = Hypercubes, options = EVENFIRST,
> NODE 0: no room for lattice
> termination: Tue Apr 27 23:41:44 2010
>
> Termination: node 0, status = 1
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 0.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 17239 on
> node cache-aware exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
>
>
> Best,
> Kishore Kumar Pusukuri
> http://www.cs.ucr.edu/~kishore
>
>
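P.S. In case it matters, the hostfile I mentioned was just a single line,

    localhost slots=4

passed with the --hostfile option (the file name here is arbitrary):

    mpirun --hostfile hosts --mca btl tcp,sm,self -np 4 su3imp_base.solaris

and it did not change the error.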
