I don't see Open MPI in your list of loaded modules; it looks like you are using 
MPICH (the mpich-mx module). If so, you should send this to the MPICH mailing list instead.
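
If you want to double-check which MPI stack the job actually picks up, something along these lines should tell you (a rough sketch; the module name is taken from your log, everything else depends on how your cluster is set up):

  module list                          # mpich/mpich-mx-1.2.7..7-gcc.64 is MPICH-MX, not Open MPI
  which mpiexec                        # shows which launcher is first on your PATH
  ldd "$(which pmemd)" | grep -i mpi   # if pmemd is dynamically linked, shows the MPI library it was built against

As a side note that has nothing to do with which MPI you are using: "Unit    5 Error on OPEN: inp0" is pmemd reporting that it cannot open the file named with -i. Your log shows the job running in /home/mic, and the script overwrites PBS_O_WORKDIR with `pwd` before the cd, so if inp0, prmtop, and inpcrd live somewhere else (the external disk, say), the open will fail. A minimal sanity check, assuming Torque's usual behavior of starting the job in $HOME and setting $PBS_O_WORKDIR to the submission directory, might look like:

  cd "$PBS_O_WORKDIR" || exit 1        # do not overwrite PBS_O_WORKDIR; Torque sets it for you
  for f in inp0 prmtop inpcrd; do
      [ -r "$f" ] || { echo "missing input file: $f in $PWD" >&2; exit 1; }
  done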


On Jul 5, 2011, at 1:44 PM, Chaudhari, Mangesh I wrote:

> Hi all, 
> 
> I am trying to run a job from an external hard disk and it is giving me errors. My 
> output log is as follows: 
> 
> 
> Currently Loaded Modulefiles:
>  1) modules                          4) mpich/mpich-mx-1.2.7..7-gcc.64
>  2) tools/torque-maui                5) tools/amber10-mx
>  3) tools/mx
> Host: node10
> Date: Tue Jul 5 15:17:32 EDT 2011
> Dir: /home/mic
> This job has allocated 8 nodes
> mnode10 mnode10 mnode10 mnode10 mnode10 mnode10 mnode10 mnode10
> [0] MPI Abort by user Aborting program !
> 
>  Unit    5 Error on OPEN: inp0
> mpiexec: Warning: accept_abort_conn: MPI_Abort from IP 10.11.1.10, killing all.
> mpiexec: Warning: tasks 0-7 died with signal 15 (Terminated).
> mpiexec: Warning: accept_abort_conn: MPI_Abort from IP 10.11.1.10, killing all.
> 
>  Unit    5 Error on OPEN: inp1
> [0] MPI Abort by user Aborting program !
> mpiexec: Warning: tasks 0-7 died with signal 15 (Terminated).
> mpiexec: Warning: accept_abort_conn: MPI_Abort from IP 10.11.1.10, killing all.
> [0] MPI Abort by user Aborting program !
> 
>  Unit    5 Error on OPEN: inp2
> mpiexec: Warning: tasks 0-7 died with signal 15 (Terminated).
> 
>  Unit    5 Error on OPEN: inp3
> [0] MPI Abort by user Aborting program !
> mpiexec: Warning: accept_abort_conn: MPI_Abort from IP 10.11.1.10, killing all.
> mpiexec: Warning: tasks 0-7 died with signal 15 (Terminated).
> 
> 
> 
> -----------------------------------------------
> 
> my script file is as follows : 
> 
> ### Number of nodes and processors per node.
> #PBS -l nodes=1:ppn=8
> #PBS -j oe
> #PBS -N GAFF_R60
> 
> #AMBERHOME="/usr/local/amber10-mx"
> 
> #Set up environment modules
> . /usr/local/Modules/3.2.6/init/bash
> module purge
> module initclear
> module load tools/amber10-mx
> module initadd tools/amber10-mx
> 
> #module output
> module list
> 
> #Job output header
> PBS_O_WORKDIR=`pwd`
> cd $PBS_O_WORKDIR
> PBS_O_HOME=/home/bk3
> echo Host: $HOSTNAME
> echo Date: $(date)
> echo Dir: $PWD
> 
> #calculate number of CPUs
> NPROCS=`wc -l < $PBS_NODEFILE`
> echo This job has allocated $NPROCS nodes
> echo `cat $PBS_NODEFILE`
> 
> #set DO_PARALLEL
> export DO_PARALLEL="mpiexec"
> #export DO_PARALLEL_1="mpirun -np 1 -machinefile $PBS_NODEFILE"
> 
> #run amber10 pmemd in parallel
> 
> $DO_PARALLEL pmemd -O -i inp0 -p prmtop -c inpcrd  -o mdout0 -r restrt0 -e mden0 -inf mdinfo0
> $DO_PARALLEL pmemd -O -i inp1 -p prmtop -c restrt0 -o mdout1 -x mdcrd1 -r restrt1 -e mden1 -inf mdinfo1
> #
> $DO_PARALLEL pmemd -O -i inp2 -p prmtop -c restrt1 -o mdout2 -x mdcrd2 -r restrt2 -e mden2 -inf mdinfo2
> #
> $DO_PARALLEL pmemd -O -i inp3 -p prmtop -c restrt2 -o mdout3 -x mdcrd3 -r restrt3 -e mden3 -inf mdinfo3 -v mdvel3
> 
> 
> ------------------------------------------------
> 
> I don't know much about MPI, so I do not know where exactly the problem is ...
> 
> Thanks in advance ... !!! 
> 
> 
> 
> 

