On Apr 12, 2006, at 8:59 PM, Jeff Squyres (jsquyres) wrote:
FWIW, the "has a different size..." errors means that you may not
have been linking against the shared libraries that you thought you
were. This typically means that the executable expected to find an
object in a library of a giv
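A quick way to check which shared libraries an executable actually
resolves at run time is ldd (standard Linux tooling; the paths below
are illustrative, not output from this thread):

ldd ./my_app
    libmpi.so.0 => /home/osu4005/openmpi/openmpi_NO/lib/libmpi.so.0 (0x...)
    libimf.so => /usr/local/intel-8.0-2004/lib/libimf.so (0x...)

If the Intel runtime resolves from a different directory than expected,
that would explain the size mismatch.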
The error message is coming from all nodes.
I explicitly add the path of the Intel shared library to LD_LIBRARY_PATH on my
mpiexec command, in addition to adding it in my shell startup file.
I submit a batch request to PBS. The Intel shared library is on a common file
system across compute nodes.
If you are using PBS, the environment of where you ran "qsub" is
automatically copied out to the first node in your job where your script
is run.
Can you send your torque job script?
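As an aside, if LD_LIBRARY_PATH needs to reach the remote nodes as
well, Open MPI's mpiexec can export it explicitly with the -x flag
(standard Open MPI usage; the application name is a placeholder):

mpiexec -x LD_LIBRARY_PATH -n 4 ./my_app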
#PBS -l walltime=0:01:00
#PBS -l nodes=4:ppn=2
#PBS -N aniruddha_job
#PBS -S /bin/bash
cd $HOME/NPB/NPB3.2/NPB3.2-MPI/bin/OMPI/EP/A/4_NO
/home/osu4005/openmpi/openmpi_NO/bin/mpiexec --bynode --prefix
/home/osu4005/openmpi/openmpi_NO --mca btl mvapi -n 4
LD_LIBRARY_PATH=/usr/local/intel-8.0-2004
I don't think the LD_LIBRARY_PATH assignment belongs on your command line;
shouldn't you set it before calling mpiexec?
Ralph
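For example, a corrected script along those lines (a sketch only: the
/lib suffix on the Intel path and the binary name ep.A.4 are
assumptions, since the original command was cut off) might be:

#PBS -l walltime=0:01:00
#PBS -l nodes=4:ppn=2
#PBS -N aniruddha_job
#PBS -S /bin/bash
# Set the library path before mpiexec runs, not as an mpiexec argument
export LD_LIBRARY_PATH=/usr/local/intel-8.0-2004/lib:$LD_LIBRARY_PATH
cd $HOME/NPB/NPB3.2/NPB3.2-MPI/bin/OMPI/EP/A/4_NO
/home/osu4005/openmpi/openmpi_NO/bin/mpiexec --bynode \
    --prefix /home/osu4005/openmpi/openmpi_NO \
    --mca btl mvapi -n 4 ./ep.A.4   # hypothetical binary name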
Aniruddha Shet wrote:
> #PBS -l walltime=0:01:00
> #PBS -l nodes=4:ppn=2
> #PBS -N aniruddha_job
> #PBS -S /bin/bash
> cd $HOME/NPB/NPB3.2/NPB3.2-MPI/bin/OMPI/EP/A/4_NO
> /home/osu4005
This is all I get. No core dump, no nothing :(
Do you get any more of an error message than that? Did the process dump
core, and if so, what does a backtrace show?
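For reference, a standard recipe for capturing a core file and a
backtrace with gdb (generic Linux usage, not specific to this job;
./my_app is a placeholder):

ulimit -c unlimited    # allow core files in the launching shell
mpiexec -n 4 ./my_app  # rerun the failing case
gdb ./my_app core      # load the resulting core file
(gdb) bt               # print the backtrace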
On Apr 12, 2006, at 9:09 AM, liuli...@stat.ohio-state.edu wrote:
We have a Mac network running Xgrid and we have successfully installed
MPI. We want to run a parallel version of MrBayes. It did not have any
problem when we compiled MrBayes using mpicc. But when we tried to run
the compiled
Brian,
It worked when I used the latest version of MrBayes. Thanks. By the way,
do you have any idea how to submit an Open MPI job on Xgrid? Thanks again.
Liang
I'm running on a cluster with mvapi. I built with mvapi and it runs,
but I want to make absolutely sure that I'm using the IB interconnect
and nothing else. How can I tell specifically which interconnect I'm
using when I run?
Bernie Borenstein
The Boeing Company
Hi Bernie,
You may specify which BTLs to use at runtime using an MCA parameter:
mpirun -np 2 -mca btl self,mvapi ./my_app
This specifies to use only self (loopback) and mvapi.
You may also want to use sm (shared memory) if you have multi-core or
multi-processor nodes, such as:
mpirun -np 2 -mca btl self,sm,mvapi ./my_app
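To double-check what is actually selected, ompi_info lists the BTL
components that were built, and restricting the run to self,mvapi
means the job fails outright instead of silently falling back to TCP
(standard Open MPI usage; the application name is a placeholder):

ompi_info | grep btl    # shows which BTL components are available
mpirun -np 2 -mca btl self,mvapi ./my_app   # errors out if mvapi is unusable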