Re: [OMPI users] SilverStorm IB

2006-04-13 Thread Troy Telford
On Apr 12, 2006, at 8:59 PM, Jeff Squyres (jsquyres) wrote: FWIW, the "has a different size..." errors mean that you may not have been linking against the shared libraries that you thought you were. This typically means that the executable expected to find an object in a library of a giv
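
One quick, generic way to check which shared libraries an executable actually resolves at run time is ldd (ring_c below is just a placeholder binary name, not one from the thread):

    # List the shared libraries the binary will load and where each resolves from
    ldd ./ring_c
    # Narrow the output to the MPI libraries to confirm they come from the expected install
    ldd ./ring_c | grep libmpi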

Re: [OMPI users] Error while loading shared libraries

2006-04-13 Thread Aniruddha Shet
The error message is coming from all nodes. I explicitly add the path of the Intel shared library to LD_LIBRARY_PATH on my mpiexec command, in addition to it being added in my shell startup file. I submit a batch request to PBS. The Intel shared library is on a common file system across compute nod

Re: [OMPI users] Error while loading shared libraries

2006-04-13 Thread Jeff Squyres (jsquyres)
If you are using PBS, the environment from where you ran "qsub" is automatically copied out to the first node in your job, where your script is run. Can you send your Torque job script?
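
A minimal sketch of explicitly forwarding the submission environment with Torque/PBS (myjob.pbs is a placeholder script name, not from the thread):

    # -V exports the full environment present at qsub time to the job
    qsub -V myjob.pbs
    # or export only selected variables
    qsub -v LD_LIBRARY_PATH myjob.pbs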

Re: [OMPI users] Error while loading shared libraries

2006-04-13 Thread Aniruddha Shet
#PBS -l walltime=0:01:00
#PBS -l nodes=4:ppn=2
#PBS -N aniruddha_job
#PBS -S /bin/bash
cd $HOME/NPB/NPB3.2/NPB3.2-MPI/bin/OMPI/EP/A/4_NO
/home/osu4005/openmpi/openmpi_NO/bin/mpiexec --bynode --prefix /home/osu4005/openmpi/openmpi_NO --mca btl mvapi -n 4 LD_LIBRARY_PATH=/usr/local/intel-8.0-2004

Re: [OMPI users] Error while loading shared libraries

2006-04-13 Thread Ralph Castain
I don't think the LD_LIBRARY_PATH setting belongs on the mpiexec command line - shouldn't you set it before calling mpiexec? Ralph
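
A sketch of the change Ralph is suggesting, reusing the paths from the script above; the Intel library directory and the benchmark executable are placeholders because they are cut off in the quoted script:

    #PBS -l walltime=0:01:00
    #PBS -l nodes=4:ppn=2
    #PBS -N aniruddha_job
    #PBS -S /bin/bash
    # Set the library search path in the script, before mpiexec is invoked
    export LD_LIBRARY_PATH=/path/to/intel/lib:$LD_LIBRARY_PATH   # placeholder path
    cd $HOME/NPB/NPB3.2/NPB3.2-MPI/bin/OMPI/EP/A/4_NO
    /home/osu4005/openmpi/openmpi_NO/bin/mpiexec --bynode \
        --prefix /home/osu4005/openmpi/openmpi_NO \
        --mca btl mvapi -n 4 ./ep.A.4   # placeholder executable name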

Re: [OMPI users] Problem with 1.0.2 and PGI 6.0

2006-04-13 Thread Jeffrey B. Layton
This is all I get. No core dump, no nothing :( Do you get any more of an error message than that? Did the process dump core, and if so, what does a backtrace show?
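
A generic way to allow core dumps and get a backtrace, assuming a bash shell, gdb installed, and my_app standing in for the failing program:

    # Allow core files to be written (the default limit is often 0)
    ulimit -c unlimited
    # Re-run the failing case, then load the core file and print a backtrace
    gdb ./my_app core
    (gdb) bt    # backtrace of the crashed process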

Re: [OMPI users] running a job problem

2006-04-13 Thread Brian Barrett
On Apr 12, 2006, at 9:09 AM, liuli...@stat.ohio-state.edu wrote: We have a Mac network running xgrid and we have successfully installed mpi. We want to run a parallel version of mrbayes. It did not have any problem when we compiled mrbayes using mpicc. But when we tried to run the compiled

Re: [OMPI users] running a job problem

2006-04-13 Thread liuliang
Brian, It worked when I used the latest version of Mrbayes. Thanks. By the way, do you have any idea how to submit an ompi job on xgrid? Thanks again. Liang

[OMPI users] how can I tell for sure that I'm using mvapi

2006-04-13 Thread Borenstein, Bernard S
I'm running on a cluster with mvapi. I built with mvapi and it runs, but I want to make absolutely sure that I'm using the IB interconnect and nothing else. How can I tell specifically which interconnect I'm using when I run? Bernie Borenstein The Boeing Company

Re: [OMPI users] how can I tell for sure that I'm using mvapi

2006-04-13 Thread Galen M. Shipman
Hi Bernie, You may specify which BTLs to use at runtime using an MCA parameter: mpirun -np 2 -mca btl self,mvapi ./my_app This specifies to use only self (loopback) and mvapi. You may also want to use sm (shared memory) if you have multi-core or multi-processor nodes, such as: mpirun -np 2 -mca btl s
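
A sketch of both forms Galen describes, with ./my_app as the placeholder application from his example:

    # Restrict Open MPI to the mvapi (InfiniBand) and self (loopback) BTLs
    mpirun -np 2 -mca btl self,mvapi ./my_app
    # Also allow shared memory between processes on the same node
    mpirun -np 2 -mca btl self,sm,mvapi ./my_app

Because the job is restricted to these BTLs, if it starts and runs normally it cannot be falling back to another interconnect such as TCP.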