try specifying -prefix on the command line
ex: mpirun -np 4 -prefix $MPIHOME ./app
Lenny.
On Sat, Aug 8, 2009 at 5:04 PM, Kenneth Yoshimoto wrote:
>
> I don't own these nodes, so I have to use them with
> whatever path setups they came with. In particular,
> my home directory has a different p
Can this be related?
http://www.open-mpi.org/faq/?category=building#build-qs22
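A minimal sketch of building Open MPI with the Cell PPU toolchain, in case it helps; the compiler names, paths, and flags below are assumptions, and the FAQ entry linked above has the actual recommended configure line:

  ./configure CC=ppu-gcc CXX=ppu-g++ --prefix=$HOME/openmpi-cell
  make all install
  # MPI code runs on the PPE; compile it with the resulting wrapper compiler
  $HOME/openmpi-cell/bin/mpicc -o app app.c
  # SPU kernels are built separately with spu-gcc and embedded into the PPU binary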
On Sun, Aug 9, 2009 at 12:22 PM, Attila Börcs wrote:
> Hi Everyone,
>
> What is the regular method of compiling and running MPI code on the Cell Broadband
> Engine with ppu-gcc and spu-gcc?
>
>
> Regards,
>
> Attila Borcs
>
> __
By default the coll framework scans all available modules and selects the available
functions with the highest priorities.
So, to use the tuned collectives explicitly you can raise its priority:
-mca coll_tuned_priority 100
p.s. Collective modules can have only a partial set of available functions,
for example
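For instance, a sketch of how that parameter is passed (process count and program name are placeholders):

  mpirun -np 16 --mca coll_tuned_priority 100 ./app
  ompi_info --param coll tuned    # lists the tuned module's parameters and current priority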
Michael Di Domenico writes:
> It's a freshly reformatted cluster
> converting from Solaris to Linux. We also reset the BIOS settings
> with "load optimal defaults".
[Why?]
> Does anyone know which BIOS setting I
> changed to dump the BW?
Off-topic, and surely better on the Beowulf list, it's an
Hi All,
I've been trying to get Torque/PBS to work on my OS X 10.5.7 cluster
with openMPI (after finding that Xgrid was pretty flaky about
connections). I *think* this is an MPI problem (perhaps via operator
error!)
If I submit openMPI with:
#PBS -l nodes=2:ppn=8
mpirun MyProg
pbs
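For reference, a minimal sketch of the kind of Torque submission script being described (job name and program are placeholders):

  #!/bin/bash
  #PBS -l nodes=2:ppn=8
  #PBS -N mpijob
  cd $PBS_O_WORKDIR
  # with Torque (tm) support built in, mpirun reads the allocation from Torque itself
  mpirun ./MyProg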
Hi Jody
We don't have Mac OS-X, but Linux, not sure if this applies to you.
Did you configure your OpenMPI with Torque support,
and pointed to the same library that provides the
Torque you are using (--with-tm=/path/to/torque-library-directory)?
Are you using the right mpirun? (There are so many
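A sketch of the configure line Gus is describing; the Torque install prefix below is an assumption, point --with-tm at wherever your Torque library and headers actually live:

  ./configure --prefix=$HOME/openmpi-1.3.3 --with-tm=/usr/local/torque
  make all install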
Umm...are you saying that your $PBS_NODEFILE contains the following:
xserve01.local np=8
xserve02.local np=8
If so, that could be part of the problem - it isn't the standard
notation we are expecting to see in that file. What Torque normally
provides is one line for each slot, so we would
On Aug 10, 2009, at 13:01 PM, Gus Correa wrote:
Hi Jody
We don't have Mac OS-X, but Linux, not sure if this applies to you.
Did you configure your OpenMPI with Torque support,
and pointed to the same library that provides the
Torque you are using (--with-tm=/path/to/torque-library-directory)
Hi Ralph,
On Aug 10, 2009, at 13:04 PM, Ralph Castain wrote:
Umm...are you saying that your $PBS_NODEFILE contains the following:
No, if I put cat $PBS_NODEFILE in the pbs script I get
xserve02.local
...
xserve02.local
xserve01.local
...
xserve01.local
each repeated 8 times. So that seems
On Aug 10, 2009, at 3:25 PM, Jody Klymak wrote:
Hi Ralph,
On Aug 10, 2009, at 13:04 PM, Ralph Castain wrote:
Umm...are you saying that your $PBS_NODEFILE contains the following:
No, if I put cat $PBS_NODEFILE in the pbs script I get
xserve02.local
...
xserve02.local
xserve01.local
...
xse
Hi Jody, list
See comments inline.
Jody Klymak wrote:
On Aug 10, 2009, at 13:01 PM, Gus Correa wrote:
Hi Jody
We don't have Mac OS-X, but Linux, not sure if this applies to you.
Did you configure your OpenMPI with Torque support,
and pointed to the same library that provides the
Torque yo
On Aug 10, 2009, at 14:39 PM, Ralph Castain wrote:
mpirun --display-allocation -pernode --display-map hostname
Ummm, hmm, this is embarrassing, none of those command line arguments
worked, making me suspicious...
It looks like somehow I decided to build and run openMPI 1.1.5, or at
lea
No problem - yes indeed, 1.1.x would be a bad choice :-)
On Aug 10, 2009, at 3:58 PM, Jody Klymak wrote:
On Aug 10, 2009, at 14:39 PM, Ralph Castain wrote:
mpirun --display-allocation -pernode --display-map hostname
Ummm, hmm, this is embarrassing, none of those command line arguments
w
Just to correct something said here.
You need to tell mpirun how many processes to launch,
regardless of whether you are using Torque or not.
This is not correct. If you don't tell mpirun how many processes to
launch, we will automatically launch one process for every slot in
your allocation.
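A sketch of what that default looks like under a nodes=2:ppn=8 allocation (the program name is a placeholder):

  mpirun ./MyProg          # no -np given: one process per allocated slot, 16 total here
  mpirun -np 4 ./MyProg    # explicit count, still placed onto the allocated nodes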
Thank you for the correction, Ralph.
I didn't know there was a (wise) default for the
number of processes when using Torque-enabled OpenMPI.
Gus Correa
Ralph Castain wrote:
Just to correct something said here.
You need to tell mpirun how many processes to launch,
regardless of whether you are
No problem - actually, that default works with any environment, not
just Torque
On Aug 10, 2009, at 4:37 PM, Gus Correa wrote:
Thank you for the correction, Ralph.
I didn't know there was a (wise) default for the
number of processes when using Torque-enabled OpenMPI.
Gus Correa
Ralph Casta
So,
mpirun --display-allocation -pernode --display-map hostname
gives me the output below. Simple jobs seem to run, but the MITgcm
does not, either under ssh or Torque. It hangs at some early point in
execution before anything is written, so it's hard for me to tell what
the error is. Co
Check your LD_LIBRARY_PATH - there is an earlier version of OMPI in
your path that is interfering with operation (i.e., it comes before
your 1.3.3 installation).
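A quick way to check which installation is actually being picked up (a sketch; adjust to your shell):

  which mpirun
  echo $PATH
  echo $LD_LIBRARY_PATH
  ompi_info | head    # the first lines report the Open MPI version actually being found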
On Aug 10, 2009, at 7:38 PM, Klymak Jody wrote:
So,
mpirun --display-allocation -pernode --display-map hostname
gives me the o
On 10-Aug-09, at 6:44 PM, Ralph Castain wrote:
Check your LD_LIBRARY_PATH - there is an earlier version of OMPI in
your path that is interfering with operation (i.e., it comes before
your 1.3.3 installation).
Hmm, the OS X FAQ says not to do this:
"Note that there is no need to add Open
Interesting! Well, I always make sure I have my personal OMPI build
before any system stuff, and I work exclusively on Mac OS-X:
rhc$ echo $PATH
/Library/Frameworks/Python.framework/Versions/Current/bin:/Users/rhc/
openmpi/bin:/Users/rhc/bin:/opt/local/bin:/usr/X11R6/bin:/usr/local/
bin:/opt/
On 10-Aug-09, at 8:03 PM, Ralph Castain wrote:
Interesting! Well, I always make sure I have my personal OMPI build
before any system stuff, and I work exclusively on Mac OS-X:
Note that I always configure with --prefix=somewhere-in-my-own-dir,
never to a system directory. Avoids this kind
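A sketch of that kind of personal-prefix install (the directory name is arbitrary):

  ./configure --prefix=$HOME/openmpi-1.3.3
  make all install
  export PATH=$HOME/openmpi-1.3.3/bin:$PATH   # personal build found first, system copy untouched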