Hello all,
I have been trying, so far in vain, to compile mpptest on my nodes. Here is
my current setup:
Openmpi is in "$HOME/openmpi_`uname -m`" which translates to
"/export/home/eric/openmpi_i686/". I tried the following approaches (you can
see some of these were out of desperation):
CFL
I am trying to do some simple fortran MPI examples to verify I have a good
installation of OpenMPI, and I have a distributed program that calculates PI.
It seems to compile and work fine with 1.1.4, but when I compile and run the
same program with 1.2b3 I get a bunch of the same ORTE errors and the
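For context, compiling and running such a test looks roughly like the sketch
below; the source file name pi.f and the process count are placeholders, and
it assumes the Open MPI wrapper compilers are on the PATH:

  # build the Fortran example with the Open MPI wrapper compiler
  mpif77 -o pi pi.f
  # run it on 4 processes
  mpirun -np 4 ./pi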
On Feb 13, 2007, at 4:29 PM, Steven A. DuChene wrote:
I discovered the hard way that there are openmpi profile.d scripts that get
packaged into openmpi rpm files. The reason this became a painful issue for
our cluster is that it seems the csh profile.d script that gets installed
with the op
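A quick way to see which profile.d scripts an Open MPI RPM installs; the
package name and script names below are assumptions and may differ between
distributions:

  # list the files owned by the installed package, filtered for profile.d
  rpm -ql openmpi | grep profile.d
  # inspect the csh script it drops into /etc/profile.d (name may vary)
  cat /etc/profile.d/openmpi*.csh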
What platform / operating system was this with?
Brian
On Feb 15, 2007, at 3:43 PM, Steven A. DuChene wrote:
I am trying to do some simple fortran MPI examples to verify I have a good
installation of OpenMPI and I have a distributed program that calculates PI.
It seems to compile and work fine
On Feb 15, 2007, at 5:43 PM, Steven A. DuChene wrote:
I am trying to do some simple fortran MPI examples to verify I have a good
installation of OpenMPI and I have a distributed program that calculates PI.
It seems to compile and work fine with 1.1.4 but when I compile and run the
same program
I think you want to add $HOME/openmpi_`uname -m`/lib to your
LD_LIBRARY_PATH. This should allow executables created by mpicc (or
any derivation thereof, such as extracting flags via showme) to find
the Right shared libraries.
Let us know if that works for you.
FWIW, we do recommend using
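As a quick sanity check of the wrapper setup described above, Open MPI's
mpicc can print the flags it passes to the underlying compiler; a sketch,
assuming the install prefix used earlier in this thread:

  # show the full underlying compile/link command line
  $HOME/openmpi_`uname -m`/bin/mpicc --showme
  # show only the link flags (the -L.../lib directory is the one that
  # LD_LIBRARY_PATH needs to contain)
  $HOME/openmpi_`uname -m`/bin/mpicc --showme:link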
Hi Jeff,
Thanks for your response. I eventually figured it out; here is the only way
I got mpptest to compile:
export LD_LIBRARY_PATH="$HOME/openmpi_`uname -m`/lib"
CC="$HOME/openmpi_`uname -m`/bin/mpicc" ./configure \
    --with-mpi="$HOME/openmpi_`uname -m`"
And, yes I know I should use th
As long as mpicc is working, try configuring mpptest as
mpptest/configure MPICC=<MPI install dir>/bin/mpicc
or
mpptest/configure --with-mpich=<MPI install dir>
A.Chan
On Thu, 15 Feb 2007, Eric Thibodeau wrote:
> Hi Jeff,
>
> Thanks for your response, I eventually figured it out, here is the
> only way I got mpptest to
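For concreteness, A.Chan's suggestion with the install prefix used earlier in
this thread would look roughly like the following; the exact options a given
mpptest version accepts may differ:

  cd mpptest
  ./configure MPICC="$HOME/openmpi_`uname -m`/bin/mpicc"
  # or point --with-mpich at the MPI installation prefix
  ./configure --with-mpich="$HOME/openmpi_`uname -m`"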
Brian:
These are dual proc AMD Opteron systems running RHEL4u2
-Original Message-
>From: Brian Barrett
>Sent: Feb 15, 2007 4:02 PM
>To: "Steven A. DuChene" , Open MPI Users
>
>Subject: Re: [OMPI users] ORTE errors on simple fortran program with 1.2b3
>
>What platform / operating system was this with?
Jeff:
I built openmpi-1.2b4r13658 and tried the test again, and my example fortran
program did indeed work fine with that release.
Thanks
-Original Message-
>From: Jeff Squyres
>Sent: Feb 15, 2007 4:09 PM
>To: "Steven A. DuChene" , Open MPI Users
>
>Subject: Re: [OMPI users] ORTE errors on simple fortran program with 1.2b3
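Since much of this thread hinges on which Open MPI build actually gets picked
up, a quick way to confirm the version and location of the tools on the PATH:

  # report the version of the Open MPI installation found on the PATH
  ompi_info | grep "Open MPI:"
  # confirm which mpirun/mpicc would be used
  which mpirun mpicc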
Good point; this may be affecting overall performance for openib+gm.
But I didn't see any performance improvement for gm+tcp over just
using gm (and there's definitely no memory bandwidth limitation
there).
I wouldn't expect you to see any benefit with GM+TCP; the overhead
costs of TCP are so
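For reference, enabling several BTLs at once in this era of Open MPI is done
on the mpirun command line roughly as follows; the application name and
process count are placeholders:

  # use both the Myrinet/GM and TCP transports (plus self for loopback)
  mpirun --mca btl gm,tcp,self -np 4 ./my_app
  # GM alone, for comparison
  mpirun --mca btl gm,self -np 4 ./my_app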