Hi Jeff,
I had noticed that the library names switched, but thanks for pointing it
out still ;) As for the compilation route, I chose to use mpicc as the
preferred approach and let the wrapper do the work.
FWIW, I got HPCC running; now to find a nice way to sort through all
the results.
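In case it helps anyone else, a minimal sketch of the MPI section of an
HPCC Make.<arch> file along the lines I used (variable names as in the
stock HPL-style makefiles):
# Leave the MPI entries empty and let the mpicc wrapper
# supply the include and link flags itself.
MPdir =
MPinc =
MPlib =
CC = mpicc
LINKER = mpicc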
Note that George listed the v1.2 OMPI libraries (-lopen-rte and
-lopen-pal) -- the v1.1.x names are slightly different (-lorte and
-lopal). We had to change the back-end library names between v1.1 and
v1.2 because someone else out in the Linux community uses "libopal".
I typically prefer using the wrapper compilers, since they insulate you
from these library name changes.
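Concretely, the MPlib line differs by version; using the same paths as
George's example below, it would be one of:
# Open MPI v1.2:
MPlib = -L$(MPdir)/lib -lmpi -lopen-rte -lopen-pal
# Open MPI v1.1.x:
MPlib = -L$(MPdir)/lib -lmpi -lorte -lopal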
Hi George,
Would you say this is preferred to changing the default CC + LINKER?
Eric
On Wednesday, February 21, 2007, at 12:04, George Bosilca wrote:
> You should use something like this
> MPdir = /usr/local/mpi
> MPinc = -I$(MPdir)/include
> MPlib = -L$(MPdir)/lib -lmpi -lopen-rte -lopen-pal
>
You should use something like this
MPdir = /usr/local/mpi
MPinc = -I$(MPdir)/include
MPlib = -L$(MPdir)/lib -lmpi -lopen-rte -lopen-pal
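To double-check the exact flags that your particular install needs, you
can also ask the Open MPI wrapper compilers to print them:
mpicc --showme:compile   # the -I... preprocessor flags
mpicc --showme:link      # the -L.../-l... linker flags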
george.
On Feb 21, 2007, at 11:35 AM, Eric Thibodeau wrote:
Hello all,
As we all know, compiling with OpenMPI is not simply a matter of adding
-lmpi (http://www.ope
Thanks Laurent, I will try your proposed settings.
Note that I didn't want to use CC= and LINKER= since I don't know the
probable impact on the rest of the benchmarks...hmm...though this IS a
clustering benchmark. Also note that I wasn't trying to compile for MPICH; I
merely copied the example makefile, which happens to reference MPICH.
Hello,
I believe that you are trying to use MPICH, not OpenMPI (libmpich.a).
Personally, I compiled HPCC on IBM AIX with OpenMPI using these lines:
# ----------------------------------------------------------------------
# - Message Passing library (MPI) --------------------------------------
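[The section is cut off here; a sketch of what typically follows,
assuming an Open MPI install under /usr/local/mpi as in George's
message above:]
# ----------------------------------------------------------------------
MPdir = /usr/local/mpi
MPinc = -I$(MPdir)/include
MPlib = -L$(MPdir)/lib -lmpi -lopen-rte -lopen-pal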