Hey Brock,
Nope, no error messages during the execution. Plus, there were no errors
when I built Open MPI, so I guess I am good.
Thanks for the info. I appreciate it.
Jeff F. Pummill
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu
Brock Palen wrote:
You will know if it doesn't: you will get a bunch of messages about
not finding an IB card and about Open MPI falling back to another
transport.
Do all your nodes have infiniband?
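If you want to be certain rather than waiting for warnings, you can force the
IB transport so that any fallback becomes a hard error. A sketch only, assuming
the 1.2.x mvapi BTL name and a placeholder application name:

  mpirun --mca btl mvapi,self -np 2 ./your_mpi_app

If the mvapi BTL cannot be used on some node, that run should fail outright
instead of quietly dropping back to TCP.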
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Aug 23, 2007, at 9:27 PM, Jeff Pummill wrote:
I have successfully compiled Open MPI 1.2.3 against the Intel 8.1 compiler
suite and an old (three-year-old) mvapi stack using the following configure line:
configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort
Do I need to assign any particular flags to t
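One quick sanity check (assuming the install prefix above): ask ompi_info which
BTL components were built.

  /nfsutil/openmpi-1.2.3/bin/ompi_info | grep btl

If mvapi shows up in the MCA btl list, configure picked up the Topspin stack.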
Hi Josh,
I am not an expert in this area of the code, but I'll give it a shot.
(I assume you are using Linux, given your email address.) When using the memory
manager (which is the default on Linux), we wrap malloc/realloc/etc. with
ptmalloc2 (which is the same allocator used in glibc 2.3.x).
W
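If you want to compare against plain glibc behaviour, one option is a second
build without the wrapper. A sketch only (the prefix here is a made-up name;
the rest of the options are whatever you normally pass to configure):

  ./configure --prefix=$HOME/openmpi-1.2.3-nomm --without-memory-manager
  make all install

With that build, malloc/realloc go straight to glibc instead of the wrapped
ptmalloc2.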
Brian Barrett wrote:
> On Aug 23, 2007, at 4:33 AM, Bernd Schubert wrote:
>
>> I need to compile a benchmarking program and so far have absolutely no
>> experience with any MPI.
>> However, this looks like a general open-mpi problem, doesn't it?
>>
>> bschubert@lanczos MPI_IO> make
>>
On Aug 23, 2007, at 4:33 AM, Bernd Schubert wrote:
I need to compile a benchmarking program and so far have absolutely no
experience with any MPI.
However, this looks like a general open-mpi problem, doesn't it?
bschubert@lanczos MPI_IO> make
cp ../globals.f90 ./; mpif90 -O2 -c ../glo
I have found that the infiniserv MPI that comes with our IB software
distribution tracks the same behaviour as gcc (releasing memory on
realloc). I have also found that building Open MPI with
--without-memory-manager makes Open MPI track the same behaviour as
glibc. I'm guessing that there is a b
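If it helps, a quick way to tell whether a given install has the memory
manager compiled in (assuming the way 1.2.x ompi_info reports components) is:

  ompi_info | grep "MCA memory"

A ptmalloc2 line there means the wrapper is in use; no such line suggests the
build was done --without-memory-manager.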
Hi,
I need to compile a benchmarking program and so far have absolutely no
experience with any MPI.
However, this looks like a general open-mpi problem, doesn't it?
bschubert@lanczos MPI_IO> make
cp ../globals.f90 ./; mpif90 -O2 -c ../globals.f90
mpif90 -O2 -c main.f90
mpif90 -O2 -c reade