Thanks for the response; I was hoping I'd just messed up something simple.
Your advice took care of my issues.
On 27/03/07 14:15 -0400, George Bosilca wrote:
> Justin,
>
> There is no GM MTL. Therefore, the first mpirun allows the use of
> every available BTL, while the second one doesn't allow
Justin,
There is no GM MTL. Therefore, the first mpirun allows the use of
every available BTL, while the second one doesn't allow intra-node
communications or self. The correct mpirun command line should be:
mpirun -np 4 --mca btl gm,self ...
george.
On Mar 27, 2007, at 12:18 PM, Justin Br
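To make the gm,self point concrete, here is a minimal C sketch (an editorial addition, not part of the original thread) of a rank exchanging a message with itself; restricted to "--mca btl gm" alone that traffic has no transport, whereas "--mca btl gm,self" provides one (running "ompi_info | grep btl" should list which BTL components are actually built):

/* self_send.c -- hypothetical example of why the "self" BTL is needed;
 * not part of the original thread. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, val = 42, out = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Non-blocking send plus blocking receive to the same rank avoids deadlock. */
    MPI_Isend(&val, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
    MPI_Recv(&out, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d from itself\n", rank, out);
    MPI_Finalize();
    return 0;
}

Launched as "mpirun -np 4 --mca btl gm,self ./self_send" this should run; with the gm BTL alone it would be expected to fail.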
Well, MPICH2 and MVAPICH2 are working smoothly for my app. MPICH2 over
GigE is also giving ~2X the performance of Open MPI in the cases where
Open MPI does work. After the paper deadline, I'll attempt to package up
a simple test case and send it to the list.
Thanks!
-Mike
Mike Houston wrote
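For anyone who wants to reproduce this kind of GigE comparison before a proper test case is posted, a simple ping-pong microbenchmark is the usual starting point. The sketch below is hypothetical and is not the test case referred to above:

/* pingpong.c -- hypothetical ping-pong microbenchmark sketch; run with
 * exactly two ranks under each MPI implementation being compared. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int nbytes = 1 << 20;            /* 1 MB messages */
    char *buf = malloc(nbytes);
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %g s, bandwidth: %g MB/s\n",
               (t1 - t0) / iters,
               2.0 * nbytes * iters / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}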
Hello Mr. Van der Vlies,
We are currently looking into this problem and will send out an email as
soon as we have identified and fixed it.
Thank you,
> Subject: Re: [OMPI users] Memory leak in openmpi-1.2?
> Date: Tue, 27 Mar 2007 13:58:15 +0200
> From: Bas van der Vlies
> Reply-To: Open MPI
Since one of our users requires some of the gfortran features in 4.1.2, I
recently began building a new image. The issue is that "-mca btl gm" fails
while "-mca mtl gm" works. I have not yet done any benchmarking, as I was
wondering whether the move to mtl is part of the upgrade. Below are the package
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
>
> I notice that you are using the "medium" sized F90 bindings. Do
> these FAQ entries help?
>
> http://www.open-mpi.org/faq/?category=mpi-apps#f90-mpi-slow-compiles
>
I tried the trunk version with "--mca btl tcp,self". Essentially, system time
changes to idle time, since empty polling is being replaced by blocking
(right?). Page faults go to 0, though.
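From the application's point of view, the difference between polling and blocking waits looks roughly like the sketch below (an illustrative example, not Open MPI's actual progress engine):

/* wait_styles.c -- illustrative sketch of why a polling wait is charged as
 * CPU time while a blocking wait shows up as idle time; run with 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, msg = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        int done = 0;
        MPI_Irecv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Busy polling: this loop spins in user space, so the wait is charged
         * as user/system time. With a single MPI_Wait(&req, MPI_STATUS_IGNORE)
         * instead, a blocking implementation can sleep in the kernel and the
         * same interval is accounted as idle time. */
        while (!done)
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        printf("received %d\n", msg);
    } else if (rank == 1) {
        sleep(2);                    /* pretend the sender is busy elsewhere */
        msg = 7;
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}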
It is interesting since you can see what is going on now, with distinct
phases of user time and idle time (slee
I notice that you are using the "medium" sized F90 bindings. Do
these FAQ entries help?
http://www.open-mpi.org/faq/?category=mpi-apps#f90-mpi-slow-compiles
http://www.open-mpi.org/faq/?category=building#f90-bindings-slow-compile
On Mar 27, 2007, at 2:21 AM, de Almeida, Valmor F. wrote:
H
Bas van der Vlies wrote:
Hello,
We are testing Open MPI version 1.2 on Debian etch with openib. Some of
our users are running ScaLAPACK/BLACS jobs that run for a long time and
use a lot of MPI_Comm functions. We have made a small C example that
tests whether MPI can handle this situation (see at
Hello,
We are testing Open MPI version 1.2 on Debian etch with openib. Some of
our users are running ScaLAPACK/BLACS jobs that run for a long time and
use a lot of MPI_Comm functions. We have made a small C example that
tests whether MPI can handle this situation (see attached file). When we run thi
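The attachment is not reproduced in this digest; a minimal sketch of the kind of loop such a test might use (assuming the problem shows up with repeated communicator creation and destruction) would be:

/* comm_churn.c -- hypothetical reconstruction of the kind of test described
 * above; the actual attachment is not included here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    long i;

    MPI_Init(&argc, &argv);
    for (i = 0; i < 1000000; i++) {
        MPI_Comm dup;
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);   /* create a new communicator */
        MPI_Comm_free(&dup);                  /* free it; memory use should stay flat */
        if (i > 0 && i % 100000 == 0)
            printf("iteration %ld\n", i);
    }
    MPI_Finalize();
    return 0;
}

Watching the resident size of the MPI processes (for example with top) while this runs makes a leak in the communicator path easy to spot.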
Hello,
I am using mpic++ to create a program that combines C++ and F90
libraries. The libraries are created with mpic++ and mpif90. OpenMPI-1.2
was built using gcc-4.1.1 (the output of ompi_info follows below). The
final linking stage takes quite a long time compared to the creation of
the librar