Sorry for the delay on this -- is this still the case with the OMPI trunk?

We think we finally have all the issues solved with MPI_ABORT on the trunk.
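(For context only, not code from this thread: a minimal sketch of the kind of program that exercises MPI_ABORT, whose trunk-side cleanup fixes are mentioned above. The file name and structure are illustrative assumptions.)

/* abort_test.c -- hypothetical reproducer name; build with e.g. "mpicc abort_test.c -o abort_test" */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 aborts the whole job; the fixes discussed concern how
         * cleanly the runtime tears down the remaining ranks afterwards. */
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* The other ranks block here; a correct MPI_ABORT should still
     * terminate them rather than leave them hanging. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}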



On Oct 16, 2006, at 8:29 AM, Åke Sandgren wrote:

On Mon, 2006-10-16 at 10:13 +0200, Åke Sandgren wrote:
On Fri, 2006-10-06 at 00:04 -0400, Jeff Squyres wrote:
On 10/5/06 2:42 PM, "Michael Kluskens" <mk...@ieee.org> wrote:

System: BLACS 1.1p3 on Debian Linux 3.1r3 on a dual Opteron, gcc 3.3.5,
Intel ifort 9.0.32; all tests run with 4 processors (comments below)

OpenMPI 1.1.1 patched and OpenMPI 1.1.2 patched:
C & F tests: no errors with the default data set. The F test slowed down
in the middle of the tests.

Good.  Can you expand on what you mean by "slowed down"?

Let's add some more data to this...
BLACS 1.1p3
Ubuntu Dapper 6.06
dual Opteron
gcc 4.0
gfortran 4.0 (for both f77 and f90)
standard tests with 4 tasks on one node (i.e., 2 tasks per CPU)

OpenMPI 1.1.2rc3
The tests come to a complete standstill at the integer bsbr tests.
They consume CPU all the time, but nothing happens.

Actually, if I'm not too impatient, it does progress, but VERY slowly.
A complete run of the BLACS tests takes more than 30 minutes of CPU time...
From the bsbr tests onwards, everything takes "forever".
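(Aside, not from the tester itself: the bsbr tests exercise the BLACS broadcast send/receive routines, e.g. IGEBS2D/IGEBR2D for integers, which Open MPI ultimately services with point-to-point or collective traffic. A stripped-down MPI analogue of that pattern, with purely illustrative buffer sizes and loop counts, might look like the sketch below.)

/* bsbr_like.c -- hypothetical name; loosely mimics repeated scoped broadcasts */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, buf[1024];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcast an integer buffer from each rank in turn, the kind of
     * traffic pattern that stalls in the report above. */
    for (i = 0; i < 1000; i++) {
        int root = i % size;
        if (rank == root) {
            int j;
            for (j = 0; j < 1024; j++) buf[j] = i + j;
        }
        MPI_Bcast(buf, 1024, MPI_INT, root, MPI_COMM_WORLD);
    }

    if (rank == 0) printf("bsbr-like loop finished\n");
    MPI_Finalize();
    return 0;
}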



--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems

