Ricardo,
I checked on Linux and on Mac OS X 10.5.7 with the Fortran compilers
from hpc.sourceforge.net, and I get the correct answer. As you only
report problems on Mac OS X, I wonder if the real source of the
problem is a library mismatch. As you know, Open MPI is bundled with
Leopard, and we have had problems in the past when users install
their own version and the paths are not set correctly.
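A quick way to check which installation the wrapper compilers
actually resolve to (assuming the wrappers are in your PATH;
--showme only prints the underlying compile command without running
it):

$ which mpif90
$ mpif90 --showme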
Let's try to understand what the problem is on your system. First,
please recompile your version of Open MPI, adding
--enable-mpirun-prefix-by-default to the configure line. Then, once
everything is installed, compile a simple application (inplace.F is a
good example), run "otool -L a.out", and send us the output.
Thanks,
george.
On Jul 28, 2009, at 10:15, Ricardo Fonseca wrote:
Hi George
Thanks for the input. This might be an OS-specific problem: I'm
running Mac OS X 10.5.7, and the problem appears in Open MPI
versions 1.3.2, 1.3.3 and 1.4a1r21734, using the Intel ifort
compiler 11.0 and 11.1 (and also g95 + 1.3.2). I haven't tried
older versions. Also, I'm running on a single machine:
zamb$ mpif90 inplace_test.f90
zamb$ mpirun -np 2 ./a.out
Result:
2.000000 2.000000 2.000000 2.000000
I've tried the same code under Linux (openmpi-1.3.3 + gfortran) and
it works (as it does on other platforms / MPIs).
Can you think of any --mca options I should try? (Or any other
ideas...)
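For instance, would forcing a different collective component be a
useful test? Something like the following, if I have the syntax
right (my understanding is that ^tuned excludes the tuned
collectives):

$ mpirun --mca coll ^tuned -np 2 ./a.out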
Cheers,
Ricardo
---
Prof. Ricardo Fonseca
GoLP - Grupo de Lasers e Plasmas
Instituto de Plasmas e Fusão Nuclear
Instituto Superior Técnico
Av. Rovisco Pais
1049-001 Lisboa
Portugal
tel: +351 21 8419202
fax: +351 21 8464455
web: http://cfp.ist.utl.pt/golp/
On Jul 28, 2009, at 4:24, users-requ...@open-mpi.org wrote:
Date: Mon, 27 Jul 2009 17:13:23 -0400
From: George Bosilca <bosi...@eecs.utk.edu>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE /
MPI_ALLREDUCE
To: Open MPI Users <us...@open-mpi.org>
Ricardo,
I can't reproduce your problem with the latest version (trunk
r21734).
If I run the provided program on two nodes I get the following
answer.
[***]$ mpif77 inplace.f -o inplace -g
[***]$ mpirun -bynode -np 2 ./inplace
Result:
3.0000000 3.0000000 3.0000000 3.0000000
This seems correct (ranks 0 and 1 contribute 1.0 and 2.0
respectively, so the sum is 3.0) and in sync with the C answer.
george.
On Jul 27, 2009, at 09:42, Ricardo Fonseca wrote:
program inplace

  use mpi
  implicit none

  integer :: ierr, rank, rsize, bsize
  real, dimension( 2, 2 ) :: buffer, out
  integer :: rc

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, rsize, ierr)

  buffer = rank + 1
  bsize = size(buffer,1) * size(buffer,2)

  if ( rank == 0 ) then
    ! root reduces in place: buffer is both input and output
    call mpi_reduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, MPI_SUM, &
                     0, MPI_COMM_WORLD, ierr )
  else
    ! recvbuf is ignored on non-root ranks, but pass a valid array
    call mpi_reduce( buffer, out, bsize, MPI_REAL, MPI_SUM, &
                     0, MPI_COMM_WORLD, ierr )
  endif

  ! use allreduce instead
  ! call mpi_allreduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, &
  !                     MPI_SUM, MPI_COMM_WORLD, ierr )

  if ( rank == 0 ) then
    print *, 'Result:'
    print *, buffer
  endif

  rc = 0
  call mpi_finalize( rc )

end program