Clarification on this -- my earlier response wasn't quite right...
We actually do not provide F90 bindings for MPI_Reduce (and several
other collectives) because they take 2 user-provided buffers. Since
F90 is strongly typed, that means that for N intrinsic types there are
N^2 possible overloads of this function (actually even more than N^2,
once you count all the array dimension possibilities). Although Open
MPI can generate all of those F90 interfaces, every compiler we have
tried so far seg faults when we put that many subroutines in a single
module.
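Just to illustrate the combinatorics (this is a toy sketch, not the
code Open MPI actually generates -- the names are all made up), a
strongly-typed generic interface needs one specific subroutine per
combination of buffer types and ranks:

    module reduce_overloads
      implicit none
      ! One specific per (sendbuf type, recvbuf type) combination; only two
      ! are spelled out here -- the real list grows as N^2, and further with
      ! every array dimension.
      interface my_reduce
        module procedure reduce_int_int
        module procedure reduce_dbl_dbl
      end interface
    contains
      subroutine reduce_int_int(sendbuf, recvbuf)
        integer, intent(in)  :: sendbuf
        integer, intent(out) :: recvbuf
        recvbuf = sendbuf
      end subroutine reduce_int_int

      subroutine reduce_dbl_dbl(sendbuf, recvbuf)
        double precision, intent(in)  :: sendbuf
        double precision, intent(out) :: recvbuf
        recvbuf = sendbuf
      end subroutine reduce_dbl_dbl
    end module reduce_overloads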
Hence, we don't generate subroutines for those functions -- and that's
what you were seeing (you could compile, but you couldn't link). The
bug was that we were still generating the interfaces for them, which
meant your compiler went looking for an F90-specific subroutine that
we never provided.
The correct solution was for us to remove the F90 MPI_Reduce (and
friends) interfaces so that you automatically fall through to the F77
bindings for MPI_Reduce (unfortunately, there's no compile-time type
checking that way, but at least it compiles/links/runs). I've
committed this on both the trunk and the branch; it'll be in
tomorrow's nightly tarballs.
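So once the interfaces are gone, plain F90 code along these lines
(just an illustrative sketch -- the program and variable names are
made up) will compile and link straight against the F77 binding:

    program reduce_example
      use mpi            ! F90 module; MPI_Reduce resolves to the F77 binding
      implicit none
      integer :: ierr, rank
      double precision :: local, total

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      local = dble(rank)
      ! With no explicit F90 interface for MPI_Reduce, the buffer types are
      ! not checked at compile time -- matching them to MPI_DOUBLE_PRECISION
      ! is up to the caller.
      call MPI_Reduce(local, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                      MPI_COMM_WORLD, ierr)

      if (rank == 0) print *, 'sum of ranks =', total
      call MPI_Finalize(ierr)
    end program reduce_example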
This problem is actually a well-known deficiency in the MPI F90
bindings; it's been mentioned in a few places, probably most recently
in our paper proposing new MPI Fortran bindings:
http://www.open-mpi.org/papers/euro-pvmmpi-2005-fortran
On Nov 10, 2005, at 8:17 AM, Jeff Squyres wrote:
Great Leaping Lizards, Batman!
Unbelievably, the MPI_Reduce interfaces were left out. I'm going to
do a complete F90 audit right now to ensure that no other interfaces
were unintentionally excluded; I'll commit a fix today.
Thanks for catching this!
On Nov 9, 2005, at 8:15 PM, Brent LEBACK wrote:
I'm building rc4 with a soon-to-be-released pgf90. mpicc and mpif77
both seem okay. When I compile with mpif90 I get:
: In function `MAIN_':
: undefined reference to `mpi_reduce0dr8_'
pgf90-Fatal-linker completed with exit code 1
Your problem or mine? I see these type extensions for bcast and
various sends and receives in libmpi_f90.a, but nothing for
mpi_reduce. Where should I be looking?
This is on an opteron cluster, SLES 9.
Thanks.
- Brent
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/