On Aug 6, 2013, at 8:56 AM, Jeff Squyres wrote:
> You should be able to apply the attached patch to an OMPI 1.7.x tarball to
> add the MPI_Get_address implementation (which is a little difficult for you
> because you're installing via macports, but...).
Oops --- I neglected to attach the patch.
On Aug 6, 2013, at 8:40 AM, Hugo Gagnon wrote:
Does this mean that for now I can just replace the MPI_Get_address calls
with MPI_Address?

I tried it and I got:

$ openmpif90 test.f90
test.f90:11.32:

call MPI_Address(a,address,ierr)
                               1
Error: There is no specific subroutine for the generic 'mpi_address' at (1)
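[For reference, a minimal sketch of what a test.f90 along these lines might look like; the full program was not posted, so the variable names and declarations here are assumptions. One detail worth noting: the deprecated MPI_Address is specified with a default INTEGER for its address argument, whereas MPI_Get_address takes INTEGER(KIND=MPI_ADDRESS_KIND), so simply swapping the call name without changing the declaration is one possible way to hit exactly the generic-resolution error shown above.]

! Hypothetical reconstruction -- names and kinds are assumptions.
program test
  use mpi
  implicit none
  integer :: a, ierr
  integer(kind=MPI_ADDRESS_KIND) :: address

  call MPI_Init(ierr)

  ! MPI-2 routine: its ADDRESS dummy is INTEGER(KIND=MPI_ADDRESS_KIND).
  call MPI_Get_address(a, address, ierr)

  ! Deprecated MPI-1 routine: its ADDRESS dummy is a default INTEGER, so
  ! passing an MPI_ADDRESS_KIND integer may not match any specific in the
  ! mpi module's generic interface (one possible source of the error above).
  ! call MPI_Address(a, address, ierr)

  call MPI_Finalize(ierr)
end program test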
You found a bug!
Embarrassingly, we have MPI_Get_address prototyped in the Fortran module, but
it is not actually implemented (whereas MPI_Address is both prototyped and
implemented). Yow. :-(
This is just a minor oversight; there's no technical issue that prevents this
implementation. I've
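[To make "prototyped but not implemented" concrete, here is a small, generic illustration; stub_mod and declared_but_missing are invented names, and this is not Open MPI source. A module declares an interface for a subroutine that no object file actually provides, so callers compile cleanly against the module but the link step fails with an undefined reference -- the same shape of problem described above for MPI_Get_address in the 1.7 mpi module.]

! Generic illustration only; these names are made up.
module stub_mod
  interface
     ! Declared (prototyped) here, but never implemented anywhere.
     subroutine declared_but_missing(location, address, ierr)
       integer, intent(in)  :: location
       integer, intent(out) :: address, ierr
     end subroutine declared_but_missing
  end interface
end module stub_mod

program demo
  use stub_mod
  implicit none
  integer :: loc, addr, ierr
  loc = 42
  ! Compiles fine against the interface; linking fails with an
  ! undefined reference because no implementation was ever built.
  call declared_but_missing(loc, addr, ierr)
end program demo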
The provided code sample is not correct, thus the real issue has nothing to do
with the amount of data to be handled by the MPI implementation. Scale the
amount to allocate down to 2^27 and the issue will still persist…
Your MPI_Allgatherv operation receives recvCount[i]*MPI_INT from each peer a
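[The code sample being discussed is not reproduced here, so the following is only a sketch of the sizing rule this reply is pointing at: with MPI_Allgatherv, every process receives recvCount[i] elements from peer i, so every process must allocate a receive buffer large enough for the sum of all the counts, not just its own contribution. The sketch uses Fortran with assumed names (recvCounts, displs); the reply's recvCount[i]*MPI_INT corresponds here to recvCounts(i+1) elements of MPI_INTEGER.]

! Sketch only; names and the per-rank counts are assumptions.
program allgatherv_sizing
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, total
  integer, allocatable :: recvCounts(:), displs(:), sendbuf(:), recvbuf(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(recvCounts(nprocs), displs(nprocs))
  do i = 1, nprocs
     recvCounts(i) = i            ! example: rank i-1 contributes i integers
  end do
  displs(1) = 0
  do i = 2, nprocs
     displs(i) = displs(i-1) + recvCounts(i-1)
  end do
  total = sum(recvCounts)

  ! The receive buffer must hold the data from *all* peers -- sum(recvCounts)
  ! elements -- in every process, not just this rank's own recvCounts entry.
  allocate(sendbuf(recvCounts(rank+1)), recvbuf(total))
  sendbuf = rank

  call MPI_Allgatherv(sendbuf, recvCounts(rank+1), MPI_INTEGER, &
                      recvbuf, recvCounts, displs, MPI_INTEGER, &
                      MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program allgatherv_sizing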