Try mpi_real8 for the datatype in the allreduce call.
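
For example, the only change would be the datatype argument in the
mpi_allreduce call on line 16 of your snippet (a minimal sketch;
mpi_real8 is the MPI datatype for an 8-byte real, which is what
kind(1.d0) typically selects with ifort):

        call mpi_allreduce(inside, outside, 5, mpi_real8, &
                           mpi_sum, mpi_comm_world, ierr)

Then rebuild and rerun as before, e.g. with Open MPI's wrappers:

        mpif90 test.f90 && mpirun -np 2 ./a.out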

On 7/26/10, Hugo Gagnon <sourceforge.open...@user.fastmail.fm> wrote:
> Hello,
>
> When I compile and run this code snippet:
>
>   1 program test
>   2
>   3         use mpi
>   4
>   5         implicit none
>   6
>   7         integer :: ierr, nproc, myrank
>   8         integer, parameter :: dp = kind(1.d0)
>   9         real(kind=dp) :: inside(5), outside(5)
>  10
>  11         call mpi_init(ierr)
>  12         call mpi_comm_size(mpi_comm_world, nproc, ierr)
>  13         call mpi_comm_rank(mpi_comm_world, myrank, ierr)
>  14
>  15         inside = (/ 1, 2, 3, 4, 5 /)
>  16         call mpi_allreduce(inside, outside, 5, mpi_double_precision, mpi_sum, mpi_comm_world, ierr)
>  17
>  18         print*, myrank, inside
>  19         print*, outside
>  20
>  21         call mpi_finalize(ierr)
>  22
>  23 end program test
>
> I get the following error with, say, 2 processes:
>
> forrtl: severe (174): SIGSEGV, segmentation fault occurred
> Image              PC                Routine            Line        Source
> libmpi.0.dylib     00000001001BB4B7  Unknown               Unknown  Unknown
> libmpi_f77.0.dyli  00000001000AF046  Unknown               Unknown  Unknown
> a.out              0000000100000CE2  _MAIN__                    16   test.f90
> a.out              0000000100000BDC  Unknown               Unknown  Unknown
> a.out              0000000100000B74  Unknown               Unknown  Unknown
> forrtl: severe (174): SIGSEGV, segmentation fault occurred
> Image              PC                Routine            Line        Source
> libmpi.0.dylib     00000001001BB4B7  Unknown               Unknown  Unknown
> libmpi_f77.0.dyli  00000001000AF046  Unknown               Unknown  Unknown
> a.out              0000000100000CE2  _MAIN__                    16   test.f90
> a.out              0000000100000BDC  Unknown               Unknown  Unknown
> a.out              0000000100000B74  Unknown               Unknown  Unknown
>
> on my iMac, having compiled Open MPI with ifort according to:
> http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-open-mpi-with-the-intel-compilers/
>
> Note that the above code snippet runs fine on my school's parallel
> cluster, where ifort + Intel MPI is installed.
> Is there something special about Open MPI's MPI_Allreduce call that I
> should be aware of?
>
> Thanks,
> --
>   Hugo Gagnon
>

-- 
David Zhang
University of California, San Diego
