Hi Hugo, David, Jeff, Terry, Anton, list
I suppose we're guessing that, somehow, on Hugo's iMac
MPI_DOUBLE_PRECISION may not have as many bytes as dp = kind(1.d0),
hence the segmentation fault in MPI_Allreduce.
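(To check the Fortran side of that guess, something along these lines
should print the byte size of kind(1.d0); storage_size is a Fortran 2008
intrinsic, so it needs a reasonably recent compiler; just a sketch,
I haven't tried it on the iMac:)

program dp_bytes
  implicit none
  integer, parameter :: dp = kind(1.d0)
  real(kind=dp) :: x
  ! storage_size reports the size in bits, hence the division by 8
  print *, 'kind(1.d0) =', dp, ' size in bytes =', storage_size(x)/8
end program dp_bytes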
Question:
Is there a simple way to check the number of bytes associated with each
MPI basic type in OpenMPI on a specific machine (or machine+compiler)?
Something that would come out easily, say, from ompi_info?
The information I get is C-centered: :(
$ ompi_info --all |grep -i double
C double size: 8
C double align: 8
If this is not possible yet, please consider it a feature request ... :)
(Or would that go against the "opacity" of datatypes in the MPI standard?)
I poked around in the OpenMPI include directory to no avail.
MPI_DOUBLE_PRECISION is defined as a constant (it is 17 here),
which doesn't seem to be related to the actual size in bytes.
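In the meantime, a runtime check seems possible with MPI_Type_size, which
should report the number of bytes OpenMPI itself associates with the
datatype. A minimal sketch (untested here, but MPI_Type_size is standard MPI):

program check_type_size
  use mpi
  implicit none
  integer :: ierr, nbytes
  call mpi_init(ierr)
  ! MPI_Type_size returns the number of bytes of data in the datatype
  call mpi_type_size(mpi_double_precision, nbytes, ierr)
  print *, 'MPI_DOUBLE_PRECISION:', nbytes, ' bytes'
  call mpi_finalize(ierr)
end program check_type_size

(That only helps at run time, of course, not at configure/build time.)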
I found some stuff in my OpenMPI config.log, though:
$ grep -i double_precision config.log
... (tons of lines)
ompi_cv_f77_alignment_DOUBLE_PRECISION=8
ompi_cv_f77_have_DOUBLE_PRECISION=yes
ompi_cv_f77_sizeof_DOUBLE_PRECISION=8
ompi_cv_f90_have_DOUBLE_PRECISION=yes
ompi_cv_f90_sizeof_DOUBLE_PRECISION=8
ompi_cv_find_type_DOUBLE_PRECISION=double
OMPI_SIZEOF_F90_DOUBLE_PRECISION='8'
#define OMPI_HAVE_FORTRAN_DOUBLE_PRECISION 1
#define OMPI_SIZEOF_FORTRAN_DOUBLE_PRECISION 8
#define OMPI_ALIGNMENT_FORTRAN_DOUBLE_PRECISION 8
#define ompi_fortran_double_precision_t double
#define OMPI_HAVE_F90_DOUBLE_PRECISION 1
Thank you,
Gus Correa
David Zhang wrote:
Try mpi_real8 for the type in allreduce
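Something like this, i.e. only the datatype argument changes (mpi_real8 is
an optional Fortran type in the MPI standard, but OpenMPI provides it;
untested sketch):

  call mpi_allreduce(inside, outside, 5, mpi_real8, &
                     mpi_sum, mpi_comm_world, ierr)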
On 7/26/10, Hugo Gagnon <sourceforge.open...@user.fastmail.fm> wrote:
Hello,
When I compile and run this code snippet:
1 program test
2
3 use mpi
4
5 implicit none
6
7 integer :: ierr, nproc, myrank
8 integer, parameter :: dp = kind(1.d0)
9 real(kind=dp) :: inside(5), outside(5)
10
11 call mpi_init(ierr)
12 call mpi_comm_size(mpi_comm_world, nproc, ierr)
13 call mpi_comm_rank(mpi_comm_world, myrank, ierr)
14
15 inside = (/ 1, 2, 3, 4, 5 /)
16 call mpi_allreduce(inside, outside, 5, mpi_double_precision, &
        mpi_sum, mpi_comm_world, ierr)
17
18 print*, myrank, inside
19 print*, outside
20
21 call mpi_finalize(ierr)
22
23 end program test
I get the following error with, say, 2 processors:
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine    Line       Source
libmpi.0.dylib     00000001001BB4B7  Unknown    Unknown    Unknown
libmpi_f77.0.dyli  00000001000AF046  Unknown    Unknown    Unknown
a.out              0000000100000CE2  _MAIN__    16         test.f90
a.out              0000000100000BDC  Unknown    Unknown    Unknown
a.out              0000000100000B74  Unknown    Unknown    Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine    Line       Source
libmpi.0.dylib     00000001001BB4B7  Unknown    Unknown    Unknown
libmpi_f77.0.dyli  00000001000AF046  Unknown    Unknown    Unknown
a.out              0000000100000CE2  _MAIN__    16         test.f90
a.out              0000000100000BDC  Unknown    Unknown    Unknown
a.out              0000000100000B74  Unknown    Unknown    Unknown
on my iMac, where I compiled OpenMPI with ifort according to:
http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-open-mpi-with-the-intel-compilers/
Note that the above code snippet runs fine on my school's parallel cluster,
where ifort + Intel MPI is installed.
Is there something special about OpenMPI's MPI_Allreduce function call
that I should be aware of?
Thanks,
--
Hugo Gagnon