Some MPI libraries (including OMPI and IMPI) hook the system memory
management routines such as 'malloc' and 'free' (which Fortran uses behind
the scenes on Unix). This is usually done in order to manage memory
registration for RDMA-based networks like InfiniBand. My guess is that
Open MPI installs these hooks when MPI_INIT is called, which is why you
see the problem once MPI_INIT has been called but not when the call is
commented out.

Could you try running your serial program under Valgrind and see whether it
reports any erroneous memory accesses? It could be that GCC's implementation
of the automatic allocation is broken and that OMPI's intervention in the
memory management process merely exposes an already existing problem.

Kind regards,
Hristo

> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Stefan Mauerberger
> Sent: Monday, January 14, 2013 12:08 PM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Initializing OMPI with invoking the array
> constructor on Fortran derived types causes the executable to crash
> 
> Well, I failed to emphasize one thing: it is my intention to exploit
> F2003's lhs-(re)allocate feature. Meaning, it is totally legal in F03 to
> write something like this:
> integer, allocatable :: array(:)
> array = [ 1,2,3,4 ]
> array = [ 1 ]
> where 'array' gets automatically (re)allocated. One more thing I should
> mention: in case 'array' is manually allocated, everything is fine.
> 
> Ok, let's do a little case study and make my suggested minimal example a
> little more exhaustive:
> PROGRAM main
> 
>     IMPLICIT NONE
>     !INCLUDE 'mpif.h'
> 
>     INTEGER :: ierr
> 
>     TYPE :: test_typ
>         REAL, ALLOCATABLE :: a(:)
>     END TYPE
> 
>     TYPE(test_typ) :: xx, yy
>     TYPE(test_typ), ALLOCATABLE :: conc(:)
> 
>     !CALL mpi_init( ierr )
> 
>     xx = test_typ( a=[1.0] )
>     yy = test_typ( a=[2.0,1.0] )
> 
>     conc = [ xx, yy ]
> 
>     WRITE(*,*) SIZE(conc)
> 
>     !CALL mpi_finalize( ierr )
> 
> END PROGRAM main
> Note: To begin with, all MPI stuff is commented out; xx and yy are
> initialized and their member variable 'a' is allocated.
> 
> For now, assume it is purely serial. That piece of code compiles and runs
> properly with:
>  * gfortran 4.7.1, 4.7.2 and 4.8.0 (experimental)
>  * ifort 12.1 and 13.0 (-assume realloc_lhs)
>  * nagfor 5.3
> On the contrary, it terminates with a segfault using
>  * pgfortran 12.9
> Well, for the following let's simply drop PGI. In addition, according to
> 'The Fortran 2003 Handbook' published by Springer in 2009, the usage of
> the array constructor [...] is appropriate and valid.
> 
> As a second step, let's try to compile and run it against OMPI, just
> pulling in INCLUDE 'mpif.h':
>  * gfortran: all right
>  * ifort: all right
>  * nagfor: all right
> 
> Finally, let's initialize MPI by calling MPI_Init() and MPI_Finalize():
>  * gfortran + OMPI: *** glibc detected *** ./a.out: free(): invalid
>    pointer ...
>  * gfortran + Intel-MPI: *** glibc detected *** ./a.out: free(): invalid
>    pointer ...
>  * ifort + OMPI: all right
>  * nagfor + OMPI: all right (-thread_safe)
> 
> Well, you are right, this is a very strong indication that gfortran is to
> blame! However, it gets even more confusing. Instead of linking against
> OMPI, the following results are obtained with IBM's MPI implementation:
>  * gfortran + IBM-MPI: all right
>  * ifort + IBM-MPI: all right
> Isn't that weird?
> 
> Any suggestions? Might it be worth submitting a bug report to the GCC
> developers?
> 
> Cheers,
> Stefan
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Hristo Iliev, Ph.D. -- High Performance Computing
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23,  D 52074  Aachen (Germany)
