Well, I forgot to emphasize one thing: it is my intention to exploit
F2003's LHS-(re)allocation feature. Meaning, it is perfectly legal in F03
to write something like this:
integer, allocatable :: array(:)
array = [ 1,2,3,4 ]   ! 'array' is automatically allocated with size 4
array = [ 1 ]         ! ... and automatically reallocated with size 1
where 'array' gets automatically (re)allocated. One more thing I should
mention: in case 'array' is allocated manually, everything is fine.
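Just to be precise about what I mean by 'allocated manually', here is
roughly what that looks like (a minimal sketch, without relying on
realloc-lhs):
allocate( array(4) )   ! explicit allocation up front
array = [ 1,2,3,4 ]    ! shapes match, no (re)allocation involved
deallocate( array )
allocate( array(1) )
array = [ 1 ]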

OK, let's do a little case study and make my suggested minimal example a
little more exhaustive:
PROGRAM main

    IMPLICIT NONE 
    !INCLUDE 'mpif.h'

    INTEGER :: ierr 

    TYPE :: test_typ
        REAL, ALLOCATABLE :: a(:)
    END TYPE

    TYPE(test_typ) :: xx, yy
    TYPE(test_typ), ALLOCATABLE :: conc(:)

    !CALL mpi_init( ierr )

    xx = test_typ( a=[1.0] )
    yy = test_typ( a=[2.0,1.0] )

    conc = [ xx, yy ]       ! relies on automatic (re)allocation of 'conc'

    WRITE(*,*) SIZE(conc)   ! should print 2

    !CALL mpi_finalize( ierr )

END PROGRAM main 
Note: To begin with, all MPI-related stuff is commented out; xx and yy are
initialized and their member variable 'a' is allocated.

For now, consider it purely serial. That piece of code compiles and
runs properly with:
 * gfortran 4.7.1, 4.7.2 and 4.8.0 (experimental)
 * ifort 12.1 and 13.0 (-assume realloc_lhs)
 * nagfor 5.3
By contrast, it terminates with a segfault when built with
 * pgfortran 12.9
Well, for the following let's simply drop PGI. In addition, according to
'The Fortran 2003 Handbook' published by Springer in 2009, this usage of
the array constructor [...] is appropriate and valid.
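For reproducibility, the serial builds boil down to something like the
following (the file name 'main.f90' is just a placeholder, and exact
flags and version suffixes may differ per installation):
  gfortran main.f90                      && ./a.out
  ifort -assume realloc_lhs main.f90     && ./a.out
  nagfor main.f90                        && ./a.out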

As a second step, let's try to compile and run it against OMPI, for now
only taking INCLUDE 'mpif.h' into account (the exact change is sketched
after the list):
 * gfortran: all right 
 * ifort: all right
 * nagfor: all right
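To be explicit, the only source change in this step is activating the
include line; the MPI calls remain commented out. The binary is then
built through the respective MPI compiler wrapper (e.g. mpif90 in the
case of OMPI). Roughly:
    INCLUDE 'mpif.h'             ! now active
    ...
    !CALL mpi_init( ierr )       ! still commented out
    !CALL mpi_finalize( ierr )   ! still commented out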

Finally, let's actually initialize MPI by calling MPI_Init() and
MPI_Finalize() (see the sketch after the list):
 * gfortran + OMPI: *** glibc detected *** ./a.out: free(): invalid
pointer ...
 * gfortran + Intel-MPI: *** glibc detected *** ./a.out: free(): invalid
pointer ...
 * ifort + OMPI: all right 
 * nagfor + OMPI: all right (-thread_safe)
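For completeness, the fully MPI-enabled variant differs from the listing
above only in these lines, run with a single rank (e.g. via
'mpirun -np 1 ./a.out' in the OMPI case):
    INCLUDE 'mpif.h'
    ...
    CALL mpi_init( ierr )
    ...
    CALL mpi_finalize( ierr )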

Well, you are right, this is a very strong indication to blame gfortran
for it! However, it gets even more confusing. Instead of linking
against OMPI, the following results are obtained with IBM's MPI
implementation:
 * gfortran + IBM-MPI: all right
 * ifort + IBM-MPI: all right 
Isn't that weird?

Any suggestions? Might it be useful to submit a bug report to the GCC
developers?

Cheers, 
Stefan 

