I can't give you a complete answer, but I think this is less an MPI question and more of a Fortran question.  The question is: if you have a Fortran derived type, one of whose components is a POINTER, what does the data structure look like in linear memory?  I imagine the answer is implementation dependent.  Anyhow, here is a sample, non-MPI Fortran program that illustrates the question:

% cat b.f90
      type :: small
          integer, pointer :: array(:)
      end type small
      type(small) :: lala

      integer, pointer :: array(:)

      n = 20

      allocate( lala%array(n) )
      allocate(      array(n) )

      lala%array = (/ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 /)
           array = (/ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 /)

      call sub(lala)         ! pass the derived type itself
      call sub(lala%array)   ! pass the POINTER component
      call sub(     array)   ! pass a bare POINTER array
end

subroutine sub(x)
      integer x(20)
      write(6,*) x
end
% f90 b.f90
% a.out
 599376 20 4 599372 1 20 -4197508 1 2561 0 33 0 0 0 0 0 0 0 0 0
 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
%

So, your model of 20 consecutive words does not work if you pass the derived type.  It does work if you pass the POINTER component.  This is with Oracle (Sun) Studio Fortran.  Again, I can imagine the behavior depends on the Fortran compiler.

I suspect what's going on is that a POINTER component is stored as a descriptor holding metadata (address, bounds, strides) rather than the data itself, but when you pass a POINTER as an actual argument the compiler knows to pass the thing it points to rather than the descriptor.
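
If it helps, here is an untested sketch of the usual workaround for derived types whose data lives behind a POINTER: ask MPI_GET_ADDRESS for the absolute address of the pointee, use that as the displacement in MPI_TYPE_CREATE_STRUCT, and then pass MPI_BOTTOM as the send/receive buffer.  The names mirror Jeremy's example below; each rank has to build the datatype against its own allocation, and the type has to be rebuilt if the pointer is ever re-associated.

program mpi_bottom_sketch
      use mpi
      implicit none
      type :: small
          real, pointer :: array(:)
      end type small
      type(small) :: lala
      integer :: counts(1), types(1), ierr, iam, n, MPI_SMALL
      integer :: stat(MPI_STATUS_SIZE)
      integer (kind=MPI_ADDRESS_KIND) :: displs(1)

      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, iam, ierr )
      n = 20
      allocate( lala%array(n) )
      lala%array = 2.0

      ! Take the absolute address of the pointee; a displacement of 0
      ! relative to lala only reaches the pointer's descriptor.
      call MPI_GET_ADDRESS( lala%array(1), displs(1), ierr )
      counts = (/ n /)
      types  = (/ MPI_REAL /)
      call MPI_TYPE_CREATE_STRUCT( 1, counts, displs, types, MPI_SMALL, ierr )
      call MPI_TYPE_COMMIT( MPI_SMALL, ierr )

      ! Because the displacement is absolute, the buffer argument is MPI_BOTTOM.
      if (iam .eq. 0) then
            lala%array = 1.0
            call MPI_SEND( MPI_BOTTOM, 1, MPI_SMALL, 1, 1, MPI_COMM_WORLD, ierr )
      else if (iam .eq. 1) then
            call MPI_RECV( MPI_BOTTOM, 1, MPI_SMALL, 0, 1, MPI_COMM_WORLD, stat, ierr )
            write(*,*) "iam ", iam, " and lala%array(1) = ", lala%array(1), " (should be 1.0)"
      end if

      call MPI_TYPE_FREE( MPI_SMALL, ierr )
      call MPI_FINALIZE( ierr )
end program mpi_bottom_sketch

Alternatively, just send lala%array directly with count n and MPI_REAL, as in your commented-out lines, and keep the struct datatype for the components that really are laid out contiguously inside the derived type.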

Jeremy Roberts wrote:
I'm trying to parallelize a Fortran code with rather complicated derived types full of pointer arrays.  When I build the MPI type for sending, all the static components are sent, but the pointer arrays are not (and retain initial values).  I imagine this has to do with memory addresses when creating the MPI struct, but I have no idea how to fix it.

I've included a simple code illustrating my issue below.  Any suggestions?

program mpi_struct_example
      use mpi
      implicit none
      ! declarations
      type :: small
          real, pointer :: array(:)
      end type small
      type(small) :: lala
      integer :: stat(MPI_STATUS_SIZE), counts(1), types(1), ierr, iam, n=0, MPI_SMALL
      integer (kind=MPI_ADDRESS_KIND) :: displs(1)
      ! initialize MPI and get my rank
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, iam, ierr )
      n = 20
      allocate( lala%array(n) )
      lala%array = 2.0
      ! build block counts, displacements, and oldtypes
      counts     = (/n/)
      displs     = (/0/)
      types      = (/MPI_REAL/)
      ! make and commit new type
      call MPI_TYPE_CREATE_STRUCT( 1, counts, displs, types, MPI_SMALL, ierr )
      call MPI_TYPE_COMMIT( MPI_SMALL, ierr )
      if (iam .eq. 0) then
            ! reset the value of the array
            lala%array  = 1.0
            call MPI_SEND( lala, 1, MPI_SMALL, 1, 1, MPI_COMM_WORLD, ierr)       ! this doesn't work
            !call MPI_SEND( lala%array, n, MPI_REAL, 1, 1, MPI_COMM_WORLD, ierr) ! this does work
            write (*,*) "iam ",iam," and lala%array(1)  = ", lala%array(1)
      else
            call MPI_RECV( lala, 1, MPI_SMALL, 0, 1, MPI_COMM_WORLD, stat, ierr )       ! this doesn't work
            !call MPI_RECV( lala%array, n, MPI_REAL, 0, 1, MPI_COMM_WORLD, stat, ierr ) ! this does work
            write (*,*) "iam ",iam," and lala%array(1)  = ", lala%array(1), " ( should be 1.0)"
      end if
      call MPI_FINALIZE(ierr)
end program mpi_struct_example
