Hi Thomas,
I suggest having a look at the Fortran 2003 "parameterized derived types", where the
size of one or more members of a derived type (dTLRdr in this case) depends on a size
parameter that is fixed when the whole derived type is instantiated/allocated, so those
members are not allocated separately from the derived type's storage. This should, in
theory, guarantee that the data belonging to a single instance of the derived type,
including all of its variable-size array members, is contiguous in memory. I am not
sure from your description whether this would simplify your task, but if so it could be
worth a try. One drawback is that this feature is not yet well supported by the GNU
gfortran compiler, so you would need a commercial compiler.
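
For illustration only, here is a rough (and untested) sketch using the names from your
type; note that the sequence attribute would have to go, since a sequence type cannot
have type parameters, and dp is your own real kind as before:

     type tlr_lut(n)
         integer, len :: n                  ! length parameter, fixed when the object is allocated
         integer :: p
         integer :: q
         real(dp) :: TLR(3, 3)
         real(dp) :: dTLRdr(3, 3, n, 3)     ! extent now follows the type parameter n
         real(dp) :: dTLRdh(3, 3, 3, 3)
         integer :: unique_ind
     end type tlr_lut

     type(tlr_lut(:)), allocatable :: local_tlr_lookup(:)
     allocate(tlr_lut(nat) :: local_tlr_lookup(num_pairs))   ! n is fixed to nat here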
        Best regards, Davide

On 16/03/2015 17:00, users-requ...@open-mpi.org wrote:
Date: Sun, 15 Mar 2015 19:45:01 +0000
From: Thomas Markovich <thomasmarkov...@gmail.com>
To: us...@open-mpi.org
Subject: [OMPI users] Custom MPI_DATA_TYPE with allocatable arrays in
        fortran derived data type
Message-ID:
        <CAKgEOENWLQV52YrthXED8noRG5oX=1yn3faxgys+v23jvxu...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi All,

I'm trying to parallelize my code by distributing the computation of the various
elements of a lookup table and then syncing that lookup table across all nodes. To make
the code easier to read, and to make everything easier to keep track of, I've decided
to use a derived data type in Fortran, defined as follows:
     type tlr_lut
         sequence
         integer p
         integer q
         real(dp), dimension(3, 3) :: TLR
         real(dp), dimension(:, :, :, :), allocatable :: dTLRdr
         real(dp), dimension(3, 3, 3, 3) :: dTLRdh
         integer unique_ind
     end type tlr_lut

and this works quite well in serial; I just have to allocate dTLRdr at run time. This
is because dTLRdr should be of size 3x3xNx3, where N is a value known at run time but
not at compile time. I've tried to create a custom MPI datatype to tell Open MPI what
the size should be, but I'm at a loss for how to deal with the allocatable array. I've
tried something like this:

type(tlr_lut), dimension(:), allocatable :: tlr_lookup, temp_tlr_lookup
type(tlr_lut), dimension(:), allocatable :: local_tlr_lookup
integer :: datatype, oldtypes(6), blockcounts(6)
INTEGER(KIND=MPI_ADDRESS_KIND) :: offsets(6)
integer :: numtasks, rank, i,  ierr
integer :: n, status(mpi_status_size)

do i = 1, num_pairs, 1
     p = unique_pairs(i)%p
     q = unique_pairs(i)%q
     cpuid = unique_pairs(i)%cpu
     if(cpuid.eq.me_image) then
         TLR = 0.0_DP
         dTLRdr = 0.0_DP
         dTLRdh = 0.0_DP
         call mbdvdw_TLR(p, q, TLR, dTLRdr, dTLRdh)
         if(.not.allocated(local_tlr_lookup(counter)%dTLRdr)) &
             allocate(local_tlr_lookup(counter)%dTLRdr(3, 3, nat, 3))
         local_tlr_lookup(counter)%p = p
         local_tlr_lookup(counter)%q = q
         local_tlr_lookup(counter)%TLR(:, :) = TLR(:, :)
         local_tlr_lookup(counter)%dTLRdr(:,:,:,:) = dTLRdR(:,:,:,:)
         local_tlr_lookup(counter)%dTLRdh(:,:,:,:) = dTLRdh(:,:,:,:)
     end if
end do

call mpi_get_address(local_tlr_lookup(1)%p,          offsets(1), ierr)
call mpi_get_address(local_tlr_lookup(1)%q,          offsets(2), ierr)
call mpi_get_address(local_tlr_lookup(1)%TLR,        offsets(3), ierr)
call mpi_get_address(local_tlr_lookup(1)%dTLRdr,     offsets(4), ierr)
call mpi_get_address(local_tlr_lookup(1)%dTLRdh,     offsets(5), ierr)
call mpi_get_address(local_tlr_lookup(1)%unique_ind, offsets(6), ierr)

do i = 2, size(offsets)
   offsets(i) = offsets(i) - offsets(1)
end do
offsets(1) = 0

oldtypes = (/mpi_integer, mpi_integer, mpi_real, mpi_real, mpi_real, &
             mpi_integer/)
blockcounts = (/1, 1, 3*3, 3*3*nat*3, 3*3*3*3, 1/)
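
and then (roughly, this is a sketch of the standard calls rather than a verbatim paste
from my code) I build and commit the type from these:

call mpi_type_create_struct(6, blockcounts, offsets, oldtypes, datatype, ierr)
call mpi_type_commit(datatype, ierr)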

But it didn't seem to work and I'm sorta at a loss. Any suggestions?

Best,
Thomas

--
============================= Davide Cesari ============================
Dott**(0.5) Davide Cesari
ARPA-Emilia Romagna, Servizio IdroMeteoClima
NWP modelling - Modellistica numerica previsionale
Tel. +39 051525926
========================================================================
