Hi George,

Thank you for your reply. Please see below.
Best regards,
Huangwei

On 1 September 2013 22:03, George Bosilca <bosi...@icl.utk.edu> wrote:

>
> On Aug 31, 2013, at 14:56 , Huangwei <hz...@cam.ac.uk> wrote:
>
> Hi All,
>
> I would like to send an array A, which has a different size on each
> processor. The root then receives these As and puts them into another
> array, globA. I know MPI_allgatherv can do this. However, some
> implementation issues are still not clear to me, and I would be very
> grateful for any suggestions and comments. The piece of code is as
> follows (I am not sure if it is completely correct):
>
>
> !...calculate the total size of globA; PROCASize(myidf) is the size of
> !   array A on each processor.
>
>         allocate(PROCASize(numprocs))
>         PROCASize(myidf) = Asize
>         call mpi_allreduce(PROCSize, PROCSize, numprocs, mpi_integer, &
>                            mpi_sum, MPI_COMM_WORLD, ierr)
>         globAsize = sum(PROCAsize)
>
> !...calculate the RECS and DISP for MPI_allgatherv
>         allocate(RECSASize(0:numprocs-1))
>         allocate(DISP(0:numprocs-1))
>         do i=1,numprocs
>            RECSASize(i-1) = PROCASize(i)
>         enddo
>         call mpi_type_extent(mpi_integer, extent, ierr)
>         do i=1,numprocs
>              DISP(i-1) = 1 + (i-1)*RECSASize(i-1)*extent
>         enddo
>
> !...allocate the size of the array globA
>         allocate(globA(globASize*extent))
>         call mpi_allgatherv(A, ASize, MPI_INTEGER, globA, RECSASize, &
>                             DISP, MPI_INTEGER, MPI_COMM_WORLD, ierr)
>
> My Questions:
>
> 1. How should I allocate globA, i.e., the receive buffer? Should I use
> globASize*extent or just globASize?
>
>
>
> I don't understand what globASize is supposed to be as you do the
> reduction on PROCSize and then sum PROCAsize.
>


Here I assume globASize is the sum of the sizes of array A over all the
processors. For example, on proc 1 it is A(3), on proc 2 it is A(5), and
on proc 3 it is A(6), so globASize = 14. I aim to put these A arrays into
globA, which has size 14, with all the data from the As stored
consecutively in globA by rank.
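
Sorry for the confusion: PROCSize was a typo for PROCASize. To make the
intent clear, here is a minimal sketch of what I meant (my understanding
is that the standard forbids passing the same array as both the send and
receive buffer, so this uses MPI_IN_PLACE, and it zeroes the entries of
the other ranks first so the sum-reduction fills them in):

        allocate(PROCASize(numprocs))
        PROCASize = 0                 ! entries of the other ranks must be 0
        PROCASize(myidf) = Asize      ! this rank's contribution
        call mpi_allreduce(MPI_IN_PLACE, PROCASize, numprocs, &
                           mpi_integer, mpi_sum, MPI_COMM_WORLD, ierr)
        globASize = sum(PROCASize)    ! 3 + 5 + 6 = 14 in the example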

> Anyway, you should always allocate the memory for a collective based on
> the total number of elements to receive times the extent of each
> element. In fact, to be even more accurate, if we suppose that you
> correctly computed the DISP array, you should allocate globA as
> DISP(numprocs-1) + RECSASize(numprocs-1).
>
If all the elements in all the A arrays are integers (or all are
uniformly double precision), should the size of globA be 14 or
14*extent_integer?
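
My current understanding (please correct me if I am wrong) is that the
receive counts and displacements of MPI_ALLGATHERV are expressed in units
of the receive datatype, not in bytes, and a Fortran allocate statement
also counts elements, so no factor of the extent would be needed:

        ! Counts and displacements are in units of the receive datatype
        ! (here MPI_INTEGER), and allocate counts elements, not bytes.
        globASize = sum(RECSASize)    ! 3 + 5 + 6 = 14 in the example
        allocate(globA(globASize))    ! 14 integers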

>
>
> 2. Regarding the displacements into globA, i.e., DISP(:): do they stand
> for the order of an array, like 1, 2, 3, ... (corresponding to
> DISP(i-1) = 1 + (i-1)*RECSASize(i-1)*extent)? Or are the elements of
> this array the offsets at which the data from the different processors
> will be stored in globA?
>
>
> These are the displacements from the beginning of the array at which the
> data from each peer is stored. The index into this array is the rank of
> the peer process in the communicator.
>
Yes, I know. But I am asking about the meaning of the elements of this
array. Using the example above, is the following specification correct:
DISP(1) = 0, DISP(2) = 3, DISP(3) = 8?
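
In other words, if I understand it correctly, each displacement is the
running sum of the preceding receive counts (in elements), computed
roughly like this:

        ! Sketch: rank i's data starts right after rank i-1's data.
        DISP(0) = 0
        do i = 1, numprocs-1
           DISP(i) = DISP(i-1) + RECSASize(i-1)
        enddo
        ! With counts 3, 5, 6 this gives DISP = (0, 3, 8).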

>
> 3. Should the arrays run from 0 to numprocs-1, or from 1 to numprocs?
> This may matter when they are passed as arguments to the mpi_allgatherv
> subroutine.
>
>
> It doesn't matter whether you allocate it as (0:numprocs-1) or simply as
> (numprocs); the compiler will do the right thing when passing the array
> to the call.
>
>   George.
>
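
That makes sense. So, if I understand correctly, both of the following
behave the same when passed to MPI, since Fortran passes the address of
the first element either way:

        ! Either declaration works; the declared lower bound never
        ! reaches the MPI library.
        allocate(DISP(0:numprocs-1))   ! 0-based, matches MPI rank numbers
        ! ...or, equivalently:
        ! allocate(DISP(numprocs))     ! 1-based, the Fortran default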

One additional question:

For Fortran MPI, can an MPI subroutine send an array of size 0? I.e., in
the example, A is A(0) and ASize = 0:

        call mpi_allgatherv(A, ASize, MPI_INTEGER, globA, RECSASize, &
                            DISP, MPI_INTEGER, MPI_COMM_WORLD, ierr)

Is this a valid MPI call? This case will appear in my work.
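
For what it is worth, my reading of the standard is that a count of 0 is
allowed in collectives: the rank simply contributes nothing, and the
matching entry of the receive counts must also be 0. A minimal sketch of
the case I have in mind:

        ! With ASize = 0 the send buffer is never read, so a zero-sized
        ! (but allocated) array is fine. The call is still collective,
        ! so every rank must make it, including this one.
        allocate(A(0))
        ASize = 0
        call mpi_allgatherv(A, ASize, MPI_INTEGER, globA, RECSASize, &
                            DISP, MPI_INTEGER, MPI_COMM_WORLD, ierr)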


Thank you very much for your help!

Have a nice holiday!


>
>
>
> These questions may be too simple for MPI professionals, but I do not
> have much experience with this, so I would sincerely appreciate any
> comments and suggestions. Thank you in advance!
>
>
> regards,
> Huangwei
>
>
>
>
>  _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
