Richard,

Thanks for identifying this issue and for the short example. I can confirm that 
your original understanding was right: the upper bound should be identical on 
all ranks. I just pushed a patch (r26862); let me know if it fixes your issue.

  Thanks,
    george.

On Jul 24, 2012, at 17:27 , Richard Shaw wrote:

> I've been speaking offline to Jonathan Dursi about this problem, and it does 
> seem to be a bug.
> 
> The same problem crops up in a simplified 1D-only case (test case attached). 
> In this instance the specification seems clear: looking at the PDF copy of the 
> MPI-2.2 spec, pp. 92-93, the definition of cyclic gives MPI_LB=0, 
> MPI_UB=gsize*ex.
> 
> The test case creates a datatype for an array of 10 doubles, cyclically 
> distributed across two processes with a block size of 1. The expected extent is 
> 10*extent(MPI_DOUBLE) = 80. Results for OpenMPI v1.4.4:
> 
> $ mpirun -np 2 ./testextent1d
> Rank 0, size=40, extent=80, lb=0
> Rank 1, size=40, extent=88, lb=0
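> 
> In case the attachment doesn't make it through, the test has the same 
> structure as testextent.c (quoted below); only the type creation changes, 
> roughly as follows (just a sketch, with nprocs, rank and darray as in that 
> test; the attached testextent1d.c is the authoritative version):
> 
> int gsizes[1]   = { 10 };                  /* 10 doubles globally */
> int distribs[1] = { MPI_DISTRIBUTE_CYCLIC };
> int dargs[1]    = { 1 };                   /* block size 1 */
> int psizes[1]   = { 2 };                   /* 2 processes, run with -np 2 */
> 
> MPI_Type_create_darray(nprocs, rank, 1, gsizes, distribs, dargs,
>                        psizes, MPI_ORDER_C, MPI_DOUBLE, &darray);
> MPI_Type_commit(&darray);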
> 
> 
> Can anyone else confirm this?
> 
> Thanks
> Richard
> 
> On Sunday, 15 July, 2012 at 6:21 PM, Richard Shaw wrote:
> 
>> Hello,
>> 
>> I'm getting thoroughly confused trying to work out the correct extent of a 
>> block-cyclic distributed array type (created with MPI_Type_create_darray), 
>> and I'm hoping someone can clarify it for me.
>> 
>> My expectation is that calling MPI_Type_get_extent on this type should return 
>> the size of the original, global array in bytes, whereas MPI_Type_size gives 
>> the size of the local section. This isn't really clear from the MPI 2.2 spec, 
>> but from reading around it sounds like that's the obvious thing to expect.
>> 
>> I've attached a minimal C example which tests this behaviour: it creates a 
>> type which views a 10x10 array of doubles, in 3x3 blocks on a 2x2 process 
>> grid. So my expectation is that the extent is 10*10*sizeof(double) = 800. 
>> The results from running it are included below.
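>> 
>> For reference, the attached test is roughly the following (a sketch without 
>> error checking; the attached testextent.c is the authoritative version):
>> 
>> #include <mpi.h>
>> #include <stdio.h>
>> 
>> int main(int argc, char **argv)
>> {
>>     int rank, nprocs, size;
>>     MPI_Aint lb, extent;
>>     MPI_Datatype darray;
>>     int gsizes[2]   = { 10, 10 };   /* 10x10 global array of doubles */
>>     int distribs[2] = { MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC };
>>     int dargs[2]    = { 3, 3 };     /* 3x3 blocks */
>>     int psizes[2]   = { 2, 2 };     /* 2x2 process grid, run with -np 4 */
>> 
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>> 
>>     /* Create the block-cyclic (darray) view of the global array. */
>>     MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
>>                            psizes, MPI_ORDER_C, MPI_DOUBLE, &darray);
>>     MPI_Type_commit(&darray);
>> 
>>     /* Local size vs. extent/lb of the committed type. */
>>     MPI_Type_size(darray, &size);
>>     MPI_Type_get_extent(darray, &lb, &extent);
>>     printf("Rank %d, size=%d, extent=%ld, lb=%ld\n",
>>            rank, size, (long)extent, (long)lb);
>> 
>>     MPI_Type_free(&darray);
>>     MPI_Finalize();
>>     return 0;
>> }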
>> 
>> In practice, neither of the OpenMPI versions I've tested (v1.4.4 and v1.6) 
>> gives the behaviour I expect. Both give the correct type size on all 
>> processes, but only rank 0 gets the expected extent; all the other ranks get 
>> a somewhat larger value. As a comparison, IntelMPI (v4.0.3) does give the 
>> expected value for the extent (included below).
>> 
>> I'd be very grateful if someone could explain what the extent means for a 
>> darray type, and why it isn't the global array size.
>> 
>> Thanks,
>> Richard
>> 
>> 
>> 
>> == OpenMPI (v1.4.4 and 1.6) ==
>> 
>> $ mpirun -np 4 ./testextent
>> Rank 0, size=288, extent=800, lb=0
>> Rank 1, size=192, extent=824, lb=0
>> Rank 2, size=192, extent=1040, lb=0
>> Rank 3, size=128, extent=1064, lb=0
>> 
>> 
>> 
>> == IntelMPI ==
>> 
>> $ mpirun -np 4 ./testextent
>> Rank 0, size=288, extent=800, lb=0
>> Rank 1, size=192, extent=800, lb=0
>> Rank 2, size=192, extent=800, lb=0
>> Rank 3, size=128, extent=800, lb=0
>> 
>> Attachments:
>> - testextent.c
> 
> <testextent1d.c>
