Hello,

I'm getting thoroughly confused trying to work out what the correct extent of a 
block-cyclic distributed array type (created with MPI_Type_create_darray) should 
be, and I'm hoping someone can clarify it for me.

My expectation is that calling MPI_Type_get_extent on this type should return the 
size of the original, global array in bytes, whereas MPI_Type_size gives the 
size of the local section. This isn't really clear from the MPI 2.2 spec, but 
from reading around it sounds like that's the obvious thing to expect.

I've attached a minimal C example which tests this behaviour. It creates a type 
which views a 10x10 array of doubles, in 3x3 blocks on a 2x2 process grid, so 
my expectation is that the extent is 10*10*sizeof(double) = 800. The results 
from running it are included below.
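
For context, here is a minimal sketch of the kind of test the attached 
testextent.c performs (the exact code in the attachment may differ; the names 
and structure here are my own reconstruction from the description above):

    /* Sketch of a darray extent test: 10x10 doubles, 3x3 blocks, 2x2 grid.
       Reconstructed from the description; not the attached file itself. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, size;
        MPI_Aint lb, extent;
        MPI_Datatype darray;

        int gsizes[2]   = {10, 10};                                  /* global array     */
        int distribs[2] = {MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC};
        int dargs[2]    = {3, 3};                                    /* 3x3 blocks       */
        int psizes[2]   = {2, 2};                                    /* 2x2 process grid */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
                               psizes, MPI_ORDER_C, MPI_DOUBLE, &darray);
        MPI_Type_commit(&darray);

        MPI_Type_size(darray, &size);
        MPI_Type_get_extent(darray, &lb, &extent);

        printf("Rank %d, size=%d, extent=%ld, lb=%ld\n",
               rank, size, (long)extent, (long)lb);

        MPI_Type_free(&darray);
        MPI_Finalize();
        return 0;
    }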

In practice, neither version of OpenMPI I've tested (v1.4.4 and v1.6) gives the 
behaviour I expect. Both report the correct type size on all processes, but 
only the rank 0 process gets the expected extent; all the others get a somewhat 
larger value. By comparison, IntelMPI (v4.0.3) does give the expected value 
for the extent on every rank (also included below).

I'd be very grateful if someone could explain what the extent means for a 
darray type, and why it isn't the global array size.

Thanks,
Richard



== OpenMPI (v1.4.4 and 1.6) == 

$ mpirun -np 4 ./testextent
Rank 0, size=288, extent=800, lb=0
Rank 1, size=192, extent=824, lb=0
Rank 2, size=192, extent=1040, lb=0
Rank 3, size=128, extent=1064, lb=0



== IntelMPI (v4.0.3) ==

$ mpirun -np 4 ./testextent
Rank 0, size=288, extent=800, lb=0
Rank 1, size=192, extent=800, lb=0
Rank 2, size=192, extent=800, lb=0
Rank 3, size=128, extent=800, lb=0



Attachment: testextent.c