Hi Jody
jody wrote:
Guys - Thank You for your replies!
(wow : that was a rhyme! :) )
I checked my structure with the offsetof macro on my laptop at home
and found the following offsets:
offs iSpeciesID:    0
offs sCapacityFile: 2
offs adGParams:     68
total size:         100
so there seems to be a 2-byte gap before the double array,
and this machine seems to align on multiples of 4.
A 32-bit laptop, perhaps? I would guess the offsets are machine- and
compiler-dependent, and that optimization flags may matter as well.
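For what it's worth, here is a minimal standalone check along the lines
of what you did (a sketch, assuming the SHORT_INPUT/NUM_GPARAMS values
from your code below); it prints whatever layout the compiler actually chose:

#include <stddef.h>   /* offsetof */
#include <stdio.h>

#define SHORT_INPUT 64
#define NUM_GPARAMS 4

typedef struct {
    short  iSpeciesID;
    char   sCapacityFile[SHORT_INPUT];
    double adGParams[NUM_GPARAMS];
} tVStruct;

int main(void)
{
    /* Report the layout this machine/compiler actually picked. */
    printf("offs iSpeciesID:    %zu\n", offsetof(tVStruct, iSpeciesID));
    printf("offs sCapacityFile: %zu\n", offsetof(tVStruct, sCapacityFile));
    printf("offs adGParams:     %zu\n", offsetof(tVStruct, adGParams));
    printf("total size:         %zu\n", sizeof(tVStruct));
    return 0;
}

Recompiling and rerunning it on each machine (and with each set of
optimization flags) shows directly how the layout moves around.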
But is this alignment problem not also a danger for heterogeneous clusters
using OpenMPI?
Do you mean danger or excitement? :)
If the doubles and shorts and long longs have different sizes on
two heterogeneous nodes, what could MPI do about them anyway?
I guess the only portable solution is to forget about MPI datatypes and
somehow pack or serialize the data before sending, and unpack/deserialize
it after receiving.
Jody:
Jeff may have a heart attack when he reads what you just wrote about
the usefulness of MPI data types vs. packing/unpacking. :)
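For the record, the pack/unpack route you describe would look roughly
like this (a sketch only, using your field names; error checking omitted,
and MPI_Pack_size would give the exact buffer bound instead of my guessed
256 bytes):

/* Sender: pack field by field, then send as MPI_PACKED. */
char buf[256];   /* assumed big enough; use MPI_Pack_size for the real bound */
int  pos = 0;
/* MPI_SHORT here to match the struct's (signed) short */
MPI_Pack(&VD.iSpeciesID,   1,           MPI_SHORT,  buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Pack(VD.sCapacityFile, SHORT_INPUT, MPI_CHAR,   buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Pack(VD.adGParams,     NUM_GPARAMS, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Send(buf, pos, MPI_PACKED, 1, TAG_STEP_CMD, MPI_COMM_WORLD);

/* Receiver: receive the packed buffer, then unpack in the same order. */
MPI_Status st;
MPI_Recv(buf, sizeof(buf), MPI_PACKED, MPI_ANY_SOURCE, TAG_STEP_CMD, MPI_COMM_WORLD, &st);
pos = 0;
MPI_Unpack(buf, sizeof(buf), &pos, &VD.iSpeciesID,   1,           MPI_SHORT,  MPI_COMM_WORLD);
MPI_Unpack(buf, sizeof(buf), &pos, VD.sCapacityFile, SHORT_INPUT, MPI_CHAR,   MPI_COMM_WORLD);
MPI_Unpack(buf, sizeof(buf), &pos, VD.adGParams,     NUM_GPARAMS, MPI_DOUBLE, MPI_COMM_WORLD);

Since MPI_PACKED data carries its own representation, this also survives
heterogeneous nodes; the cost is the extra copy on each side, which is
exactly why Jeff would tell you to use a derived datatype instead.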
Guessing away, I would think you are focusing on memory/space savings
rather than on performance; maybe memory savings are part of your
code's requirements.
However, have you tried instead explicitly padding your structure,
say, to a multiple of the size of your largest intrinsic type
(double, in your case), or perhaps to a multiple of the natural
memory alignment boundary that your computer/compiler likes (which may
be 8 bytes, 16 bytes, 128 bytes, whatever)?
I never did this comparison, but I would guess the padded version
of the code would run faster (if compiled with an '-align'-type flag
and friends).
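To make the suggestion concrete, a padded version of your struct might
look like this (a sketch, assuming an 8-byte boundary; the _pad1 name is
just illustrative):

/* Explicit padding so every member sits at its natural alignment. */
typedef struct {
    short  iSpeciesID;                  /* bytes  0..1                             */
    char   _pad1[6];                    /* bytes  2..7: push the rest to 8 bytes   */
    char   sCapacityFile[SHORT_INPUT];  /* bytes  8..71 (64 is a multiple of 8)    */
    double adGParams[NUM_GPARAMS];      /* bytes 72..103, naturally 8-byte aligned */
} tVStructPadded;                       /* sizeof == 104, no hidden compiler pad   */

With the layout written out explicitly, the MPI datatype displacements
(0, 8, 72) match the memory layout by construction, and there is no hidden
compiler padding left to chase.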
Anyway, C is a foreign language here, I must say.
Just my unwarranted guesses.
Gus Correa
On Wed, Jun 29, 2011 at 6:18 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
jody wrote:
Hi
I have noticed on my machine that a struct which I have defined as
typedef struct {
    short  iSpeciesID;
    char   sCapacityFile[SHORT_INPUT];
    double adGParams[NUM_GPARAMS];
} tVStruct;
(where SHORT_INPUT=64 and NUM_GPARAMS=4)
has size 104 (instead of 98), whereas the corresponding MPI datatype
I created
int          aiLengthsT5[3] = {1, SHORT_INPUT, NUM_GPARAMS};
MPI_Aint     aiDispsT5[3]   = {0, iShortSize, iShortSize + SHORT_INPUT};
MPI_Datatype aTypesT5[3]    = {MPI_UNSIGNED_SHORT, MPI_CHAR, MPI_DOUBLE};
MPI_Type_create_struct(3, aiLengthsT5, aiDispsT5, aTypesT5,
                       &m_dtVegetationData3);
MPI_Type_commit(&m_dtVegetationData3);
only has size 98 (as expected). The size difference resulted in an
error when doing
tVStruct VD;
MPI_Send(&VD, 1, m_dtVegetationData3, 1, TAG_STEP_CMD, MPI_COMM_WORLD);
and the corresponding
tVStruct VD;
MPI_Recv(&VD, 1, m_dtVegetationData3, MPI_ANY_SOURCE,
         TAG_STEP_CMD, MPI_COMM_WORLD, &st);
(in fact, the last double in my array was not transmitted correctly).
It seems that on my machine the struct was padded to a multiple of 8.
By manually adding some padding bytes to my MPI datatype in order
to fill it up to the next multiple of 8, I could work around this problem
(not very nice, and very probably not portable).
My question: is there a way to tell MPI to automatically use the
required padding?
Thank You
Jody
Hi Jody
My naive guesses:
I think when you create the MPI struct type you can pass the byte
displacement of each structure component; you would need to modify
your aiDispsT5[3] to match the actual memory layout, I guess.
Yes, indeed portability may be sacrificed.
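Though if you compute the displacements with offsetof instead of
hard-coding them, they track whatever layout each compiler picks, and
MPI_Type_create_resized (the MPI-2 mechanism) stretches the type's extent
to the true sizeof, trailing padding included. A sketch (the tmp/dtVStruct
names are just illustrative):

#include <stddef.h>  /* offsetof */

MPI_Datatype tmp, dtVStruct;
int          aiLengths[3] = {1, SHORT_INPUT, NUM_GPARAMS};
MPI_Aint     aiDisps[3]   = {offsetof(tVStruct, iSpeciesID),
                             offsetof(tVStruct, sCapacityFile),
                             offsetof(tVStruct, adGParams)};
/* MPI_SHORT to match the struct's (signed) short */
MPI_Datatype aTypes[3]    = {MPI_SHORT, MPI_CHAR, MPI_DOUBLE};

MPI_Type_create_struct(3, aiLengths, aiDisps, aTypes, &tmp);
/* Force the extent to the real struct size, trailing padding included,
   so counts > 1 and arrays of tVStruct also line up correctly. */
MPI_Type_create_resized(tmp, 0, (MPI_Aint)sizeof(tVStruct), &dtVStruct);
MPI_Type_commit(&dtVStruct);
MPI_Type_free(&tmp);

That way nothing is hard-coded: the displacements and the extent are
recomputed from the struct itself on whatever machine you compile on.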
There is some clarification in "MPI, The Complete Reference, Vol. 1,
2nd Ed., Marc Snir et al.":
Sections 3.2 and 3.3 (general, on type map and type signature);
Section 3.4.8, MPI_Type_create_struct (examples, especially Example 3.13);
Section 3.10, on portability, which doesn't seem to guarantee portability
of MPI_Type_struct.
I hope this helps,
Gus Correa
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users