Dear Gilles, Dear all,
It works. The only thing that was missing is:
*CALL MPI_Finalize(MPI%iErr)*
at the end of the program.
Now I have to test it by sending some data from one processor to another.
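For reference, this is the kind of test I have in mind (just a sketch, not tested, run on at least two processes; it assumes the particle type, dp kind and the committed MPI_PARTICLE_TYPE from the attached program):
* TYPE(particle) :: part*
* IF(MPI%myrank==0)THEN*
*    part%ip = 1*
*    part%RP = 1.0_dp*
*    part%QQ = 2.0_dp*
*    CALL MPI_SEND(part, 1, MPI_PARTICLE_TYPE, 1, 100, MPI_COMM_WORLD, MPI%iErr)*
* ELSEIF(MPI%myrank==1)THEN*
*    CALL MPI_RECV(part, 1, MPI_PARTICLE_TYPE, 0, 100, MPI_COMM_WORLD, MPI_STATUS_IGNORE, MPI%iErr)*
*    WRITE(*,*) part%ip, part%RP, part%QQ*
* ENDIF*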
I would also like to ask if you could explain to me what you have done.
I wrote in the program:
* IF(MPI%myrank==1)THEN*
* WRITE(*,*) DISPLACEMENTS*
* ENDIF*
and the result is:
*139835891001320 -139835852218120 -139835852213832*
* -139835852195016 8030673735967299609*
I am not able to understand it.
Thanks a lot.
In the attachment you can find the program.
Diego
On 4 January 2015 at 12:10, Gilles Gouaillardet <
[email protected]> wrote:
> Diego,
>
> here is an updated revision i will double check tomorrow
> /* i did not test it yet, so forgive me if it does not compile/work */
>
> Cheers,
>
> Gilles
>
> On Sun, Jan 4, 2015 at 6:48 PM, Diego Avesani <[email protected]>
> wrote:
>
>> Dear Gilles, Dear all,
>>
>> in the attachment you can find the program.
>>
>> What do you mean by "remove mpi_get_address(dummy) from all displacements"?
>>
>> Thanks for all your help
>>
>> Diego
>>
>>
>>
>> On 3 January 2015 at 00:45, Gilles Gouaillardet <
>> [email protected]> wrote:
>>
>>> Diego,
>>>
>>> George gave you the solution,
>>>
>>> The snippet you posted has two mistakes
>>> You did not remove mpi_get_address(dummy) from all displacements
>>> (See my previous reply)
>>> You pass incorrect values to mpi_type_create_resized
>>>
>>> Can you post a trimmed version of your program instead of a snippet?
>>>
>>> Gus is right about using double precision vs real and -r8
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>> Diego Avesani <[email protected]> wrote:
>>> Dear Gilles Dear all,
>>>
>>> I have done all that to avoid adding a padding integer, as suggested by
>>> George.
>>> I define tParticle as a common object.
>>> I am using Intel fortran compiler.
>>>
>>> George suggests:
>>>
>>> *"" The displacements are relative to the benign of your particle type.
>>> Thus the first one is not 0 but the displacement of “integer :: ip” due to
>>> the fact that the compiler is allowed to introduce gaps in order to better
>>> align.*
>>>
>>> * DISPLACEMENTS(1)=MPI_GET_ADDRESS(dummy%ip)*
>>> * DISPLACEMENTS(2)=MPI_GET_ADDRESS(dummy%RP[1])*
>>> * DISPLACEMENTS(3)=MPI_GET_ADDRESS(dummy%QQ[1])*
>>>
>>> *and then remove the MPI_GET_ADDRESS(dummy) from all of them.*
>>>
>>> *3. After creating the structure type you need to resize it in order to
>>> correctly determine the span of the entire structure, and how an array of
>>> such structures lies in memory. Something like:*
>>> *MPI_TYPE_CREATE_RESIZED(old type, DISPLACEMENT(1),*
>>> * MPI_GET_ADDRESS(dummy[2]) - MPI_GET_ADDRESS(dummy[1]), newt)"*
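>>>
>>> If I understand it correctly, in Fortran that would be something like this
>>> (just a sketch, not tested; here dummy is an array of two particles, and
>>> base and extent are INTEGER(MPI_ADDRESS_KIND) helper variables):
>>>
>>> * CALL MPI_GET_ADDRESS(dummy(1), base, MPI%iErr)*
>>> * CALL MPI_GET_ADDRESS(dummy(1)%ip, DISPLACEMENTS(1), MPI%iErr)*
>>> * CALL MPI_GET_ADDRESS(dummy(1)%RP(1), DISPLACEMENTS(2), MPI%iErr)*
>>> * CALL MPI_GET_ADDRESS(dummy(1)%QQ(1), DISPLACEMENTS(3), MPI%iErr)*
>>> * CALL MPI_GET_ADDRESS(dummy(2), extent, MPI%iErr)*
>>> * DISPLACEMENTS = DISPLACEMENTS - base*
>>> * extent = extent - base*
>>> * CALL MPI_TYPE_CREATE_RESIZED(MPI_PARTICLE_TYPE_OLD, DISPLACEMENTS(1), extent, MPI_PARTICLE_TYPE, MPI%iErr)*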
>>>
>>> What do you think?
>>> George, did I miss something?
>>>
>>> Thanks a lot
>>>
>>>
>>>
>>> Diego
>>>
>>>
>>> On 2 January 2015 at 12:51, Gilles Gouaillardet <
>>> [email protected]> wrote:
>>>
>>>> Diego,
>>>>
>>>> First, I recommend you redefine tParticle and add a padding integer so
>>>> everything is aligned.
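>>>>
>>>> For example, something like this (untested, and assuming RP and QQ end up
>>>> as 8-byte reals, i.e. double precision or compiled with -r8):
>>>>
>>>> TYPE tParticle
>>>>   SEQUENCE
>>>>   INTEGER :: ip
>>>>   INTEGER :: pad          ! explicit padding so RP starts on an 8-byte boundary
>>>>   DOUBLE PRECISION :: RP(2)
>>>>   DOUBLE PRECISION :: QQ(2)
>>>> ENDTYPE tParticle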
>>>>
>>>>
>>>> Before invoking MPI_Type_create_struct, you need to
>>>> call MPI_Get_address(dummy, base, MPI%err)
>>>> displacements = displacements - base
>>>>
>>>> MPI_Type_create_resized might be unnecessary if tParticle is aligned
>>>> And the lower bound should be zero.
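>>>>
>>>> e.g. (untested sketch; oldtype/newtype stand for the struct type before and
>>>> after resizing, and extent for the size of one tParticle, such as the
>>>> address of dummy(2) minus the address of dummy(1)):
>>>>
>>>> call MPI_Type_create_resized(oldtype, 0_MPI_ADDRESS_KIND, extent, newtype, MPI%err)
>>>> call MPI_Type_commit(newtype, MPI%err)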
>>>>
>>>> BTW, which compiler are you using?
>>>> Is the tParticle object in a common block?
>>>> IIRC, the Intel compiler aligns types automatically, but not commons, and
>>>> that means the data described by MPI_Type_create_struct is not aligned as
>>>> it should be most of the time.
>>>>
>>>> Cheers,
>>>>
>>>> Gilles
>>>>
>>>> Diego Avesani <[email protected]> wrote:
>>>>
>>>> Dear all,
>>>>
>>>> I have a problem with MPI_Type_Create_Struct and
>>>> MPI_TYPE_CREATE_RESIZED.
>>>>
>>>> I have this variable type:
>>>>
>>>> * TYPE tParticle*
>>>> * INTEGER :: ip*
>>>> * REAL :: RP(2)*
>>>> * REAL :: QQ(2)*
>>>> * ENDTYPE tParticle*
>>>>
>>>> Then I define:
>>>>
>>>> Nstruct=3
>>>> *ALLOCATE(TYPES(Nstruct))*
>>>> *ALLOCATE(LENGTHS(Nstruct))*
>>>> *ALLOCATE(DISPLACEMENTS(Nstruct))*
>>>> *!set the types*
>>>> *TYPES(1) = MPI_INTEGER*
>>>> *TYPES(2) = MPI_DOUBLE_PRECISION*
>>>> *TYPES(3) = MPI_DOUBLE_PRECISION*
>>>> *!set the lengths*
>>>> *LENGTHS(1) = 1*
>>>> *LENGTHS(2) = 2*
>>>> *LENGTHS(3) = 2*
>>>>
>>>> As kindly suggested by Nick Papior Andersen and George Bosilca some
>>>> months ago, I checked the variable addresses in order to resize my struct
>>>> variable, to avoid empty space and to have a more general definition.
>>>>
>>>> * !*
>>>> * CALL MPI_GET_ADDRESS(dummy%ip, DISPLACEMENTS(1), MPI%iErr)*
>>>> * CALL MPI_GET_ADDRESS(dummy%RP(1), DISPLACEMENTS(2), MPI%iErr)*
>>>> * CALL MPI_GET_ADDRESS(dummy%QQ(1), DISPLACEMENTS(3), MPI%iErr)*
>>>> * !*
>>>> * CALL
>>>> MPI_Type_Create_Struct(Nstruct,LENGTHS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE_OLD,MPI%iErr)*
>>>> * CALL MPI_Type_Commit(MPI_PARTICLE_TYPE_OLD,MPI%iErr)*
>>>> * !*
>>>> * CALL MPI_TYPE_CREATE_RESIZED(MPI_PARTICLE_TYPE_OLD,
>>>> DISPLACEMENTS(1),DISPLACEMENTS(2) - DISPLACEMENTS(1), MPI_PARTICLE_TYPE)*
>>>>
>>>>
>>>> This does not work. When my program runs, I get an error:
>>>>
>>>> *forrtl: severe (174): SIGSEGV, segmentation fault occurred.*
>>>>
>>>> I have read the manual, but I am probably not able to understand
>>>> *MPI_TYPE_CREATE_RESIZED*.
>>>>
>>>> Could someone help me?
>>>>
>>>>
>>>> Thanks a lot
>>>> Diego
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
MODULE MOD_PRECISION
integer, parameter :: dp = selected_real_kind(p=15) ! double precision, matches MPI_DOUBLE_PRECISION
ENDMODULE
PROGRAM PROVA_STRUCT
USE MOD_PRECISION
IMPLICIT NONE
INCLUDE 'mpif.h'
!
TYPE tMPI
INTEGER :: myrank, nCPU, iErr, status
END TYPE tMPI
!
type particle
sequence
integer :: ip
real(dp) :: rp(2)
real(dp) :: QQ(4)
end type particle
!
TYPE(tMPI) :: MPI
INTEGER :: COMM_CART
INTEGER :: MPI_PARTICLE_TYPE_OLD
INTEGER :: MPI_PARTICLE_TYPE
INTEGER :: nstruct
INTEGER,ALLOCATABLE :: TYPES(:)
INTEGER,ALLOCATABLE :: LENGTHS(:)
INTEGER(MPI_ADDRESS_KIND),ALLOCATABLE,DIMENSION(:) ::DISPLACEMENTS
type(particle) :: dummy(2) ! Used for calculation of displacement
CALL MPI_INIT(MPI%iErr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, MPI%myrank, MPI%iErr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, MPI%nCPU, MPI%iErr)
!
!
nstruct=3
ALLOCATE(TYPES(nstruct))
ALLOCATE(LENGTHS(nstruct))
ALLOCATE(DISPLACEMENTS(0:nstruct+1))
!
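! one (type, length) pair per member of type particle: ip, rp(1:2), QQ(1:4)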
TYPES(1)=MPI_INTEGER
TYPES(2)=MPI_DOUBLE_PRECISION
TYPES(3)=MPI_DOUBLE_PRECISION
!
LENGTHS(1)=1
LENGTHS(2)=2
LENGTHS(3)=4
!
!
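! MPI_GET_ADDRESS(location, address, ierror): the variable to query comes first,
! the returned address second. dummy(1) gives the base address of the structure,
! dummy(2) the start of the next array element.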
CALL MPI_GET_ADDRESS(dummy(1),DISPLACEMENTS(0),MPI%iErr)
CALL MPI_GET_ADDRESS(dummy(1)%ip,DISPLACEMENTS(1),MPI%iErr)
CALL MPI_GET_ADDRESS(dummy(1)%RP(1),DISPLACEMENTS(2),MPI%iErr)
CALL MPI_GET_ADDRESS(dummy(1)%QQ(1),DISPLACEMENTS(3),MPI%iErr)
CALL MPI_GET_ADDRESS(dummy(2),DISPLACEMENTS(4),MPI%iErr)
!
DISPLACEMENTS(1:nstruct+1)= DISPLACEMENTS(1:nstruct+1)-DISPLACEMENTS(0)
!
CALL MPI_TYPE_CREATE_STRUCT(nstruct,lengths,displacements,types,MPI_PARTICLE_TYPE_OLD,MPI%iErr)
CALL MPI_TYPE_COMMIT(MPI_PARTICLE_TYPE_OLD,MPI%iErr)
!
!
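! Resize so the extent equals the true size of one particle in memory
! (address of dummy(2) minus dummy(1)); the lower bound is the offset of ip, i.e. zero.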
CALL MPI_TYPE_CREATE_RESIZED(MPI_PARTICLE_TYPE_OLD, DISPLACEMENTS(1), DISPLACEMENTS(4), MPI_PARTICLE_TYPE, MPI%iErr)
CALL MPI_TYPE_COMMIT(MPI_PARTICLE_TYPE,MPI%iErr)
!
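! Note: DISPLACEMENTS(0) still holds the absolute base address; elements 1 to 4
! should be small offsets relative to the start of dummy(1).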
IF(MPI%myrank==1)THEN
WRITE(*,*) DISPLACEMENTS
ENDIF
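!
! Free the derived datatypes before finalizing (not required here, but good practice).
CALL MPI_TYPE_FREE(MPI_PARTICLE_TYPE,MPI%iErr)
CALL MPI_TYPE_FREE(MPI_PARTICLE_TYPE_OLD,MPI%iErr)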
!
CALL MPI_Finalize(MPI%iErr)
!
ENDPROGRAM