Dear Gilles, Dear All,

What do you mean when you say that the array of requests has to be
initialized via MPI_Isend or MPI_Irecv?

In my code I call MPI_Isend and MPI_Irecv three times each, so I have
send_request(3).  Given that, do I have to use MPI_REQUEST_NULL?
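
For example, is this what you mean (just a guess on my side, using the
names from my code)?

  send_request(:) = MPI_REQUEST_NULL
  recv_request(:) = MPI_REQUEST_NULL
  ! the MPI_Isend/MPI_Irecv calls then overwrite only the entries that
  ! are actually used, and MPI_WAITALL skips the ones left null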

In the meantime I will check my code.

Thanks

Diego


On 29 September 2015 at 16:33, Gilles Gouaillardet
<gilles.gouaillar...@gmail.com> wrote:

> Diego,
>
> if you invoke MPI_Waitall on three requests and some of them have not
> been initialized (manually, or via MPI_Isend or MPI_Irecv), then the
> behavior of your program is undefined.
>
> if you want to use an array of requests (because it makes the program
> simpler) but you know that not all of them are actually used, then you
> have to initialize the unused ones with MPI_REQUEST_NULL
> (MPI_REQUEST_NULL might happen to be zero in Open MPI, but you cannot
> take that for granted)
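>
> something like this (reusing the names from your code):
>
>     send_request(:) = MPI_REQUEST_NULL
>     recv_request(:) = MPI_REQUEST_NULL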
>
> Cheers,
>
> Gilles
>
>
> On Tuesday, September 29, 2015, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
>> dear Jeff, dear all,
>> I have noticed that if I initialize the variables, I do not get the
>> error anymore:
>> !
>>   ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))
>>   SEND_REQUEST=0
>>   RECV_REQUEST=0
>> !
>>
>> Could you please explain to me why?
>> Thanks
>>
>>
>> Diego
>>
>>
>> On 29 September 2015 at 16:08, Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>>> Dear Jeff, Dear all,
>>> the code is very long; here is an excerpt. I hope it helps.
>>>
>>> What do you think?
>>>
>>> SUBROUTINE MATOPQN
>>> USE VARS_COMMON,ONLY:COMM_CART,send_messageR,recv_messageL,nMsg
>>> USE MPI
>>> INTEGER :: send_request(nMsg), recv_request(nMsg)
>>> INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg), &
>>>            recv_status_list(MPI_STATUS_SIZE,nMsg)
>>>
>>>  !send message to right CPU
>>>     IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>>>         MsgLength = MPIdata%jmaxN
>>>         DO icount=1,MPIdata%jmaxN
>>>             iNode = MPIdata%nodeList2right(icount)
>>>             send_messageR(icount) = RIS_2(iNode)
>>>         ENDDO
>>>
>>>         CALL MPI_ISEND(send_messageR, MsgLength, MPI_DOUBLE_COMPLEX, &
>>>                        MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, &
>>>                        send_request(MPIdata%rank+1), MPIdata%iErr)
>>>
>>>     ENDIF
>>>     !
>>>
>>>
>>>     !receive message from left CPU
>>>     IF(MPIdata%rank.NE.0)THEN
>>>         MsgLength = MPIdata%jmaxN
>>>
>>>         CALL MPI_IRECV(recv_messageL, MsgLength, MPI_DOUBLE_COMPLEX, &
>>>                        MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, &
>>>                        recv_request(MPIdata%rank), MPIdata%iErr)
>>>
>>>         write(*,*) MPIdata%rank-1
>>>     ENDIF
>>>     !
>>>     !
>>>     CALL MPI_WAITALL(nMsg,send_request,send_status_list,MPIdata%iErr)
>>>     CALL MPI_WAITALL(nMsg,recv_request,recv_status_list,MPIdata%iErr)
>>>
>>> Diego
>>>
>>>
>>> On 29 September 2015 at 00:15, Jeff Squyres (jsquyres)
>>> <jsquy...@cisco.com> wrote:
>>>
>>>> Can you send a small reproducer program?
>>>>
>>>> > On Sep 28, 2015, at 4:45 PM, Diego Avesani <diego.aves...@gmail.com>
>>>> > wrote:
>>>> >
>>>> > Dear all,
>>>> >
>>>> > I have to use send_request in an MPI_WAITALL.
>>>> > Here is the strange thing:
>>>> >
>>>> > If I use, at the beginning of the SUBROUTINE:
>>>> >
>>>> > INTEGER :: send_request(3), recv_request(3)
>>>> >
>>>> > I have no problem, but if I use
>>>> >
>>>> > USE COMONVARS,ONLY : nMsg
>>>> > with nMsg=3
>>>> >
>>>> > and after that I declare
>>>> >
>>>> > INTEGER :: send_request(nMsg), recv_request(nMsg)
>>>> >
>>>> > then I get the following error:
>>>> >
>>>> > [Lap] *** An error occurred in MPI_Waitall
>>>> > [Lap] *** reported by process [139726485585921,0]
>>>> > [Lap] *** on communicator MPI_COMM_WORLD
>>>> > [Lap] *** MPI_ERR_REQUEST: invalid request
>>>> > [Lap] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will
>>>> now abort,
>>>> > [Lap] ***    and potentially your MPI job)
>>>> > forrtl: error (78): process killed (SIGTERM)
>>>> >
>>>> > Could someone please explain to me where I am wrong?
>>>> >
>>>> > Thanks
>>>> >
>>>> > Diego
>>>> >
>>>>
>>>>
>>>> --
>>>> Jeff Squyres
>>>> jsquy...@cisco.com
>>>> For corporate legal information go to:
>>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>>
>>>>
>>>
>>>
>>
>
