Dear Gilles,
sorry to ask you again and for being a nuisance;
basically, is this what I should do on each CPU:

CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX, &
     MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr)


CALL MPI_IRECV(recv_messageR, MsgLength, MPI_DOUBLE_COMPLEX, &
     MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, REQUEST(2), MPIdata%iErr)

and then

CALL MPI_WAITALL(nMsg,REQUEST(1:2),send_status_list,MPIdata%iErr)

Am I correct?

Thanks again

Diego


On 30 September 2015 at 17:18, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Diego,
>
> there is some confusion here...
> MPI_Waitall is not a collective operation, and a given task can only
> wait on the requests it initiated.
>
> bottom line: each task does exactly one send and one recv, right?
> in that case, you want an array of two requests: isend with the first
> element, irecv with the second element, and then waitall on the array
> of size 2.
> note this is not equivalent to doing two MPI_Wait calls in a row, since
> that would be prone to deadlock
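>
> for example, a minimal sketch (reusing MsgLength, MPIdata and the buffer
> names from your subroutine, and assuming this rank has both a left and a
> right neighbour; your IF tests on MPIdata%rank already handle the boundary
> ranks, which post fewer requests and would pass a smaller count):
>
>   INTEGER :: requests(2)
>   INTEGER :: statuses(MPI_STATUS_SIZE, 2)
>
>   ! element 1: non-blocking send to the right neighbour
>   CALL MPI_ISEND(send_messageR, MsgLength, MPI_DOUBLE_COMPLEX, &
>                  MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, &
>                  requests(1), MPIdata%iErr)
>
>   ! element 2: non-blocking receive from the left neighbour
>   CALL MPI_IRECV(recv_messageL, MsgLength, MPI_DOUBLE_COMPLEX, &
>                  MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, &
>                  requests(2), MPIdata%iErr)
>
>   ! wait on exactly the two requests this task posted
>   CALL MPI_WAITALL(2, requests, statuses, MPIdata%iErr)
>
> the count passed to MPI_WAITALL is the number of requests this task
> actually posted, not the total number of messages in the whole job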
>
> Cheers,
>
> Gilles
>
> On Wednesday, September 30, 2015, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
>> Dear all,
>> thank for the explanation, but something is not clear to me.
>> I have 4 CPUs. I use only three of them to send, let's say:
>> CPU 0 send to CPU 1
>> CPU 1 send to CPU 2
>> CPU 2 send to CPU 3
>>
>> only three receive, let's say:
>> CPU 1 from CPU 0
>> CPU 2 from CPU 1
>> CPU 3 from CPU 2
>>
>> so I use ALLOCATE(send_request(3))
>>
>> this means that in the call I have:
>> CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX,
>> MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD,
>> *send_request(MPIdata%rank)*, MPIdata%iErr)
>>
>> This is what my code does. Probably the use of send_request(:) as a
>> vector and the use of WAITALL is not correct, am I right?
>>
>> what do you suggest?
>>
>> Thanks a lot,
>> Diego
>>
>>
>>
>> On 30 September 2015 at 12:42, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com> wrote:
>>
>>> Put differently:
>>>
>>> - You have an array of N requests
>>> - If you're only filling up M of them (where M<N)
>>> - And then you pass the whole array of size N to MPI
>>> - Then N-M of them will have garbage values (unless you initialize them
>>> to MPI_REQUEST_NULL)
>>> - And MPI's behavior with garbage values will be unpredictable /
>>> undefined
>>>
>>> You can either pass M (i.e., the number of requests that you have
>>> *actually* filled) to MPI, or you can ensure that the N-M unused requests
>>> in the array are filled with MPI_REQUEST_NULL (which MPI_WAITANY and
>>> friends will safely ignore).  One way of doing the latter is initializing
>>> the entire array with MPI_REQUEST_NULL and then only filling in the M
>>> entries with real requests.
>>>
>>> It seems much simpler / faster to just pass in M to MPI_WAITANY (and
>>> friends), not N.
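>>>
>>> In code, a minimal sketch of the "pass M" approach (n_slots and n_posted
>>> are just illustrative names; the buffer, count, and MPIdata fields are
>>> borrowed from Diego's earlier snippets):
>>>
>>>   INTEGER, PARAMETER :: n_slots = 3           ! N: size of the request array
>>>   INTEGER :: requests(n_slots)
>>>   INTEGER :: statuses(MPI_STATUS_SIZE, n_slots)
>>>   INTEGER :: n_posted                         ! M: requests actually posted
>>>
>>>   n_posted = 0
>>>   IF (MPIdata%rank .NE. 0) THEN               ! rank 0 has no left neighbour
>>>      n_posted = n_posted + 1
>>>      CALL MPI_IRECV(recv_messageL, MsgLength, MPI_DOUBLE_COMPLEX, &
>>>                     MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, &
>>>                     requests(n_posted), MPIdata%iErr)
>>>   END IF
>>>
>>>   ! only the first n_posted entries are handed to MPI, so whatever
>>>   ! garbage sits in the unused trailing slots is never examined
>>>   CALL MPI_WAITALL(n_posted, requests, statuses, MPIdata%iErr)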
>>>
>>>
>>> > On Sep 30, 2015, at 3:43 AM, Diego Avesani <diego.aves...@gmail.com>
>>> wrote:
>>> >
>>> > Dear Gilles, Dear All,
>>> >
>>> > What do you mean by saying that the array of requests has to be
>>> initialized via MPI_Isend or MPI_Irecv?
>>> >
>>> > In my code I use MPI_Isend and MPI_Irecv three times, so I have a
>>> send_request(3).  According to this, do I have to use MPI_REQUEST_NULL?
>>> >
>>> > In the meantime I will check my code
>>> >
>>> > Thanks
>>> >
>>> > Diego
>>> >
>>> >
>>> > On 29 September 2015 at 16:33, Gilles Gouaillardet <
>>> gilles.gouaillar...@gmail.com> wrote:
>>> > Diego,
>>> >
>>> > if you invoke MPI_Waitall on three requests and some of them have not
>>> been initialized (manually, or via MPI_Isend or MPI_Irecv), then the
>>> behavior of your program is undefined.
>>> >
>>> > if you want to use an array of requests (because it makes the program
>>> simpler) but you know not all of them are actually used, then you have to
>>> initialize the unused entries with MPI_REQUEST_NULL
>>> > (it might happen to be zero in Open MPI, but you cannot take that for
>>> granted)
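>>> >
>>> > concretely, something like this (a sketch, reusing the names from your
>>> > subroutine):
>>> >
>>> >   ! right after the arrays are declared
>>> >   send_request(:) = MPI_REQUEST_NULL
>>> >   recv_request(:) = MPI_REQUEST_NULL
>>> >
>>> >   ! ... your MPI_Isend / MPI_Irecv calls overwrite the entries they use ...
>>> >
>>> >   ! MPI_Waitall ignores entries that are still MPI_REQUEST_NULL, so
>>> >   ! passing the full arrays of size nMsg is then safe
>>> >   CALL MPI_WAITALL(nMsg, send_request, send_status_list, MPIdata%iErr)
>>> >   CALL MPI_WAITALL(nMsg, recv_request, recv_status_list, MPIdata%iErr)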
>>> >
>>> > Cheers,
>>> >
>>> > Gilles
>>> >
>>> >
>>> > On Tuesday, September 29, 2015, Diego Avesani <diego.aves...@gmail.com>
>>> wrote:
>>> > dear Jeff, dear all,
>>> > I have noticed that if I initialize the variables, I do not get the
>>> error anymore:
>>> > !
>>> >   ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))
>>> >   SEND_REQUEST=0
>>> >   RECV_REQUEST=0
>>> > !
>>> >
>>> > Could you please explain to me why?
>>> > Thanks
>>> >
>>> >
>>> > Diego
>>> >
>>> >
>>> > On 29 September 2015 at 16:08, Diego Avesani <diego.aves...@gmail.com>
>>> wrote:
>>> > Dear Jeff, Dear all,
>>> > the code is very long; here is part of it. I hope this helps.
>>> >
>>> > What do you think?
>>> >
>>> > SUBROUTINE MATOPQN
>>> > USE VARS_COMMON,ONLY:COMM_CART,send_messageR,recv_messageL,nMsg
>>> > USE MPI
>>> > INTEGER :: send_request(nMsg), recv_request(nMsg)
>>> > INTEGER ::
>>> send_status_list(MPI_STATUS_SIZE,nMsg),recv_status_list(MPI_STATUS_SIZE,nMsg)
>>> >
>>> >  !send message to right CPU
>>> >     IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>>> >         MsgLength = MPIdata%jmaxN
>>> >         DO icount=1,MPIdata%jmaxN
>>> >             iNode = MPIdata%nodeList2right(icount)
>>> >             send_messageR(icount) = RIS_2(iNode)
>>> >         ENDDO
>>> >
>>> >         CALL MPI_ISEND(send_messageR, MsgLength, MPI_DOUBLE_COMPLEX,
>>> MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD,
>>> send_request(MPIdata%rank+1), MPIdata%iErr)
>>> >
>>> >     ENDIF
>>> >     !
>>> >
>>> >
>>> >     !receive message from left CPU
>>> >     IF(MPIdata%rank.NE.0)THEN
>>> >         MsgLength = MPIdata%jmaxN
>>> >
>>> >         CALL MPI_IRECV(recv_messageL, MsgLength, MPI_DOUBLE_COMPLEX,
>>> MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, recv_request(MPIdata%rank),
>>> MPIdata%iErr)
>>> >
>>> >         write(*,*) MPIdata%rank-1
>>> >     ENDIF
>>> >     !
>>> >     !
>>> >     CALL MPI_WAITALL(nMsg,send_request,send_status_list,MPIdata%iErr)
>>> >     CALL MPI_WAITALL(nMsg,recv_request,recv_status_list,MPIdata%iErr)
>>> >
>>> > Diego
>>> >
>>> >
>>> > On 29 September 2015 at 00:15, Jeff Squyres (jsquyres) <
>>> jsquy...@cisco.com> wrote:
>>> > Can you send a small reproducer program?
>>> >
>>> > > On Sep 28, 2015, at 4:45 PM, Diego Avesani <diego.aves...@gmail.com>
>>> wrote:
>>> > >
>>> > > Dear all,
>>> > >
>>> > > I have to use a send_request in a MPI_WAITALL.
>>> > > Here is the strange thing:
>>> > >
>>> > > If I use, at the beginning of the SUBROUTINE:
>>> > >
>>> > > INTEGER :: send_request(3), recv_request(3)
>>> > >
>>> > > I have no problem, but if I use
>>> > >
>>> > > USE COMONVARS,ONLY : nMsg
>>> > > with nMsg=3
>>> > >
>>> > > and after that I declare
>>> > >
>>> > > INTEGER :: send_request(nMsg), recv_request(nMsg)
>>> > >
>>> > > then I get the following error:
>>> > >
>>> > > [Lap] *** An error occurred in MPI_Waitall
>>> > > [Lap] *** reported by process [139726485585921,0]
>>> > > [Lap] *** on communicator MPI_COMM_WORLD
>>> > > [Lap] *** MPI_ERR_REQUEST: invalid request
>>> > > [Lap] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will
>>> now abort,
>>> > > [Lap] ***    and potentially your MPI job)
>>> > > forrtl: error (78): process killed (SIGTERM)
>>> > >
>>> > > Could someone please explain to me where I am wrong?
>>> > >
>>> > > Thanks
>>> > >
>>> > > Diego
>>> > >
>>> >
>>> >
>>> > --
>>> > Jeff Squyres
>>> > jsquy...@cisco.com
>>> > For corporate legal information go to:
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>> >
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to:
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>>
>>
>>
>
