Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Dear Jeff, Dear Gilles, Dear All, now it is all clearer. I use CALL MPI_ISEND and CALL MPI_IRECV. Each CPU sends once and receives once, which implies that I have REQUEST(2) for WAITALL. However, sometimes some CPU does not send or receive anything, so I have to set REQUEST = MPI_REQUEST_NULL in order

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Jeff Squyres (jsquyres)
On Sep 30, 2015, at 4:41 PM, Diego Avesani wrote: > > Dear Gilles, > sorry to ask you again and to keep bothering you, > basically, is this what I should do for each CPU: > > CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX, MPIdata%rank-1, > MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIda
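The pair of calls quoted above can be sketched as a complete exchange for one rank. This is a hedged reconstruction: the buffer names and MsgLength follow the quoted snippet, but the tag choice and the right-neighbour source rank are assumptions, not Diego's actual code:

```fortran
! Sketch of the send-left / receive-right pair, assuming periodic-free
! interior ranks; MsgLength and the tag convention are illustrative.
INTEGER :: REQUEST(2), iErr
INTEGER :: STATUSES(MPI_STATUS_SIZE, 2)

CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX, &
               MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, &
               REQUEST(1), iErr)
CALL MPI_IRECV(recv_messageR, MsgLength, MPI_DOUBLE_COMPLEX, &
               MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, &
               REQUEST(2), iErr)
CALL MPI_WAITALL(2, REQUEST, STATUSES, iErr)
```

Here each sender uses its own rank as the tag, so the matching receive from rank+1 expects tag rank+1.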

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Dear Gilles, sorry to ask you again and to keep bothering you; basically, is this what I should do for each CPU: CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX, MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr) CALL MPI_IRECV(recv_messageR, MsgLength, MPI_DOUBLE_COMP

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Gilles Gouaillardet
Diego, there is some confusion here... MPI_Waitall is not a collective operation, and a given task can only wait on the requests it initiated. Bottom line: each task does exactly one send and one recv, right? In this case, you want to have an array of two requests, isend with the first element an
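Gilles's suggestion can be sketched as follows; the variable names, tag, and neighbour logic are assumptions for illustration. Every slot is initialised to MPI_REQUEST_NULL, the Isend/Irecv are posted only where a neighbour exists, and MPI_WAITALL then harmlessly skips any null entries:

```fortran
! Two-request pattern with MPI_REQUEST_NULL guards for boundary ranks.
REQUEST = MPI_REQUEST_NULL            ! safe default for MPI_WAITALL
IF (myrank > 0) THEN                  ! rank 0 has no left neighbour
  CALL MPI_ISEND(sendbuf, n, MPI_DOUBLE_COMPLEX, myrank-1, 0, &
                 MPI_COMM_WORLD, REQUEST(1), ierr)
END IF
IF (myrank < nprocs-1) THEN           ! last rank has no right neighbour
  CALL MPI_IRECV(recvbuf, n, MPI_DOUBLE_COMPLEX, myrank+1, 0, &
                 MPI_COMM_WORLD, REQUEST(2), ierr)
END IF
CALL MPI_WAITALL(2, REQUEST, statuses, ierr)
```

This is why each task only needs REQUEST(2): Waitall is purely local, waiting on the requests that this task posted.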

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Do you have some suggestions? Is there any possibility of not using a vector as send_request and at the same time having a WAIT? Regarding the code, you are perfectly right; I hope to improve it in the future. Thanks again Diego On 30 September 2015 at 16:50, Jeff Squyres (jsquyres) wrote: > I don'

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Jeff Squyres (jsquyres)
I don't think that this pattern was obvious from the code snippet you sent, which is why I asked for a small, self-contained reproducer. :-) I don't know offhand how send_request(:) will be passed to C. > On Sep 30, 2015, at 10:16 AM, Diego Avesani wrote: > > Dear all, > thank for the explan

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Dear all, thanks for the explanation, but something is not clear to me. I have 4 CPUs. I use only three of them to send, let's say: CPU 0 sends to CPU 1, CPU 1 sends to CPU 2, CPU 2 sends to CPU 3. Only three receive, let's say: CPU 1 from CPU 0, CPU 2 from CPU 1, CPU 3 from CPU 2. So I use ALLOCATE(send_requ

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Jeff Squyres (jsquyres)
Put differently: - You have an array of N requests - If you're only filling up M of them (where M < N) ... > On Sep 30, 2015, at 3:43 AM, Diego Avesani wrote: > > Dear Gilles, Dear All, > > What do you mean that the array of requests has to be initialize via > MPI_Isend or MPI_Irecv? > > In my code I us
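Jeff's point admits two fixes when only M of the N request slots are actually filled; both are sketched below under the assumption that the live requests occupy the first M slots (the names `requests`, `M`, `N` are illustrative):

```fortran
! (a) Wait only on the slots actually posted:
CALL MPI_WAITALL(M, requests(1:M), statuses(:,1:M), ierr)

! (b) Or pre-fill the whole array so the unused slots are harmless:
requests(M+1:N) = MPI_REQUEST_NULL
CALL MPI_WAITALL(N, requests, statuses, ierr)
```

Option (b) is the more robust one when the used slots are scattered across the array rather than contiguous.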

[OMPI users] send_request error with allocate

2015-09-30 Thread Gilles Gouaillardet
Diego, if you invoke Isend 3 times with three different send_request() elements, and the same thing for Irecv, then you do not have to worry about MPI_REQUEST_NULL. Based on your snippet, there could be an issue on ranks 0 and n-1; also, the index of send_request is MPIdata%rank+1 if MPIdata%rank is MPI_Comm_ra

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Dear Gilles, Dear All, What do you mean that the array of requests has to be initialized via MPI_Isend or MPI_Irecv? In my code I use MPI_Isend and MPI_Irecv three times, so I have a send_request(3). According to this, do I have to use MPI_REQUEST_NULL? In the meantime I will check my code. Thanks Di

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Gilles Gouaillardet
Diego, if you invoke MPI_Waitall on three requests, and some of them have not been initialized (manually, or via MPI_Isend or MPI_Irecv), then the behavior of your program is undefined. If you want to use an array of requests (because it makes the program simpler) but you know not all of them are a

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
ok, let me try Diego On 29 September 2015 at 16:23, Jeff Squyres (jsquyres) wrote: > This code does not appear to compile -- there's no main program, for > example. > > Can you make a small, self-contained example program that shows the > problem? > > > > On Sep 29, 2015, at 10:08 AM, Diego Av

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Jeff Squyres (jsquyres)
This code does not appear to compile -- there's no main program, for example. Can you make a small, self-contained example program that shows the problem? > On Sep 29, 2015, at 10:08 AM, Diego Avesani wrote: > > Dear Jeff, Dear all, > the code is very long, here something. I hope that this cou

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
Dear Jeff, dear all, I have noticed that if I initialize the variables, I do not have the error anymore: ! ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg)) SEND_REQUEST=0 RECV_REQUEST=0 ! Could you please explain why? Thanks Diego On 29 September 2015 at 16:08, Diego Avesani wrote: >
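Setting the request handles to 0 happens to silence the error here, but it is not portable: whether the Fortran integer 0 is the null request handle is implementation-specific. The portable initialisation uses the named constant, as in this sketch (array names follow the snippet above):

```fortran
ALLOCATE(SEND_REQUEST(nMsg), RECV_REQUEST(nMsg))
SEND_REQUEST = MPI_REQUEST_NULL   ! portable; a literal 0 only works if the
RECV_REQUEST = MPI_REQUEST_NULL   ! MPI library happens to define null as 0
```

With this initialisation, MPI_WAITALL is well-defined even if some slots are never overwritten by an Isend/Irecv.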

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
Dear Jeff, Dear all, the code is very long; here is something. I hope that this could help. What do you think? SUBROUTINE MATOPQN USE VARS_COMMON,ONLY:COMM_CART,send_messageR,recv_messageL,nMsg USE MPI INTEGER :: send_request(nMsg), recv_request(nMsg) INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg

Re: [OMPI users] send_request error with allocate

2015-09-28 Thread Jeff Squyres (jsquyres)
Can you send a small reproducer program? > On Sep 28, 2015, at 4:45 PM, Diego Avesani wrote: > > Dear all, > > I have to use a send_request in a MPI_WAITALL. > Here the strange things: > > If I use at the beginning of the SUBROUTINE: > > INTEGER :: send_request(3), recv_request(3) > > I have

[OMPI users] send_request error with allocate

2015-09-28 Thread Diego Avesani
Dear all, I have to use a send_request in an MPI_WAITALL. Here is the strange thing: If I use at the beginning of the SUBROUTINE: INTEGER :: send_request(3), recv_request(3) I have no problem, but if I use USE COMONVARS,ONLY : nMsg with nMsg=3 and after that I declare INTEGER :: send_request(nMsg
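The two declarations in the original post should behave identically when nMsg is a compile-time constant in COMONVARS; if nMsg is only set at run time, an ALLOCATABLE array (allocated before any Isend is posted) is the safer spelling. A sketch under that assumption, with the null-handle initialisation the thread converges on:

```fortran
USE COMONVARS, ONLY : nMsg              ! nMsg = 3 in the original post
INTEGER, ALLOCATABLE :: send_request(:), recv_request(:)

ALLOCATE(send_request(nMsg), recv_request(nMsg))
send_request = MPI_REQUEST_NULL         ! start from well-defined handles
recv_request = MPI_REQUEST_NULL
```

This guarantees that every handle passed to MPI_WAITALL is valid, whether or not a matching Isend/Irecv was posted into it.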