Dear Jeff, Dear Gilles, Dear All,
now everything is clearer.
I use CALL MPI_ISEND and CALL MPI_IRECV. Each CPU sends once and receives
once, which means I have REQUEST(2) for the WAITALL. However, sometimes a
CPU does not send or receive anything, so I have to set REQUEST =
MPI_REQUEST_NULL in order for MPI_WAITALL to ignore those slots.
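For the archives, a minimal sketch of that pattern, assuming a simple chain
where every rank but the last sends right and every rank but the first
receives from the left (buffer names and sizes are illustrative, not taken
from Diego's code):

PROGRAM null_request_demo
  USE MPI
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 4
  COMPLEX(KIND(0.d0)) :: sendbuf(n), recvbuf(n)
  INTEGER :: request(2), rank, nprocs, ierr
  INTEGER :: status_list(MPI_STATUS_SIZE,2)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  sendbuf = CMPLX(rank, 0, KIND(0.d0))

  ! Unused slots stay MPI_REQUEST_NULL, which MPI_WAITALL treats as
  ! already complete.
  request = MPI_REQUEST_NULL
  IF (rank < nprocs-1) CALL MPI_ISEND(sendbuf, n, MPI_DOUBLE_COMPLEX, &
      rank+1, 0, MPI_COMM_WORLD, request(1), ierr)
  IF (rank > 0) CALL MPI_IRECV(recvbuf, n, MPI_DOUBLE_COMPLEX, &
      rank-1, 0, MPI_COMM_WORLD, request(2), ierr)
  CALL MPI_WAITALL(2, request, status_list, ierr)

  CALL MPI_FINALIZE(ierr)
END PROGRAM null_request_demo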
Dear Gilles,
sorry to bother you again, and sorry if I am being frustrating;
basically, is this what I should do on each CPU?
CALL MPI_ISEND(send_messageL, MsgLength, MPI_DOUBLE_COMPLEX,
MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr)
CALL MPI_IRECV(recv_messageR, MsgLength, MPI_DOUBLE_COMPLEX,
MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, REQUEST(2), MPIdata%iErr)
Diego,
there is some confusion here...
MPI_Waitall is not a collective operation, and a given task can only wait
on the requests it initiated itself.
Bottom line: each task does exactly one send and one recv, right?
In that case, you want an array of two requests: isend with the
first element and irecv with the second, then one MPI_Waitall on both.
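In sketch form, under the assumption of one send and one receive per task
(dest, source, and the buffers are placeholders, not names from the thread):

SUBROUTINE exchange(sbuf, rbuf, n, dest, source)
  USE MPI
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n, dest, source
  COMPLEX(KIND(0.d0)), INTENT(IN)  :: sbuf(n)
  COMPLEX(KIND(0.d0)), INTENT(OUT) :: rbuf(n)
  INTEGER :: request(2), statuses(MPI_STATUS_SIZE,2), ierr

  ! Slot 1 for the send, slot 2 for the receive; MPI_WAITALL then
  ! blocks only on the two requests this task itself initiated.
  CALL MPI_ISEND(sbuf, n, MPI_DOUBLE_COMPLEX, dest, 0, MPI_COMM_WORLD, request(1), ierr)
  CALL MPI_IRECV(rbuf, n, MPI_DOUBLE_COMPLEX, source, 0, MPI_COMM_WORLD, request(2), ierr)
  CALL MPI_WAITALL(2, request, statuses, ierr)
END SUBROUTINE exchange

If the statuses are not needed, MPI_STATUSES_IGNORE can be passed in place
of the statuses array.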
Do you have any suggestions? Is there any possibility of not using a
vector as send_request and at the same time having a WAIT?
Regarding the code, you are perfectly right; I hope to improve it in the future.
Thanks again
Diego
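One possibility, if the array really is unwanted (this is plain standard
MPI, not something suggested in the thread): keep two scalar requests and
wait on each with MPI_WAIT. A sketch, with placeholder buffer and rank names:

SUBROUTINE exchange_no_array(sbuf, rbuf, n, dest, source)
  USE MPI
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n, dest, source
  COMPLEX(KIND(0.d0)), INTENT(IN)  :: sbuf(n)
  COMPLEX(KIND(0.d0)), INTENT(OUT) :: rbuf(n)
  INTEGER :: send_req, recv_req, ierr
  INTEGER :: status(MPI_STATUS_SIZE)

  CALL MPI_ISEND(sbuf, n, MPI_DOUBLE_COMPLEX, dest, 0, MPI_COMM_WORLD, send_req, ierr)
  CALL MPI_IRECV(rbuf, n, MPI_DOUBLE_COMPLEX, source, 0, MPI_COMM_WORLD, recv_req, ierr)
  ! One MPI_WAIT per scalar request replaces the MPI_WAITALL.
  CALL MPI_WAIT(recv_req, status, ierr)
  CALL MPI_WAIT(send_req, status, ierr)
END SUBROUTINE exchange_no_array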
On 30 September 2015 at 16:50, Jeff Squyres (jsquyres) wrote:
I don't think that this pattern was obvious from the code snippet you sent,
which is why I asked for a small, self-contained reproducer. :-)
I don't know offhand how send_request(:) will be passed to C.
Dear all,
thanks for the explanation, but something is still not clear to me.
I have 4 CPUs. Only three of them send; let's say:
CPU 0 sends to CPU 1
CPU 1 sends to CPU 2
CPU 2 sends to CPU 3
and only three receive; let's say:
CPU 1 from CPU 0
CPU 2 from CPU 1
CPU 3 from CPU 2
so I use ALLOCATE(send_request(nMsg), recv_request(nMsg)).
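For exactly this chain, one standard alternative (not raised in the thread)
is MPI_PROC_NULL: a send or receive whose partner is MPI_PROC_NULL completes
immediately, so every rank can post both calls unconditionally. A sketch
with a single DOUBLE PRECISION value per message:

PROGRAM chain_demo
  USE MPI
  IMPLICIT NONE
  INTEGER :: rank, nprocs, left, right, ierr
  INTEGER :: request(2), statuses(MPI_STATUS_SIZE,2)
  DOUBLE PRECISION :: sbuf, rbuf

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! CPU 0..n-2 send right, CPU 1..n-1 receive from the left; the
  ! missing neighbours of the boundary ranks become MPI_PROC_NULL.
  right = rank + 1
  IF (right > nprocs-1) right = MPI_PROC_NULL
  left = rank - 1
  IF (left < 0) left = MPI_PROC_NULL

  sbuf = DBLE(rank)
  CALL MPI_ISEND(sbuf, 1, MPI_DOUBLE_PRECISION, right, 0, MPI_COMM_WORLD, request(1), ierr)
  CALL MPI_IRECV(rbuf, 1, MPI_DOUBLE_PRECISION, left, 0, MPI_COMM_WORLD, request(2), ierr)
  CALL MPI_WAITALL(2, request, statuses, ierr)

  CALL MPI_FINALIZE(ierr)
END PROGRAM chain_demo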
Put differently:
- You have an array of N requests.
- If you're only filling up M of them (where M < N), then either the
remaining slots must be MPI_REQUEST_NULL before you hand the whole array
to MPI_WAITALL, or you pass only M as the count (see the sketch below).
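A sketch of both options, assuming the m filled requests sit at the front
of the array (subroutine shape and names are illustrative):

SUBROUTINE wait_partial(requests, n, m)
  USE MPI
  IMPLICIT NONE
  INTEGER, INTENT(IN)    :: n, m        ! only slots 1..m were filled
  INTEGER, INTENT(INOUT) :: requests(n)
  INTEGER :: statuses(MPI_STATUS_SIZE,n), ierr

  ! Option 1: wait on the filled prefix only.
  CALL MPI_WAITALL(m, requests, statuses, ierr)

  ! Option 2: null the unused tail, then wait on all n slots.
  ! requests(m+1:n) = MPI_REQUEST_NULL
  ! CALL MPI_WAITALL(n, requests, statuses, ierr)
END SUBROUTINE wait_partial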
Diego,
if you invoke Isend three times with three different send_request elements,
and the same for Irecv, then you do not have to worry about MPI_REQUEST_NULL.
Based on your snippet, there could be an issue on ranks 0 and n-1;
also, the index of send_request is MPIdata%rank+1, and if MPIdata%rank
comes from MPI_Comm_rank, that index differs on every rank and can run
past the end of the array on the last one.
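To illustrate the indexing point (the subroutine shape here is an
assumption; only MPIdata%rank and send_request come from Diego's code):
request slots belong to the calling task, so they should be numbered
locally, not by rank.

SUBROUTINE post_pair(sbuf, rbuf, n, left, right, requests)
  USE MPI
  IMPLICIT NONE
  INTEGER, INTENT(IN)  :: n, left, right
  COMPLEX(KIND(0.d0)), INTENT(IN)  :: sbuf(n)
  COMPLEX(KIND(0.d0)), INTENT(OUT) :: rbuf(n)
  INTEGER, INTENT(OUT) :: requests(2)
  INTEGER :: ierr

  ! Fixed local slots 1 and 2. Indexing with MPIdata%rank+1 instead
  ! would address a different slot on every rank and overrun the
  ! array on the highest rank.
  CALL MPI_ISEND(sbuf, n, MPI_DOUBLE_COMPLEX, right, 0, MPI_COMM_WORLD, requests(1), ierr)
  CALL MPI_IRECV(rbuf, n, MPI_DOUBLE_COMPLEX, left, 0, MPI_COMM_WORLD, requests(2), ierr)
END SUBROUTINE post_pair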
Dear Gilles, Dear All,
what do you mean by the array of requests having to be initialized via
MPI_Isend or MPI_Irecv?
In my code I use MPI_Isend and MPI_Irecv three times each, so I have
a send_request(3). According to this, do I have to use MPI_REQUEST_NULL?
In the meantime I will check my code.
Thanks
Diego
Diego,
if you invoke MPI_Waitall on three requests, and some of them have not been
initialized (manually, or via MPI_Isend or MPI_Irecv), then the behavior of
your program is undefined.
If you want to use an array of requests (because it makes the program
simpler) but you know not all of them are actually used, then set the
unused ones to MPI_REQUEST_NULL.
ok,
let me try
Diego
On 29 September 2015 at 16:23, Jeff Squyres (jsquyres) wrote:
This code does not appear to compile -- there's no main program, for example.
Can you make a small, self-contained example program that shows the problem?
> On Sep 29, 2015, at 10:08 AM, Diego Avesani wrote:
Dear Jeff, dear all,
I have noticed that if I initialize the variables, I do not get the error
anymore:
!
ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))
SEND_REQUEST=0
RECV_REQUEST=0
!
Could you please explain to me why?
Thanks
Diego
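A likely explanation, hedged because it is implementation-specific: the
Fortran value of MPI_REQUEST_NULL may simply coincide with 0 in the MPI
library in use here, so zero-filling the arrays accidentally produces valid
null requests. The portable spelling uses the named constant:

!
ALLOCATE(SEND_REQUEST(nMsg), RECV_REQUEST(nMsg))
SEND_REQUEST = MPI_REQUEST_NULL   ! not the literal 0: the integer value
RECV_REQUEST = MPI_REQUEST_NULL   ! of a null handle is not portable
!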
On 29 September 2015 at 16:08, Diego Avesani wrote:
Dear Jeff, Dear all,
the code is very long, so here is part of it. I hope this helps.
What do you think?
SUBROUTINE MATOPQN
USE VARS_COMMON, ONLY: COMM_CART, send_messageR, recv_messageL, nMsg
USE MPI
INTEGER :: send_request(nMsg), recv_request(nMsg)
INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg)
Can you send a small reproducer program?
> On Sep 28, 2015, at 4:45 PM, Diego Avesani wrote:
Dear all,
I have to use a send_request in an MPI_WAITALL.
Here is the strange thing:
if I use, at the beginning of the SUBROUTINE,
INTEGER :: send_request(3), recv_request(3)
I have no problem, but if I use
USE COMONVARS, ONLY: nMsg
with nMsg=3
and after that I declare
INTEGER :: send_request(nMsg), recv_request(nMsg)
then I get the error.
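The small self-contained reproducer Jeff asked for might look like the
following (module, variable, and subroutine names are taken from Diego's
snippets; everything else is a guess at the surrounding code):

MODULE COMONVARS
  IMPLICIT NONE
  INTEGER :: nMsg = 3
END MODULE COMONVARS

PROGRAM reproducer
  USE MPI
  IMPLICIT NONE
  INTEGER :: ierr
  CALL MPI_INIT(ierr)
  CALL MATOPQN
  CALL MPI_FINALIZE(ierr)
END PROGRAM reproducer

SUBROUTINE MATOPQN
  USE COMONVARS, ONLY: nMsg
  USE MPI
  IMPLICIT NONE
  INTEGER :: send_request(nMsg)                     ! automatic array
  INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg)
  INTEGER :: ierr

  ! Without the next line the handles contain garbage and MPI_WAITALL
  ! is free to crash; with it, every slot is a completed request and
  ! the call returns immediately.
  send_request = MPI_REQUEST_NULL
  CALL MPI_WAITALL(nMsg, send_request, send_status_list, ierr)
END SUBROUTINE MATOPQN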