Abe-san,
You can be blocking on one side and non-blocking on the other.
For example, one task can do MPI_Send, and the other MPI_Irecv followed by MPI_Wait.
In order to avoid deadlock, your program should:
1. master: MPI_Isend, then start the workers
2. workers: receive and process messages (in there
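The mix described above (blocking send on one side, non-blocking receive plus wait on the other) can be sketched without MPI at all. Below is a minimal Python model using threads; the channel and thread names are illustrative, not MPI API:

```python
import threading
import queue

# One-directional channel: put() models a send that completes once the
# message has been handed over; get() models the matching receive.
chan = queue.Queue(maxsize=1)

results = []

def sender():
    chan.put("work item")  # blocking send (MPI_Send analogue)

def receiver():
    # Post the receive in a helper thread (MPI_Irecv analogue: the call
    # below returns immediately), then wait on it (MPI_Wait analogue).
    req = threading.Thread(target=lambda: results.append(chan.get()))
    req.start()  # "MPI_Irecv": returns at once, receive is now posted
    req.join()   # "MPI_Wait": blocks until the message has arrived

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # -> ['work item']
```

The point is only that the two sides need not use the same style: a blocking send matches a non-blocking receive just fine, as long as the receive does get posted.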
Dear Gilles-san and all,
I thought MPI_Isend kept the sent data buffered somewhere, waiting for the
corresponding MPI_Irecv.
The outline of my code regarding MPI:
1. send ALL tagged messages to the other node (MPI_Isend) in the master
thread, then launch the worker threads, and
2. receive the corresponding
Abe-san,
MPI_Isend followed by MPI_Wait is equivalent to MPI_Send.
Depending on the message size and the number of in-flight messages, that can
deadlock if two tasks send to each other and no matching recv has been posted.
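That failure mode can be modelled without MPI: treat a rendezvous-style send as one that blocks until the peer has posted a receive, then have both tasks send first and receive second. The helper below is a toy Python sketch of that protocol, not anything from the MPI API; the 0.5 s timeout just detects the deadlock for the demo:

```python
import threading

def run(post_recv_first):
    """Two tasks exchange one message each; return True if both complete."""
    recv_posted = [threading.Event(), threading.Event()]
    done = [False, False]

    def task(rank):
        peer = 1 - rank
        if post_recv_first:
            recv_posted[rank].set()  # MPI_Irecv analogue: post receive early
        # rendezvous-style send: blocks until the peer's receive is posted
        if not recv_posted[peer].wait(timeout=0.5):
            return  # timed out: this task is stuck in its send
        if not post_recv_first:
            recv_posted[rank].set()  # receive posted only after the send
        done[rank] = True

    threads = [threading.Thread(target=task, args=(r,)) for r in (0, 1)]
    for t in threads: t.start()
    for t in threads: t.join()
    return all(done)

print(run(post_recv_first=False))  # send-first on both sides: deadlock -> False
print(run(post_recv_first=True))   # receive posted first: completes  -> True
```

Posting the receive before the send (the MPI_Irecv + MPI_Wait pattern, or using MPI_Isend so the send call cannot block) is what breaks the cycle.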
Cheers,
Gilles
ABE Hiroshi wrote:
>Dear All,
>
>Installed openmpi 1.10.0 and gcc-5.2 using Fink (h
Abe-san,
Please make sure you use the same message size in your application and in your
test case: small messages can hide some application-level deadlocks.
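The reason small messages can hide the problem is the eager/rendezvous switch: below an implementation-dependent threshold the MPI library buffers the message and the send returns immediately, while larger messages block until the receiver is ready. A toy Python model of that behaviour follows; the EAGER_LIMIT value is made up, and real thresholds vary by MPI implementation and transport:

```python
import threading

EAGER_LIMIT = 64  # bytes; illustrative only -- real MPI thresholds vary

def exchange(msg_size):
    """Both tasks send msg_size bytes to each other before receiving.
    Return True if the exchange completes."""
    recv_posted = [threading.Event(), threading.Event()]
    done = [False, False]

    def task(rank):
        peer = 1 - rank
        if msg_size <= EAGER_LIMIT:
            pass  # eager protocol: message is buffered, send returns at once
        else:
            # rendezvous protocol: send blocks until the peer posts a receive
            if not recv_posted[peer].wait(timeout=0.5):
                return  # timed out: deadlocked in the send
        recv_posted[rank].set()  # now post our own receive
        done[rank] = True

    threads = [threading.Thread(target=task, args=(r,)) for r in (0, 1)]
    for t in threads: t.start()
    for t in threads: t.join()
    return all(done)

print(exchange(16))       # small message: eager path, completes   -> True
print(exchange(1 << 20))  # 1 MiB: rendezvous path, deadlocks      -> False
```

This is why a test case with tiny messages can pass while the real application, with larger payloads, hangs: the code path that deadlocks is simply never taken.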
Cheers,
Gilles
ABE Hiroshi wrote:
>Dear Gilles-san,
>
>
>Thank you for your prompt reply.
>
>The code is a licenced one so I will try