Shaun Jackman wrote:

For my MPI application, each process reads a file and, for each line, sends a message (MPI_Send) to one of the other processes, determined by the contents of that line. Each process posts a single MPI_Irecv and uses MPI_Request_get_status to test for a received message. If a message has been received, it processes the message and posts a new MPI_Irecv. I believe this situation is not safe and prone to deadlock, since MPI_Send may block. The receiver would need to post as many MPI_Irecv calls as there are messages it expects to receive, but it does not know in advance how many messages to expect from the other processes. How is this situation usually handled in an MPI application where the number of messages to receive is unknown?

In a non-MPI network program I would create one thread for receiving and processing, and one thread for transmitting. Is threading a good solution? Is there a simpler solution?

Under what conditions will MPI_Send block and under what conditions will it definitely not block?

I haven't seen any other responses, so I'll try.

The conditions under which MPI_Send will block are implementation-dependent. Even for a particular implementation, the conditions may be tricky to describe -- e.g., which interconnect is being used to reach the peer, whether there is any congestion, and so on. I guess you could use buffered sends... or maybe you can't, if you really don't know how many messages will be sent.
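For reference, a buffered send just means attaching a user buffer with MPI_Buffer_attach and then using MPI_Bsend, which returns as soon as the message has been copied into that buffer. Something along these lines (the sizing helper and constants are made up for illustration, and you still need an upper bound on how many messages can be outstanding at once):

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Attach a send buffer big enough for max_msgs messages of up to max_len
 * bytes each.  MPI_BSEND_OVERHEAD covers MPI's per-message bookkeeping. */
void attach_send_buffer(int max_msgs, int max_len)
{
    int size = max_msgs * (max_len + MPI_BSEND_OVERHEAD);
    MPI_Buffer_attach(malloc(size), size);
}

/* MPI_Bsend copies the line into the attached buffer and returns
 * without waiting for the receiver. */
void send_line_buffered(const char *line, int dest, MPI_Comm comm)
{
    MPI_Bsend(line, (int)strlen(line) + 1, MPI_CHAR, dest, 0, comm);
}

The catch is that running out of attached buffer space is an error rather than a graceful block, so this only helps if you can bound the traffic.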

Let's just assume they'll block.

I'm unsure what overall design you're suggesting, so I'll suggest something.

Each process posts an MPI_Irecv to listen for incoming messages.

Each process enters a loop in which it reads its file and sends out messages. Within this loop, you also loop on MPI_Test to see if any message has arrived. If so, process it, post another MPI_Irecv(), and keep polling. (I'd use MPI_Test rather than MPI_Request_get_status since you'll have to call something like MPI_Test anyhow to complete the receive.)
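Roughly like this -- the fixed-size message format, the tag, and the read_next_line/process_message helpers are placeholders for whatever your application actually does:

#include <mpi.h>

#define MSG_LEN 256

/* Placeholders for the application's own logic. */
extern int  read_next_line(char *line, int *dest);  /* returns 0 at end of file */
extern void process_message(const char *msg);

void send_and_drain(MPI_Comm comm)
{
    char        inbuf[MSG_LEN], line[MSG_LEN];
    int         dest, flag;
    MPI_Request req;

    /* Keep one receive posted at all times so peers' sends can complete. */
    MPI_Irecv(inbuf, MSG_LEN, MPI_CHAR, MPI_ANY_SOURCE, 0, comm, &req);

    while (read_next_line(line, &dest)) {
        MPI_Send(line, MSG_LEN, MPI_CHAR, dest, 0, comm);

        /* Drain whatever has arrived, reposting the receive each time. */
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        while (flag) {
            process_message(inbuf);
            MPI_Irecv(inbuf, MSG_LEN, MPI_CHAR, MPI_ANY_SOURCE, 0, comm, &req);
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        }
    }
}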

Once you've posted all your sends, send out a special message to indicate you're finished. I'm thinking of some sort of tree fan-in/fan-out barrier so that everyone will know when everyone is finished.

Keep polling on MPI_Test, processing further receives or advancing your fan-in/fan-out barrier.
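The simplest version of that notification I can think of is flat rather than a tree: every rank sends every other rank a zero-length message with a distinct "done" tag, then keeps receiving until it has seen a done message from everyone. A sketch of that idea (it assumes the receive left posted by the send phase has already been completed or cancelled, and the tag values are arbitrary):

#include <mpi.h>

#define TAG_DATA 0   /* tag used for the ordinary data messages */
#define TAG_DONE 1   /* "I have posted all my sends" */
#define MSG_LEN  256

extern void process_message(const char *msg);  /* placeholder */

void finish_phase(MPI_Comm comm)
{
    int rank, size, peers_done = 0;
    char buf[MSG_LEN];
    MPI_Status status;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Tell everyone else we've posted all of our data sends. */
    for (int r = 0; r < size; ++r)
        if (r != rank)
            MPI_Send(buf, 0, MPI_CHAR, r, TAG_DONE, comm);

    /* Keep draining: count "done" messages, process any remaining data. */
    while (peers_done < size - 1) {
        MPI_Recv(buf, MSG_LEN, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status);
        if (status.MPI_TAG == TAG_DONE)
            ++peers_done;
        else
            process_message(buf);
    }
}

A tree fan-in/fan-out does the same job with O(log P) messages per rank instead of O(P), which matters at scale but not for getting the logic right.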

So, the key ingredients are:

*) keep polling on MPI_Test and reposting MPI_Irecv calls to drain incoming messages while you're still in your "send" phase
*) have another mechanism for processes to notify one another when they've finished their send phases
