Hi Guys, 
I'm working on a client/server application on Windows 7. Both the client and 
the server have a multithreaded architecture, more precisely three threads per 
application: the first receives messages, the second analyses and processes 
them, and the third sends the answers. Since I implemented this architecture I 
sometimes get deadlocks, and I don't know why. The code of the two threads 
that use MPI is below; I think the problem is the blocking calls, so is there 
any alternative? What I tried to do is make the whole connection session 
uninterruptible, but I couldn't find a way to do it. (A rough sketch of the 
kind of non-blocking alternative I have in mind is at the end of this message.)
Receiver thread:

    forever {
        // wait for a new connection and message
        emit WriteLine("waiting for connections on <" + QString::fromStdString(myPort) + ">\n");
        MPI_Comm_accept(myPort, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
        MPI_Recv(&message, 1, MessageType, MPI_ANY_SOURCE, MPI_ANY_TAG, client, &status);

        // insert the message into the FIFO
        DIH.InFifo.Insert(message);
        emit WriteLine("A message " + QString::number(message.Command, 10) + " has been received from <" + QString::fromStdString(message.portSource) + ">\n");

        // disconnect from the current client
        MPI_Barrier(client);
        MPI_Comm_disconnect(&client);
    }
Sender thread:

    forever {
        // remove the next message from the FIFO
        message = DIH.OutFifo.Mov();

        // send it to its destination
        emit WriteLine("trying to connect to <" + QString::fromStdString(message.portDest) + ">\n");
        MPI_Comm_connect(message.portDest, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
        MPI_Send(&message, 1, MessageType, 0, 0, client);
        emit WriteLine("A message " + QString::number(message.Command, 10) + " has been sent to <" + QString::fromStdString(message.portDest) + ">\n");

        // disconnect from the current client
        MPI_Barrier(client);
        MPI_Comm_disconnect(&client);
    }
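
To show what I mean by an alternative: would a polling pattern like the sketch 
below be the right direction for the receive side? It is rough and untested; 
the function name RecvWithPolling and the stop flag are just things I would 
add, and it assumes MPI was initialised with MPI_THREAD_MULTIPLE since several 
threads make MPI calls.

    #include <mpi.h>
    #include <atomic>
    #include <chrono>
    #include <thread>

    // Sketch only: post a non-blocking receive and poll it, so the thread
    // never blocks forever inside MPI_Recv and can react to a stop request.
    static bool RecvWithPolling(void *buf, int count, MPI_Datatype type,
                                MPI_Comm comm, std::atomic<bool> &stop)
    {
        MPI_Request request;
        MPI_Status  status;
        int done = 0;

        MPI_Irecv(buf, count, type, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &request);

        while (!done) {
            MPI_Test(&request, &done, &status);     // returns immediately
            if (!done) {
                if (stop) {
                    // give up: cancel the pending receive and complete it
                    MPI_Cancel(&request);
                    MPI_Wait(&request, &status);
                    int cancelled = 0;
                    MPI_Test_cancelled(&status, &cancelled);
                    return !cancelled;              // false if nothing arrived
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            }
        }
        return true;
    }

    // In the receiver thread I would then call, instead of MPI_Recv:
    //     if (RecvWithPolling(&message, 1, MessageType, client, stopFlag))
    //         DIH.InFifo.Insert(message);

(I realise this doesn't help with MPI_Comm_accept and MPI_Comm_connect 
themselves, which as far as I can tell have no non-blocking versions.)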
                          
