Hi,

I'm new to MPI and Open MPI (all in the same day, which is testing me). My 
idea is to compare Open MPI against plain shared memory for ad-hoc 
inter-process channels on the same machine.

I've put together a couple of small publisher and subscriber examples using 
MPI_Comm_accept and MPI_Comm_connect (Ubuntu 14.04 with Open MPI 1.8.1). Send 
and recv work fine. The process-to-process travel time reported by my demo 
looks like this:
  travel time = 46.648068 microseconds
  travel time = 34.275774 microseconds
which is reasonable, as it appears to be using TCP for the transport even on 
the same machine.

Open MPI is certainly nice and easy to get started with. Thank you for the 
cool messaging!

A simple UDP localhost channel without MPI is about 10 us on this machine.

My question is: how do I get this accept/connect channel set up to use shared 
memory instead of TCP? A bit of googling left me rather lost on where to 
begin, so a quick pointer on where to look and read would be much appreciated.

I'm interested in how far down I can push the inter-process latency.

If I do something similar with traditional static MPI processes rather than 
the dynamic accept/connect approach, it chooses sm as the transport and gives 
latencies as expected:
  travel time = 300.776560 nanoseconds
  travel time = 286.763106 nanoseconds

Which is quite impressive of Open MPI on my little desktop i7. Any help in 
understanding how to get similar numbers for the MPI_Comm_accept case would 
be most welcome!
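
For reference, the static numbers above come from a plain ping-pong loop; a 
stripped-down sketch of that kind of test (simplified from what I actually 
run, the iteration count and message size are just placeholders) is:

  /* Minimal two-rank ping-pong; run with: mpirun -np 2 ./pingpong
     Reports half the average round trip as the one-way travel time. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, i, iters = 100000;
      char buf[64] = {0};
      double t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < iters; i++) {
          if (rank == 0) {
              MPI_Send(buf, sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          } else {
              MPI_Recv(buf, sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Send(buf, sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      if (rank == 0)
          printf("travel time = %f nanoseconds\n",
                 (t1 - t0) / iters / 2.0 * 1e9);
      MPI_Finalize();
      return 0;
  }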

Kind regards,

Amy.

_______________________
I'm playing around with: ompi-server --no-daemonize -r -

My publisher has a: 

   MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &channel );
   ...
   MPI_Send(&mess, sizeof(message_t), MPI_BYTE, 0, 0, channel);

My subscriber has a:
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &channel );
    ...
    MPI_Recv(buffer, 1024, MPI_BYTE, 0, 0, channel, MPI_STATUS_IGNORE );

Yes, I should use proper MPI datatypes instead of raw bytes, but I hope this 
will do for a test.
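
In case the surrounding plumbing matters, the port handling around those 
calls is roughly like this (a simplified sketch: the service name "pubsub" is 
just a placeholder, I'm assuming the port string travels via names published 
to the ompi-server, and error handling is omitted):

Publisher side:
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port_name);              /* get a port string */
    MPI_Publish_name("pubsub", MPI_INFO_NULL, port_name); /* register with ompi-server */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &channel);
    ...
    MPI_Unpublish_name("pubsub", MPI_INFO_NULL, port_name);
    MPI_Close_port(port_name);

Subscriber side:
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Lookup_name("pubsub", MPI_INFO_NULL, port_name);  /* resolve via ompi-server */
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &channel);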

The client test runs like this:
  mpirun -np 1 --ompi-server "3997171712.0;tcp://10.12.14.12:57523" \
    ./mpi_client_test.helper
