In the example you cite below, it looks like you're mixing MPI_Gather and MPI_Send.

MPI_Gather is a "collective" routine; it must be called by all processes in the communicator. All processes send a buffer/message to the root; only the root process receives all the buffers/messages (note that even the root sends to itself).
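For example, a gather of one int per rank would look something like this (a minimal sketch, not taken from your pasted code; the payload, each rank's own rank number, is just for illustration -- the point is that every rank makes the same MPI_Gather call):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
 int np, me, sbuf, *rbuf = NULL, i;

 MPI_Init(&argc, &argv);
 MPI_Comm_size(MPI_COMM_WORLD, &np);
 MPI_Comm_rank(MPI_COMM_WORLD, &me);

 sbuf = me;                                      /* each rank contributes one int */
 if ( me == 0 ) rbuf = malloc(np * sizeof(int)); /* only the root needs a receive buffer */

 /* collective: called by ALL ranks, root included */
 MPI_Gather(&sbuf, 1, MPI_INT, rbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

 if ( me == 0 ) {
  for ( i = 0; i < np; i++ ) printf("rank %d sent %d\n", i, rbuf[i]);
  free(rbuf);
 }

 MPI_Finalize();
 return 0;
}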

MPI_Send is a point-to-point function, meaning that it must be paired with some flavor of MPI_Recv (there are a few different variants of MPI_Recv, like MPI_Irecv, which is a non-blocking receive).
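The non-blocking variant looks something like this (just a fragment, assuming np and me are set up as in Eugene's program below; the tag 99 is arbitrary):

 MPI_Request req;
 int sbuf = 42, rbuf = -1;

 if ( me == 1 ) MPI_Send(&sbuf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);
 if ( me == 0 ) {
  /* post the receive; this call returns immediately */
  MPI_Irecv(&rbuf, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
  /* ... other work can overlap with the message arriving ... */
  MPI_Wait(&req, MPI_STATUS_IGNORE);  /* only now is rbuf safe to read */
 }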

Eugene's example is a good one -- it shows point-to-point only. It doesn't mix Gather and Recv.
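For your 3-process case, the point-to-point version of a "gather" would look something like this sketch (again, not your pasted code; it reuses Eugene's tag 344, and status.MPI_SOURCE tells you which rank each message came from):

 int i, val;
 MPI_Status status;

 if ( me == 0 ) {
  /* rank 0 receives one message from each of the other np-1 ranks */
  for ( i = 1; i < np; i++ ) {
   MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, 344, MPI_COMM_WORLD, &status);
   printf("got %d from rank %d\n", val, status.MPI_SOURCE);
  }
 } else {
  MPI_Send(&me, 1, MPI_INT, 0, 344, MPI_COMM_WORLD);
 }

Either approach works for your PS question below; MPI_Gather is usually the simpler choice and gives the MPI implementation a chance to optimize, but only if every rank in the communicator makes the call.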


On Dec 23, 2008, at 2:04 PM, Win Than Aung wrote:

Hi,
thanks for your reply. Let's say I have 3 processors. I send a message from the 1st and 2nd processors and want to gather them in processor 0, so I tried it like the following, but processor 0 couldn't receive the messages sent from processors 1 and 2.

http://www.nomorepasting.com/getpaste.php?pasteid=22985

PS: Is MPI_Recv better for receiving messages from multiple processors and gathering them in one processor, or is MPI_Gather better?
thanks
winthan



On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh <eugene....@sun.com> wrote:
Win Than Aung wrote:

MPI_Recv(....) << is it possible to receive messages sent from other sources? I tried MPI_ANY_SOURCE in place of the source argument, but it doesn't work out.

Yes, of course. Can you send a short example of what doesn't work? The example should presumably be fewer than about 20 lines. Here is an example that works:

% cat a.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
 int np, me, sbuf = -1, rbuf = -2;

 MPI_Init(&argc, &argv);
 MPI_Comm_size(MPI_COMM_WORLD, &np);
 MPI_Comm_rank(MPI_COMM_WORLD, &me);
 if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD, -1);  /* need at least 2 ranks */

 /* rank 1 sends one int to rank 0; rank 0 receives it from any source */
 if ( me == 1 ) MPI_Send(&sbuf, 1, MPI_INT, 0, 344, MPI_COMM_WORLD);
 if ( me == 0 ) {
  MPI_Recv(&rbuf, 1, MPI_INT, MPI_ANY_SOURCE, 344, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  if ( rbuf == sbuf ) printf("Send/Recv self passed\n");
  else                printf("Send/Recv self FAILED\n");
 }

 MPI_Finalize();

 return 0;
}
% mpicc a.c
% mpirun -np 2 a.out
Send/Recv self passed
%


--
Jeff Squyres
Cisco Systems
