Please refer to the following code, which sends data to the root from multiple sources.
There is only one receive, so it receives only one message. When you
specify the element count for the receive, you are only specifying the
size of the buffer into which the message will be received. Only after
the message has been received can you inquire how big the message
actually was. Here is an example:

% cat a.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int np, me, peer, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    value = me * me + 1;

    if ( me == 0 ) {
        for ( peer = 0; peer < np; peer++ ) {
            if ( peer != 0 )
                MPI_Recv(&value, 1, MPI_INT, peer, 343, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            printf("peer %d had value %d\n", peer, value);
        }
    } else {
        MPI_Send(&value, 1, MPI_INT, 0, 343, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%

Alternatively:

#include <stdio.h>
#include <mpi.h>

#define MAXNP 1024

int main(int argc, char **argv)
{
    int np, me, peer, value, values[MAXNP];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    if ( np > MAXNP )
        MPI_Abort(MPI_COMM_WORLD, -1);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    value = me * me + 1;
    MPI_Gather(&value, 1, MPI_INT, values, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if ( me == 0 )
        for ( peer = 0; peer < np; peer++ )
            printf("peer %d had value %d\n", peer, values[peer]);

    MPI_Finalize();
    return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%

On Sun, Dec 28, 2008 at 7:45 PM, Jack Bryan <dtustud...@hotmail.com> wrote:
> Hi,
>
> I need to transfer data from multiple sources to one destination.
> The requirements are:
>
> (1) The source and destination nodes may work asynchronously.
>
> (2) Each source node generates data packages at its own pace,
> and there may be many packages to send. Whenever a data package
> is generated, it should be sent to the destination node at once,
> and the source node then continues working on generating the next
> package.
> (3) There is only one destination node, which must receive all data
> packages generated by the source nodes.
> Because the source and destination nodes may work asynchronously,
> the destination node should not wait for a specific source node until
> that source node sends out its data.
>
> The destination node should be able to receive a data package
> from any source node whenever the package is available at that
> source node.
>
> My question is:
>
> Which MPI functions should be used to implement the protocol above?
>
> If I use MPI_Send/MPI_Recv, they are blocking functions. The destination
> node has to wait for one node until its data is available.
>
> The communication overhead is too high.
>
> If I use MPI_Bsend, the destination node still has to use MPI_Recv,
> a blocking receive, for each message.
>
> This can make the destination node wait on a single source node while
> other source nodes may actually have data available.
>
> Any help or comments are appreciated!
>
> Thanks
>
> Dec. 28, 2008
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
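P.S. For the asynchronous-arrival requirement in the quoted question, the
first example above can be changed to receive messages in whatever order
they arrive, rather than in rank order, by passing MPI_ANY_SOURCE to
MPI_Recv. The status object then reports which rank actually sent each
message, and MPI_Get_count reports how many elements it really contained.
This is only a sketch along those lines, not part of the reply above:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int np, me, i, value, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    value = me * me + 1;

    if ( me == 0 ) {
        /* Receive np-1 messages, one per peer, in arrival order:
           the root never waits on one specific source while another
           source already has data ready. */
        for ( i = 1; i < np; i++ ) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 343,
                     MPI_COMM_WORLD, &status);
            /* Only now can we ask who sent the message and how
               big it actually was. */
            MPI_Get_count(&status, MPI_INT, &count);
            printf("peer %d sent %d int(s); value %d\n",
                   status.MPI_SOURCE, count, value);
        }
    } else {
        MPI_Send(&value, 1, MPI_INT, 0, 343, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

With this variant the printed lines may appear in any order across runs,
since the order now depends on message arrival, not on rank.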