[OMPI users] help with mpi

2008-12-23 Thread Win Than Aung
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu  via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.98023223877e-05 seconds
too few entries in machinefile


I put cout << "sth" in hello.cpp, but it doesn't get displayed.
Help!


[OMPI users] sending message to the source(0) from other processors

2008-12-23 Thread Win Than Aung
MPI_Recv() << is it possible to receive the message sent from other
sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
thanks


Re: [OMPI users] sending message to the source(0) from other processors

2008-12-23 Thread Win Than Aung
Hi, thanks for your reply. Let's say I have 3 processors. I sent messages from
the 1st and 2nd processors and want to gather them in processor 0, so I tried
the following. It couldn't receive the messages sent from processors 1 and 2.

http://www.nomorepasting.com/getpaste.php?pasteid=22985

PS: Is MPI_Recv better for receiving messages from multiple processors and
gathering them in 1 processor, or is MPI_Gather better?
thanks
winthan



On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh  wrote:

> Win Than Aung wrote:
>
>  MPI_Recv() << is it possible to receive the message sent from other
>> sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
>>
>
> Yes of course.  Can you send a short example of what doesn't work?  The
> example should presumably be less than about 20 lines.  Here is an example
> that works:
>
> % cat a.c
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char **argv) {
>  int np, me, sbuf = -1, rbuf = -2;
>
>  MPI_Init(&argc,&argv);
>  MPI_Comm_size(MPI_COMM_WORLD,&np);
>  MPI_Comm_rank(MPI_COMM_WORLD,&me);
>  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);
>
>  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
>  if ( me == 0 ) {
>
> MPI_Recv(&rbuf,1,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>   if ( rbuf == sbuf ) printf("Send/Recv self passed\n");
>   else printf("Send/Recv self FAILED\n");
>  }
>
>  MPI_Finalize();
>
>  return 0;
> }
> % mpicc a.c
> % mpirun -np 2 a.out
> Send/Recv self passed
> %
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


Re: [OMPI users] sending message to the source(0) from other processors

2008-12-23 Thread Win Than Aung
PS: extra question qsub -I -q standby -l select=1:ncpus=8
mpirun -np 4 ./hello
running mpdallexit on steele-a137.rcac.purdue.edu
LAUNCHED mpd on steele-a137.rcac.purdue.edu  via
RUNNING: mpd on steele-a137.rcac.purdue.edu
steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
time for 100 loops = 2.98023223877e-05 seconds
too few entries in machinefile

the MPI program is supposed to print 4 hello messages since there are four
processors, but for some reason it doesn't print them.
thanks
winthan


On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh  wrote:

> Win Than Aung wrote:
>
>  MPI_Recv() << is it possible to receive the message sent from other
>> sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
>>
>
> Yes of course.  Can you send a short example of what doesn't work?  The
> example should presumably be less than about 20 lines.  Here is an example
> that works:
>
> % cat a.c
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char **argv) {
>  int np, me, sbuf = -1, rbuf = -2;
>
>  MPI_Init(&argc,&argv);
>  MPI_Comm_size(MPI_COMM_WORLD,&np);
>  MPI_Comm_rank(MPI_COMM_WORLD,&me);
>  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);
>
>  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
>  if ( me == 0 ) {
>
> MPI_Recv(&rbuf,1,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>   if ( rbuf == sbuf ) printf("Send/Recv self passed\n");
>   else printf("Send/Recv self FAILED\n");
>  }
>
>  MPI_Finalize();
>
>  return 0;
> }
> % mpicc a.c
> % mpirun -np 2 a.out
> Send/Recv self passed
> %


Re: [OMPI users] sending message to the source(0) from other processors

2008-12-23 Thread Win Than Aung
Thanks for your reply, Jeff. So I tried the following:



#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me, sbuf = -1, rbuf = -2, mbuf = 1000;
  int data[2];

  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);

  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
  if ( me == 2 ) MPI_Send(&mbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
  if ( me == 0 ) {
    MPI_Recv(data,2,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
  }

  MPI_Finalize();

  return 0;
}

It can successfully receive the one sent from processor 1 (me==1), but it
fails to receive the one sent from processor 2 (me==2).
mpirun -np 3 hello


thanks
winthan
On Tue, Dec 23, 2008 at 1:15 PM, Jeff Squyres  wrote:

> This looks like a question for the MPICH2 developers.
>
> Specifically, it looks like you are using MPICH2, not Open MPI.  They are
> entirely different software packages maintained by different people -- we're
> not really qualified to answer questions about MPICH2.  The top-level API is
> the same between the two (meaning that you can compile and run your app in
> either Open MPI or MPICH2), but that's where the similarities end.
>
>
>
> On Dec 23, 2008, at 2:07 PM, Win Than Aung wrote:
>
>  PS: extra question
>> qsub -I -q standby -l select=1:ncpus=8
>> mpirun -np 4 ./hello
>> running mpdallexit on steele-a137.rcac.purdue.edu
>> LAUNCHED mpd on steele-a137.rcac.purdue.edu  via
>> RUNNING: mpd on steele-a137.rcac.purdue.edu
>> steele-a137.rcac.purdue.edu_36959 (172.18.24.147)
>> time for 100 loops = 2.98023223877e-05 seconds
>> too few entries in machinefile
>>
>> the mpi program supposed to print 4 hello msg since there r four
>> processors.
>> but for some reasons, it doesn't print
>> thanks
>> winthan
>>
>>
>> On Tue, Dec 23, 2008 at 1:23 PM, Eugene Loh  wrote:
>> Win Than Aung wrote:
>>
>> MPI_Recv() << is it possible to receive the message sent from other
>> sources? I tried MPI_ANY_SOURCE in place of source but it doesn't work out
>>
>> Yes of course.  Can you send a short example of what doesn't work?  The
>> example should presumably be less than about 20 lines.  Here is an example
>> that works:
>>
>> % cat a.c
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> int main(int argc, char **argv) {
>>  int np, me, sbuf = -1, rbuf = -2;
>>
>>  MPI_Init(&argc,&argv);
>>  MPI_Comm_size(MPI_COMM_WORLD,&np);
>>  MPI_Comm_rank(MPI_COMM_WORLD,&me);
>>  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);
>>
>>  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
>>  if ( me == 0 ) {
>>
>> MPI_Recv(&rbuf,1,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>>  if ( rbuf == sbuf ) printf("Send/Recv self passed\n");
>>  else printf("Send/Recv self FAILED\n");
>>  }
>>
>>  MPI_Finalize();
>>
>>  return 0;
>> }
>> % mpicc a.c
>> % mpirun -np 2 a.out
>> Send/Recv self passed
>> %
>
>
> --
> Jeff Squyres
> Cisco Systems
>
>



Re: [OMPI users] sending message to the source(0) from other processors

2008-12-24 Thread Win Than Aung
Thanks, Eugene, for your example; it helps me a lot. I bumped into one more
problem. Let's say I have the file contents as follows. I have a total of six
files, which all contain real and imaginary values.
"
1.001212 1.0012121  //0th
1.001212 1.0012121  //1st
1.001212 1.0012121  //2nd
1.001212 1.0012121  //3rd
1.001212 1.0012121  //4th
1.001212 1.0012121 //5th
1.001212 1.0012121 //6th
"
char send_buffer[1000];
I use "mpirun -np 6 a.out", which means I let each processor access one file.
Each processor will add the "0th and 2nd" (even) values, which will be sent to
the root processor and saved as "file_even_add.dat"; each processor will also
add the "1st and 3rd" (odd) values, which will be sent to the root processor
(here, 0) and saved as "file_odd_add.dat".

char recv_buffer[1000];
FILE* filepteven;
FILE* fileptodd;
if(mpi_my_id == root)
{
   filepteven = fopen("C:\\fileeven.dat", "w");
   fileptodd = fopen("C:\\fileodd.dat", "w");
   int peer = 0;
   for(peer = 0; peer < np; peer++)
   {
      if(peer != root)
      {
         MPI_Recv(recv_buffer,MAX_STR_LEN,MPI_BYTE,MPI_ANY_SOURCE,MPI_ANY_TAG,MPI_COMM_WORLD,&status);
      }
      fprintf(filepteven, "%s \n", recv_buffer);
   }
}

My question is how do I know which send buffer has the even sums and which has
the odd sums? In which order did they get sent?
thanks
winthan

On Tue, Dec 23, 2008 at 3:53 PM, Eugene Loh  wrote:

>  Win Than Aung wrote:
>
> thanks for your reply jeff
>  so i tried following
>
>
>
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char **argv) {
>  int np, me, sbuf = -1, rbuf = -2,mbuf=1000;
> int data[2];
>  MPI_Init(&argc,&argv);
>  MPI_Comm_size(MPI_COMM_WORLD,&np);
>  MPI_Comm_rank(MPI_COMM_WORLD,&me);
>  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);
>
>  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
> if(me==2) MPI_Send( &mbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
> if ( me == 0 ) {
>
> MPI_Recv(data,2,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>  }
>
>  MPI_Finalize();
>
>  return 0;
> }
>
> it can successfuly receive the one sent from processor 1(me==1) but it
> failed to receive the one sent from processor 2(me==2)
> mpirun -np 3 hello
>
> There is only one receive, so it receives only one message.  When you
> specify the element count for the receive, you're only specifying the size
> of the buffer into which the message will be received.  Only after the
> message has been received can you inquire how big the message actually was.
>
> Here is an example:
>
> % cat a.c
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char **argv) {
>   int np, me, peer, value;
>
>   MPI_Init(&argc,&argv);
>   MPI_Comm_size(MPI_COMM_WORLD,&np);
>   MPI_Comm_rank(MPI_COMM_WORLD,&me);
>
>   value = me * me + 1;
>   if ( me == 0 ) {
> for ( peer = 0; peer < np; peer++ ) {
>   if ( peer != 0 )
> MPI_Recv(&value,1,MPI_INT,peer,343,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>   printf("peer %d had value %d\n", peer, value);
> }
>   }
>   else MPI_Send(&value,1,MPI_INT,0,343,MPI_COMM_WORLD);
>
>   MPI_Finalize();
>
>   return 0;
> }
> % mpirun -np 3 a.out
> peer 0 had value 1
> peer 1 had value 2
> peer 2 had value 5
> %
>
> Alternatively,
>
> #include <stdio.h>
> #include <mpi.h>
>
> #define MAXNP 1024
> int main(int argc, char **argv) {
>   int np, me, peer, value, values[MAXNP];
>
>   MPI_Init(&argc,&argv);
>   MPI_Comm_size(MPI_COMM_WORLD,&np);
>   if ( np > MAXNP ) MPI_Abort(MPI_COMM_WORLD,-1);
>   MPI_Comm_rank(MPI_COMM_WORLD,&me);
>   value = me * me + 1;
>
>   MPI_Gather(&value, 1, MPI_INT,
>  values, 1, MPI_INT, 0, MPI_COMM_WORLD);
>
>   if ( me == 0 )
> for ( peer = 0; peer < np; peer++ )
>   printf("peer %d had value %d\n", peer, values[peer]);
>
>   MPI_Finalize();
>   return 0;
> }
> % mpirun -np 3 a.out
> peer 0 had value 1
> peer 1 had value 2
> peer 2 had value 5
> %
>
> Which is better?  Up to you.  The collective routines (like MPI_Gather) do
> offer MPI implementors (like people developing Open MPI) the opportunity to
> perform special optimizations (e.g., gather using a binary tree instead of
> having the root process perform so many receives).
>


Re: [OMPI users] sending message to the source(0) from other processors

2008-12-24 Thread Win Than Aung
I got the solution. I just need to set the appropriate tags on the sends and
receives. Sorry for asking.
thanks
winthan

On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung  wrote:

> thanks Eugene for your example, it helps me a lot.I bump into one more
> problems
> lets say I have the file content as follow
> I have total of six files which all contain real and imaginary value.
> "
> 1.001212 1.0012121  //0th
> 1.001212 1.0012121  //1st
> 1.001212 1.0012121  //2nd
> 1.001212 1.0012121  //3rd
> 1.001212 1.0012121  //4th
> 1.001212 1.0012121 //5th
> 1.001212 1.0012121 //6th
> "
> char send_buffer[1000];
> i use "mpirun -np 6 a.out" which mean i let each processor get access to
> one file
> each processor will add "0th and 2nd"(even values) (those values will be
> sent to root processor and save as file_even_add.dat" and also each
> processor will add "1st and 3rd"(odd values) (those values will be sent to
> root processor(here is 0) and saved as "file_odd_add.dat".
>
> char recv_buffer[1000];
> File* file_even_dat;
> File* file_odd_dat;
> if(mpi_my_id == root)
> {
>filepteven = fopen("C:\\fileeven.dat");
>fileptodd = fopen("C:\\fileodd.dat");
>  int peer =0;
> for(peer = 0; peer < np; peer++){
>   if(peer!=root)
>   {
>
> MPI_Recv(recv_buffer,MAX_STR_LEN,MPI_BYTE,MPI_ANY_SOURCE,MPI_ANY_TAG,MPI_COMM_WORLD,&status);
>   }
>   fprintf(filepteven, "%s \n" ,recv_buffer);
>}
> }
>
> My question is how do i know which sentbuffer has even add values and which
> sentbuffer has odd add values? in which order did they get sent?
> thanks
> winthan
>
> On Tue, Dec 23, 2008 at 3:53 PM, Eugene Loh  wrote:
>
>>  Win Than Aung wrote:
>>
>> thanks for your reply jeff
>>  so i tried following
>>
>>
>>
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> int main(int argc, char **argv) {
>>  int np, me, sbuf = -1, rbuf = -2,mbuf=1000;
>> int data[2];
>>  MPI_Init(&argc,&argv);
>>  MPI_Comm_size(MPI_COMM_WORLD,&np);
>>  MPI_Comm_rank(MPI_COMM_WORLD,&me);
>>  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD,-1);
>>
>>  if ( me == 1 ) MPI_Send(&sbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
>> if(me==2) MPI_Send( &mbuf,1,MPI_INT,0,344,MPI_COMM_WORLD);
>> if ( me == 0 ) {
>>
>> MPI_Recv(data,2,MPI_INT,MPI_ANY_SOURCE,344,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>>  }
>>
>>  MPI_Finalize();
>>
>>  return 0;
>> }
>>
>> it can successfuly receive the one sent from processor 1(me==1) but it
>> failed to receive the one sent from processor 2(me==2)
>> mpirun -np 3 hello
>>
>> There is only one receive, so it receives only one message.  When you
>> specify the element count for the receive, you're only specifying the size
>> of the buffer into which the message will be received.  Only after the
>> message has been received can you inquire how big the message actually was.
>>
>> Here is an example:
>>
>> % cat a.c
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> int main(int argc, char **argv) {
>>   int np, me, peer, value;
>>
>>   MPI_Init(&argc,&argv);
>>   MPI_Comm_size(MPI_COMM_WORLD,&np);
>>   MPI_Comm_rank(MPI_COMM_WORLD,&me);
>>
>>   value = me * me + 1;
>>   if ( me == 0 ) {
>> for ( peer = 0; peer < np; peer++ ) {
>>   if ( peer != 0 )
>> MPI_Recv(&value,1,MPI_INT,peer,343,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
>>   printf("peer %d had value %d\n", peer, value);
>> }
>>   }
>>   else MPI_Send(&value,1,MPI_INT,0,343,MPI_COMM_WORLD);
>>
>>   MPI_Finalize();
>>
>>   return 0;
>> }
>> % mpirun -np 3 a.out
>> peer 0 had value 1
>> peer 1 had value 2
>> peer 2 had value 5
>> %
>>
>> Alternatively,
>>
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> #define MAXNP 1024
>> int main(int argc, char **argv) {
>>   int np, me, peer, value, values[MAXNP];
>>
>>   MPI_Init(&argc,&argv);
>>   MPI_Comm_size(MPI_COMM_WORLD,&np);
>>   if ( np > MAXNP ) MPI_Abort(MPI_COMM_WORLD,-1);
>>   MPI_Comm_rank(MPI_COMM_WORLD,&me);
>>   value = me * me + 1;
>>
>>   MPI_Gather(&value, 1, MPI_INT,
>>  values, 1, MPI_INT, 0, MPI_COMM_WORLD);
>>
>>   if ( me == 0 )
>> for ( peer = 0; peer < np; peer++ )
>>   printf("peer %d had value %d\n", peer, values[peer]);
>>
>>   MPI_Finalize();
>>   return 0;
>> }
>> % mpirun -np 3 a.out
>> peer 0 had value 1
>> peer 1 had value 2
>> peer 2 had value 5
>> %
>>
>> Which is better?  Up to you.  The collective routines (like MPI_Gather) do
>> offer MPI implementors (like people developing Open MPI) the opportunity to
>> perform special optimizations (e.g., gather using a binary tree instead of
>> having the root process perform so many receives).
>>
>
>


Re: [OMPI users] openMPI, transfer data from multiple sources to one destination

2008-12-28 Thread Win Than Aung
Please refer to the following code, which sends data to the root from multiple sources.


There is only one receive, so it receives only one message.  When you
specify the element count for the receive, you're only specifying the size
of the buffer into which the message will be received.  Only after the
message has been received can you inquire how big the message actually was.

Here is an example:

% cat a.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me, peer, value;

  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  MPI_Comm_rank(MPI_COMM_WORLD,&me);

  value = me * me + 1;
  if ( me == 0 ) {
for ( peer = 0; peer < np; peer++ ) {
  if ( peer != 0 )
MPI_Recv(&value,1,MPI_INT,peer,343,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
  printf("peer %d had value %d\n", peer, value);
}
  }
  else MPI_Send(&value,1,MPI_INT,0,343,MPI_COMM_WORLD);

  MPI_Finalize();

  return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%

Alternatively,

#include <stdio.h>
#include <mpi.h>

#define MAXNP 1024
int main(int argc, char **argv) {
  int np, me, peer, value, values[MAXNP];

  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  if ( np > MAXNP ) MPI_Abort(MPI_COMM_WORLD,-1);
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  value = me * me + 1;

  MPI_Gather(&value, 1, MPI_INT,
 values, 1, MPI_INT, 0, MPI_COMM_WORLD);

  if ( me == 0 )
for ( peer = 0; peer < np; peer++ )
  printf("peer %d had value %d\n", peer, values[peer]);

  MPI_Finalize();
  return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%
On Sun, Dec 28, 2008 at 7:45 PM, Jack Bryan  wrote:

>  HI,
>
> I need to transfer data from multiple sources to one destination.
> The requirement is:
>
> (1) The sources and destination nodes may work asynchronously.
>
> (2) Each source node generates data package in their own paces.
> And, there may be many packages to send. Whenever, a data package
> is generated , it should be sent to the desination node at once.
> And then, the source node continue to work on generating the next
> package.
>
> (3) There is only one destination node , which must receive all data
> package generated from the source nodes.
> Because the source and destination nodes may work asynchronously,
> the destination node should not wait for a specific source node until
> the source node sends out its data.
>
> The destination node should be able to receive data package
> from anyone source node whenever the data package is available in a
> source node.
>
> My question is :
>
> What MPI function should be used to implement the protocol above ?
>
> If I use MPI_Send/Recv, they are blocking function. The destination
> node have to wait for one node until its data is available.
>
> The communication overhead is too high.
>
> If I use MPI_Bsend, the destination node has to use MPI_Recv to ,
> a Blocking receive for a message .
>
> This can make the destination node wait for only one source node and
> actually other source nodes may have data avaiable.
>
>
> Any help or comment is appreciated !!!
>
> thanks
>
> Dec. 28 2008
>
>
>