FWIW: you might want to take an MPI tutorial; they're really helpful for learning MPI's capabilities and how to use its primitives. The NCSA has two excellent MPI tutorials (intro and advanced); both require free registration:

    http://ci-tutor.ncsa.uiuc.edu/login.php


On Dec 24, 2008, at 10:52 PM, Win Than Aung wrote:

I got the solution: I just needed to set an appropriate tag on each send and receive.
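Something along these lines works (a rough sketch; TAG_EVEN and TAG_ODD are arbitrary values I picked, and the sums are placeholders for the ones actually computed from each file):

#include <stdio.h>
#include <mpi.h>

#define TAG_EVEN 100   /* arbitrary, but distinct, tag values */
#define TAG_ODD  101

int main(int argc, char **argv) {
  int np, me, msgs;
  double even_sum, odd_sum, recv_sum;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  MPI_Comm_rank(MPI_COMM_WORLD, &me);

  /* placeholders for the "0th + 2nd" and "1st + 3rd" sums from each file */
  even_sum = 2.0 * me;
  odd_sum  = 2.0 * me + 1.0;

  if (me != 0) {
    MPI_Send(&even_sum, 1, MPI_DOUBLE, 0, TAG_EVEN, MPI_COMM_WORLD);
    MPI_Send(&odd_sum,  1, MPI_DOUBLE, 0, TAG_ODD,  MPI_COMM_WORLD);
  } else {
    /* two messages from each of the np-1 other ranks, in any order;
       the tag on each received message says which file it belongs in */
    for (msgs = 0; msgs < 2 * (np - 1); msgs++) {
      MPI_Recv(&recv_sum, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      if (status.MPI_TAG == TAG_EVEN)
        printf("even sum from rank %d: %f\n", status.MPI_SOURCE, recv_sum);
      else
        printf("odd sum from rank %d: %f\n", status.MPI_SOURCE, recv_sum);
    }
  }

  MPI_Finalize();
  return 0;
}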
Sorry for asking.
Thanks,
winthan

On Wed, Dec 24, 2008 at 10:36 PM, Win Than Aung <keshu...@gmail.com> wrote:
Thanks, Eugene, for your example; it helps me a lot.
I've bumped into one more problem.
I have a total of six files, each containing real and imaginary values, with content like the following:
"
1.001212     1.0012121  //0th
1.001212     1.0012121  //1st
1.001212     1.0012121  //2nd
1.001212     1.0012121  //3rd
1.001212     1.0012121  //4th
1.001212     1.0012121 //5th
1.001212     1.0012121 //6th
"
I run "mpirun -np 6 a.out", so each processor reads one file. Each processor adds the even-indexed values (0th and 2nd) and sends that sum to the root processor, which saves the sums as "file_even_add.dat"; each processor also adds the odd-indexed values (1st and 3rd) and sends that sum to the root processor (rank 0), which saves them as "file_odd_add.dat".

char send_buffer[1000];

char recv_buffer[1000];
FILE* filepteven;
FILE* fileptodd;
MPI_Status status;
if (mpi_my_id == root)
{
    filepteven = fopen("C:\\fileeven.dat", "w");
    fileptodd  = fopen("C:\\fileodd.dat", "w");
    int peer = 0;
    for (peer = 0; peer < mpi_total_size; peer++)
    {
        if (peer != root)
        {
            MPI_Recv(recv_buffer, MAX_STR_LEN, MPI_BYTE, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        }
        fprintf(filepteven, "%s \n", recv_buffer);
    }
}

My question is: how do I know which receive buffer holds the even sums and which holds the odd sums? In which order do they get sent?
Thanks,
winthan

On Tue, Dec 23, 2008 at 3:53 PM, Eugene Loh <eugene....@sun.com> wrote:
Win Than Aung wrote:

Thanks for your reply, Jeff. So I tried the following:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me, sbuf = -1, rbuf = -2, mbuf = 1000;
  int data[2];
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  MPI_Comm_rank(MPI_COMM_WORLD, &me);
  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD, -1);

  if ( me == 1 ) MPI_Send(&sbuf, 1, MPI_INT, 0, 344, MPI_COMM_WORLD);
  if ( me == 2 ) MPI_Send(&mbuf, 1, MPI_INT, 0, 344, MPI_COMM_WORLD);
  if ( me == 0 ) {
    MPI_Recv(data, 2, MPI_INT, MPI_ANY_SOURCE, 344, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Finalize();

  return 0;
}

It successfully receives the one sent from processor 1 (me == 1), but it fails to receive the one sent from processor 2 (me == 2):
mpirun -np 3 hello
There is only one receive, so it receives only one message. When you specify the element count for the receive, you're only specifying the size of the buffer into which the message will be received. Only after the message has been received can you inquire how big the message actually was.
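For instance, here is a minimal sketch of that last point (the message text and the 344 tag are arbitrary): receive into an oversized buffer, then ask the returned status, via MPI_Get_count, how much actually arrived.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv) {
  char buffer[1000];
  int np, me, count;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  MPI_Comm_rank(MPI_COMM_WORLD, &me);
  if ( np < 2 ) MPI_Abort(MPI_COMM_WORLD, -1);

  if ( me == 1 ) {
    const char *msg = "hello from rank 1";
    /* the sender knows the true length; the receiver does not have to */
    MPI_Send((void *)msg, (int)strlen(msg) + 1, MPI_BYTE, 0, 344, MPI_COMM_WORLD);
  }
  if ( me == 0 ) {
    /* 1000 is only the capacity of the buffer, not the expected size */
    MPI_Recv(buffer, 1000, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* only now can we ask how big the message actually was */
    MPI_Get_count(&status, MPI_BYTE, &count);
    printf("received %d bytes from rank %d: %s\n",
           count, status.MPI_SOURCE, buffer);
  }

  MPI_Finalize();
  return 0;
}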

Here is an example:


% cat a.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me, peer, value;


  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  MPI_Comm_rank(MPI_COMM_WORLD,&me);

  value = me * me + 1;
  if ( me == 0 ) {
    for ( peer = 0; peer < np; peer++ ) {
      if ( peer != 0 ) MPI_Recv(&value, 1, MPI_INT, peer, 343, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      printf("peer %d had value %d\n", peer, value);
    }
  }
  else MPI_Send(&value,1,MPI_INT,0,343,MPI_COMM_WORLD);

  MPI_Finalize();

  return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%

Alternatively,


#include <stdio.h>
#include <mpi.h>

#define MAXNP 1024

int main(int argc, char **argv) {
  int np, me, peer, value, values[MAXNP];


  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&np);
  if ( np > MAXNP ) MPI_Abort(MPI_COMM_WORLD,-1);

  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  value = me * me + 1;

  MPI_Gather(&value, 1, MPI_INT,
             values, 1, MPI_INT, 0, MPI_COMM_WORLD);

  if ( me == 0 )
    for ( peer = 0; peer < np; peer++ )
      printf("peer %d had value %d\n", peer, values[peer]);

  MPI_Finalize();
  return 0;
}
% mpirun -np 3 a.out
peer 0 had value 1
peer 1 had value 2
peer 2 had value 5
%

Which is better? Up to you. The collective routines (like MPI_Gather) do offer MPI implementors (like people developing Open MPI) the opportunity to perform special optimizations (e.g., gather using a binary tree instead of having the root process perform so many receives).
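For the curious, here is a rough sketch of what such a tree-based gather could look like if written by hand (just an illustration of the idea, not how Open MPI actually implements MPI_Gather; it assumes the root is rank 0):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int np, me, peer, step;
  int *vals;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  MPI_Comm_rank(MPI_COMM_WORLD, &me);

  vals = malloc(np * sizeof(int));
  vals[0] = me * me + 1;               /* this rank's contribution */

  /* binomial tree: after the round with this step, a rank that is a
     multiple of 2*step holds the values of ranks me .. me+2*step-1
     (clipped to np); rank 0 needs only ceil(log2(np)) rounds instead
     of np-1 receives */
  for (step = 1; step < np; step *= 2) {
    if (me % (2 * step) == 0) {
      if (me + step < np) {
        int n = step;                  /* how many values the partner holds */
        if (me + 2 * step > np) n = np - (me + step);
        MPI_Recv(vals + step, n, MPI_INT, me + step, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }
    } else {
      int n = step;                    /* how many values I hold */
      if (me + step > np) n = np - me;
      MPI_Send(vals, n, MPI_INT, me - step, 0, MPI_COMM_WORLD);
      break;                           /* once sent, this rank is done */
    }
  }

  if (me == 0)
    for (peer = 0; peer < np; peer++)
      printf("peer %d had value %d\n", peer, vals[peer]);

  free(vals);
  MPI_Finalize();
  return 0;
}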

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Jeff Squyres
Cisco Systems
