Kechagias Apostolos wrote:
It certainly helps; I had no idea about this source.
I hope it is up to date.

As far as I can tell the figure is up to date.
Here it is again in the MPI Forum:
http://www.mpi-forum.org/docs/mpi21-report/node85.htm
http://www.mpi-forum.org/docs/mpi21-report/mpi21-report.htm#Node0

The book itself would be even better, because the text clarifies
the figures and there are more diagrams.

This tutorial from Lawrence Livermore is also helpful and
has good diagrams:
https://computing.llnl.gov/tutorials/mpi/

Gus Correa



2010/12/13 Gus Correa <g...@ldeo.columbia.edu>

    Hi Kechagias

    The figures in Chapter 4 of
    "MPI: The Complete Reference, Vol 1, 2nd Ed.",
    by Snir et al. are good reminders.

    Here are a few:
    http://www.dartmouth.edu/~rc/classes/intro_mpi/mpi_comm_modes2.html#top

    I hope this helps,
    Gus Correa

    Kechagias Apostolos wrote:

        I thought that every process would receive the data as is.
        Thanks that solved my problem.

        2010/12/13 Gus Correa <g...@ldeo.columbia.edu>


           Kechagias Apostolos wrote:

               I have the code that is in the attachment.
               Can anybody explain how to use scatter function?
                It seems that the way I'm using it doesn't do the job.


------------------------------------------------------------------------


               _______________________________________________
               users mailing list
               us...@open-mpi.org

               http://www.open-mpi.org/mailman/listinfo.cgi/users

           #include <stdio.h>
           #include <stdlib.h>
           #include <string.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               int   error_code, err, rank, size, N, i, N1, start, end;
               float W, pi = 0, sum = 0;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);
               MPI_Comm_size(MPI_COMM_WORLD, &size);

               N = atoi(argv[1]);

               int n[N], data[N];

               N1 = N / size;
               W = 1.0 / N;
               //printf("N1:%d W:%f\n", N1, W);

               if (size < 2)
               {
                   printf("You must have 2 or more ranks to complete this action\n");
                   MPI_Abort(MPI_COMM_WORLD, err);
               }
               if (argc < 2)
               {
                   printf("Not enough arguments given\n");
                   MPI_Abort(MPI_COMM_WORLD, err);
               }

               if (rank == 0) { for (i = 0; i < N; i++) n[i] = i; }
               MPI_Scatter(n, N1, MPI_INT, data, N1, MPI_INT, 0, MPI_COMM_WORLD);

               pi = 0;
               start = rank * N1;
               end = (rank + 1) * N1;

               for (i = data[start]; i < data[end]; i++)
                   pi += 4 * W / (1 + (i + 0.5) * (i + 0.5) * W * W);
               //printf("rank:%d tmppi:%f\n", rank, pi);
               //printf("data[start]:%d data[end]:%d ", data[start], data[end]);

               printf("rankN1:%d rank+1N1:%d\n", start, end);
               MPI_Reduce(&pi, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

               if (rank == 0) printf("Pi is:%f size:%d\n", sum, size);
               MPI_Finalize();
           }


           #########
           Hi Kechagias

           If you use MPI_Scatter, the receive buffers start receiving
           at the zero offset (i.e. at data[0]), not at data[start].
           Also, your receive buffer only needs to have size N1, not N.
           I guess the MPI_Scatter call is right.
           The subsequent code needs to change.
           The loop should go from data[0] to data[N1-1].
           (However, be careful with edge cases where the number
           of processes doesn't divide N evenly.)

           Alternatively you could use MPI_Alltoallw to scatter the way
           your code suggests you want to, but that would be overkill.









