Xing Feng,

A more focused (and certainly more detailed) analysis of the cost of
different algorithms for collective communications can be found in [1], and
more recently in [2].

  George.

[1]
http://icl.cs.utk.edu/projectsfiles/rib/pubs/Pjesivac-Grbovic_PMEO-PDS05.pdf
[2] https://www.cs.utexas.edu/~echan/vpapers/CCPE2007.pdf
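For intuition on where the O(log P) figure in the quoted question comes from (this is an illustration, not a description of any particular MPI implementation's code): a binomial-tree broadcast doubles the number of ranks holding the message each round, so it finishes in ceil(log2 P) rounds; under the Hockney model each round costs roughly alpha + n*beta for an n-byte message. A minimal simulation, assuming root rank 0 and the usual flip-bit-k partner pairing:

```python
import math

def binomial_bcast_rounds(num_procs):
    """Simulate a binomial-tree broadcast from rank 0; return the round count."""
    has_msg = {0}  # ranks that currently hold the message
    rounds = 0
    while len(has_msg) < num_procs:
        # In round k, each holder forwards to the rank differing in bit k,
        # so the set of holders doubles (until it runs out of ranks).
        partners = {rank ^ (1 << rounds) for rank in has_msg}
        has_msg |= {r for r in partners if r < num_procs}
        rounds += 1
    return rounds

for p in (2, 8, 100, 1024):
    print(p, binomial_bcast_rounds(p))  # 2->1, 8->3, 100->7, 1024->10 rounds
```

Each result equals ceil(log2 P), i.e. O(log P) communication rounds — which is exactly the kind of per-algorithm cost analysis worked out in detail in [1] and [2].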


On Wed, Sep 30, 2015 at 3:07 AM, Marc-Andre Hermanns <
m.a.herma...@grs-sim.de> wrote:

> Dear Xing Feng,
>
> there are different algorithms to implement collective communication
> patterns. Beyond the general Big-O complexity, the concrete cost also
> depends on the network topology, message length, etc.
>
> Therefore many MPI implementations switch between different algorithms
> depending on the concrete communication parameters in a call.
>
> A colleague of mine investigated some MPI implementations (though not
> Open MPI) [1]. There you can see how different MPI implementations
> (IBM, ParaStation, Cray) scale differently for a selection of
> collective calls. Maybe that helps a little in understanding the
> performance of your application.
>
> Cheers,
> Marc-Andre
>
> [1] http://dl.acm.org/citation.cfm?doid=2751205.2751216
>
>
> On 30.09.15 07:53, XingFENG wrote:
> > Hi everyone,
> >
> > I am working with Open MPI. When I tried to analyse the performance
> > of my programs, I found it hard to understand the communication
> > complexity of MPI routines.
> >
> > I have found some pages on the Internet, such as
> > http://stackoverflow.com/questions/10625643/mpi-communication-complexity
> >
> >
> > This indicates that the communication complexity of broadcasting an
> > integer is O(log P), where P is the number of processes. But is this
> > correct across different MPI implementations (Open MPI, MPICH, etc.)?
> > Is there an official document discussing such complexity?
> >
> >
> > --
> > Best Regards.
> > ---
> > Xing FENG
> > PhD Candidate
> > Database Research Group
> >
> > School of Computer Science and Engineering
> > University of New South Wales
> > NSW 2052, Sydney
> >
> > Phone: (+61) 413 857 288
> >
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/09/27719.php
> >
>
> --
> Marc-Andre Hermanns
> Jülich Aachen Research Alliance,
> High Performance Computing (JARA-HPC)
> German Research School for Simulation Sciences GmbH
>
> Schinkelstrasse 2
> 52062 Aachen
> Germany
>
> Phone: +49 2461 61 2509
> Fax: +49 241 80 6 99753
> www.grs-sim.de/parallel
> email: m.a.herma...@grs-sim.de
>
>
>
