I honestly don’t think anyone has been concerned about the speed of
MPI_Comm_spawn, and so there hasn’t been any effort made to optimize it.
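
One note on the measurement itself: MPI_Comm_spawn is collective over the parent
communicator, but each rank can leave the call at a different time, so it is usually
more informative to report the slowest rank rather than whichever rank happens to
print first. A rough, untested sketch of that (it reuses the ./child executable from
the original post and is only meant to illustrate the reduction):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm;
    int rank;
    double start, local, worst;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    start = MPI_Wtime();
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
    local = MPI_Wtime() - start;

    /* per-rank spawn times differ; the maximum is the effective collective cost */
    MPI_Reduce(&local, &worst, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("worst-case spawn time: %f s\n", worst);

    MPI_Barrier(intercomm); /* matches the MPI_Barrier(parentcomm) in child.c */

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with mpirun as usual, the same harness should let you
compare releases (e.g. v1.10 vs. master) directly.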


> On Apr 3, 2016, at 2:52 AM, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> 
> Hi,
> 
> performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor, 
> especially compared to v1.6
> 
> generally speaking, I cannot recommend v1.6 since it is no longer maintained.
> that being said, if performance is critical, you might want to give it a try.
> 
> I have not run any performance measurements with master, especially since we
> moved to PMIx; that might be worth a try too.
> 
> Cheers,
> 
> Gilles
> 
> On Sunday, April 3, 2016, Emani, Murali <ema...@llnl.gov> wrote:
> Hi all,
> 
> I am trying to evaluate the time taken by the MPI_Comm_spawn operation in the
> latest version of Open MPI. Here a parent communicator (all processes, not
> just the root) spawns one new child process (a separate executable). The
> code I'm executing is:
> 
> main() {
>   ...
>   // MPI initialization
>   ...
>   start = MPI_Wtime();
>   MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
>                  MPI_COMM_WORLD, &inter_communicator, MPI_ERRCODES_IGNORE);
>   end = MPI_Wtime();
> 
>   printf("spawn time: %f\n", end - start);
>   MPI_Barrier(inter_communicator); // spawn is collective, but still want to
>                                    // ensure it using a barrier
>   ...
>   // MPI finalize
> }
> 
> 
> In child.c:
> 
> main() {
>   ...
>   // MPI initialization
>   ...
>   MPI_Comm_get_parent(&parentcomm); // gets the inter-communicator
>   MPI_Barrier(parentcomm);
>   ...
>   // MPI finalize
> }
> 
> My observation is that the spawn time is very high (almost 80% of the
> total program execution time), and it increases exponentially with the number
> of processes in the parent communicator. Is this method correct, and is
> the MPI_Comm_spawn operation expensive?
> I have also tried MPI_Comm_spawn_multiple, but it measures about the same
> time.
> 
> Could someone kindly guide me on this issue?
> 
> Thanks,
> Murali
> 
> 
> 
