Thanks Ralph and Gilles.

Thanks,
Murali
From: users <users-boun...@open-mpi.org> on behalf of Ralph Castain <r...@open-mpi.org>
Reply-To: Open MPI Users <us...@open-mpi.org>
Date: Sunday, April 3, 2016 at 6:41 AM
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Question on MPI_Comm_spawn timing

I honestly don't think anyone has been concerned about the speed of MPI_Comm_spawn, so there hasn't been any effort made to optimize it.

On Apr 3, 2016, at 2:52 AM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:

Hi,

Performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor, especially compared to v1.6. Generally speaking, I cannot recommend v1.6 since it is no longer maintained; that being said, if performance is critical, you might want to give it a try. I did not run any performance measurements with master, especially since we moved to PMIx, so that might be worth a try too.

Cheers,
Gilles

On Sunday, April 3, 2016, Emani, Murali <ema...@llnl.gov> wrote:

Hi all,

I am trying to evaluate the time taken by the MPI_Comm_spawn operation in the latest version of Open MPI. Here a parent communicator (all processes, not just the root) spawns one new child process (a separate executable). The code I'm executing is:

main() {
    ...                              // MPI initialization
    start = MPI_Wtime();
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &inter_communicator, MPI_ERRCODES_IGNORE);
    end = MPI_Wtime();
    printf("spawn time: %f", end - start);
    MPI_Barrier(inter_communicator);  // spawn is collective, but still want to ensure it with a barrier
    ...                              // MPI finalize
}

In child.c:

main() {
    ...                               // MPI initialization
    MPI_Comm_get_parent(&parentcomm); // gets the inter-communicator
    MPI_Barrier(parentcomm);
    ...                               // MPI finalize
}

My observation is that the spawn time is very high (almost 80% of the total program execution time), and it increases sharply with the number of processes in the parent communicator. Is this method correct, and is the MPI_Comm_spawn operation expected to be this expensive? I have also tried MPI_Comm_spawn_multiple, but it measures about the same time. Could someone kindly guide me on this issue?
Thanks,
Murali
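For readers who want to reproduce the measurement, here is a minimal, self-contained sketch of the parent-side timing. It is not code from the thread: the pre-timing barrier, the MPI_Reduce over MPI_MAX, and the names (spawn_timing.c, max_elapsed) are illustrative assumptions. Since MPI_Comm_spawn is collective over the parent communicator, the maximum elapsed time across ranks is arguably the cost the job actually pays.

    /* spawn_timing.c - illustrative sketch, not from the original post.
     * Assumes a "./child" executable exists, as in the example above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;
        double start, elapsed, max_elapsed;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Synchronize first so ranks that arrive late are not charged to the spawn. */
        MPI_Barrier(MPI_COMM_WORLD);

        start = MPI_Wtime();
        MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
        elapsed = MPI_Wtime() - start;

        /* The spawn is collective; report the slowest rank's time. */
        MPI_Reduce(&elapsed, &max_elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("max spawn time across parent ranks: %f s\n", max_elapsed);

        /* Matches the MPI_Barrier(parentcomm) in the child program from the thread. */
        MPI_Barrier(intercomm);

        MPI_Finalize();
        return 0;
    }

Paired with the child program shown in the question (MPI_Comm_get_parent followed by MPI_Barrier on the parent communicator), this could be built and run with, for example, "mpicc spawn_timing.c -o parent" and "mpirun -np 4 ./parent", assuming ./child has already been built in the same directory.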