Hi Jeff

I believe you misunderstood my comment. My point was that everyone optimizes 
their distribution according to different rules, based on a variety of factors. 
Intel MPI targets a specific market segment, which allows them to set all those 
nice little parameters, such as the eager limit, to something appropriate for 
that segment.

OpenMPI isn’t targeted at any specific segment. So the choice of default params 
is haphazard, primarily based on what made that component work best on the 
system of interest to the specific developer who wrote it. The choice isn’t 
based on market needs, nor is there a product manager driving the overall 
convergence of the params across the code base.

Thus, anyone who really wants to compare performance between the two code 
bases has to take a little time to tune each of them. OMPI generally 
requires a little more effort than IMPI for the reasons stated above.

The various MPI implementations watch each other like hawks. Since each camp 
knows its own code very well, the developers know how to get the most out of 
their own code, and over time they have become generally knowledgeable about how 
to tune their competitors to something reasonable. If any camp finds a competitor 
performing more than a couple of percent better than its own MPI, it immediately 
rectifies the situation.

So when someone sees a giant gap like 20%, you can bet your bottom dollar that 
it is a tuning difference for that specific machine.


> On Nov 24, 2015, at 10:36 AM, Jeff Hammond <jeff.scie...@gmail.com> wrote:
> 
> Ralph:
> 
> Intel MPI supports a wide range of conduits in the same library 
> (https://software.intel.com/sites/default/files/Reference_Manual_1.pdf, 
> section 3.3.1), which can be selected at runtime, so I don't understand your 
> specialization argument.  Are you suggesting that a lack of CPU-specific 
> optimizations is responsible for the ~20% difference here?  I have a hard 
> time believing that, especially if OpenMPI is compiled with the Intel 
> compiler, which will inline optimized memcpy, etc.
> 
> A more likely source of the difference is the eager limit 
> (I_MPI_EAGER_THRESHOLD), which is 262144 bytes in Intel MPI, larger 
> than any of the limits that I can find OpenMPI using.  Perhaps you consider 
> the eager limit a network-specific optimization, although Intel MPI does not 
> treat it that way (the default is not network-specific).  Obviously, it is 
> easy enough for an OpenMPI user to replicate this behavior with MCA settings 
> and see if that makes a difference.
> 
> Of course, since Intel MPI is based on MPICH, there are plenty of intrinsic 
> differences that could explain the performance gap observed.
> 
> Jeff
> 
> On Mon, Nov 23, 2015 at 9:03 AM, Ralph Castain <r...@open-mpi.org> wrote:
> FWIW: the usual reason for differences between IMPI and OMPI lies in OMPI’s 
> default settings, which are focused on ensuring it can run anywhere as 
> opposed to being optimized for a specific platform. You might look at the MCA 
> params relating to your environment and networks to optimize them for your 
> cluster.
> 
> 
>> On Nov 23, 2015, at 8:07 AM, michael.rach...@dlr.de wrote:
>> 
>> Dear Gilles,
>>  
>> In the meantime the administrators have installed (thanks!) OpenMPI-1.10.1 
>> with Intel-16.0.0 on the cluster.
>> I have tested it with our code: it works.
>> The time spent on MPI data transmission was the same as with 
>> OpenMPI-1.8.3 & Intel-14.0.4, but was ~20% higher than with 
>> IMPI-5.1.1 & Intel-16.0.0 for the same case running on 3 nodes 
>> with 8 procs per node.
>>  
>> Greetings
>>   Michael Rachner
>>  
>>  
>> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet
>> Sent: Friday, 20 November 2015 00:53
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with 
>> Intel-Ftn-compiler
>>  
>> Michael,
>> 
>> in the meantime, you can use 'use mpi_f08' instead of 'use mpi'
>> this is really an f90 binding issue, and the f08 bindings are safe
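>> 
>> A minimal sketch of that workaround, assuming an MPI library whose mpi_f08 
>> module was built (the program and variable names here are only illustrative):
>> 
>>   program f08_demo
>>     use mpi_f08      ! f08 buffers are TYPE(*), DIMENSION(..), so scalars and MPI_IN_PLACE are accepted
>>     implicit none
>>     integer :: ivar, rank, ierr
>>     real(8) :: rbuf(4), rdummy(4)
>> 
>>     call MPI_Init(ierr)
>>     call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>> 
>>     ivar = 123
>>     ! a plain scalar variable compiles fine as the buffer argument
>>     call MPI_Bcast(ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
>> 
>>     rbuf = real(rank, 8)
>>     ! MPI_IN_PLACE also resolves against the f08 generic interface
>>     if (rank == 0) then
>>        call MPI_Reduce(MPI_IN_PLACE, rbuf, 4, MPI_REAL8, MPI_MAX, 0, MPI_COMM_WORLD, ierr)
>>     else
>>        call MPI_Reduce(rbuf, rdummy, 4, MPI_REAL8, MPI_MAX, 0, MPI_COMM_WORLD, ierr)
>>     end if
>> 
>>     call MPI_Finalize(ierr)
>>   end program f08_demo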
>> 
>> Cheers,
>> 
>> Gilles
>> 
>> On 11/19/2015 10:21 PM, michael.rach...@dlr.de wrote:
>> Thank You,  Nick and Gilles,
>>  
>> I hope the administrators of the cluster will be so kind as to update 
>> OpenMPI for me (and others) soon.
>>  
>> Greetings
>> Michael
>>  
>> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet
>> Sent: Thursday, 19 November 2015 12:59
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with 
>> Intel-Ftn-compiler
>>  
>> Thanks Nick for the pointer !
>>  
>> Michael,
>>  
>> The good news is you do not have to upgrade ifort,
>> but you do have to update to 1.10.1
>> (Intel 16 changed the way gcc pragmas are handled, and OMPI was made 
>> aware of this in 1.10.1).
>> 1.10.1 fixes many bugs present in 1.10.0, so I strongly encourage everyone 
>> to use 1.10.1.
>>  
>> Cheers,
>>  
>> Gilles
>> 
>> On Thursday, November 19, 2015, Nick Papior <nickpap...@gmail.com> wrote:
>> Maybe I can chip in,
>>  
>> We use OpenMPI 1.10.1 with Intel/2016.1.0.423501 without problems.
>>  
>> I could not get 1.10.0 to work; one reason is: 
>> http://www.open-mpi.org/community/lists/users/2015/09/27655.php
>>  
>> On a side note, please note that if you require scalapack you may need to 
>> follow this approach:
>> https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/590302
>>  
>> 2015-11-19 11:24 GMT+01:00 <michael.rach...@dlr.de>:
>> Sorry, Gilles,
>>  
>> I cannot  update to more recent versions, because what I used is the newest 
>> combination of OpenMPI and Intel-Ftn  available on that cluster.
>>  
>> When looking at the list of improvements on the OpenMPI website for 
>> OpenMPI 1.10.1 compared to 1.10.0, I do not remember seeing this item 
>> listed as corrected.
>>  
>> Greetings
>> Michael Rachner
>>  
>>  
>> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet
>> Sent: Thursday, 19 November 2015 10:21
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with 
>> Intel-Ftn-compiler
>>  
>> Michael,
>> 
>> I remember seeing similar reports.
>> 
>> Could you give the latest v1.10.1 a try?
>> And if that still does not work, can you upgrade the icc suite and give it 
>> another try?
>> 
>> I cannot remember whether this is an ifort bug or the way ompi uses 
>> fortran...
>> 
>> Btw, any reason why you do not
>> use mpi_f08?
>> 
>> HTH
>> 
>> Gilles
>> 
>> michael.rach...@dlr.de wrote:
>> Dear developers of OpenMPI,
>>  
>> I am trying to run our parallelized Ftn-95 code on a Linux cluster with 
>> OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler.
>> In the code I use the module MPI (“use MPI” statements).
>>  
>> However I am not able to compile the code, because of compiler error 
>> messages like this:
>>  
>> /src_SPRAY/mpi_wrapper.f90(2065): error #6285: There is no matching specific 
>> subroutine for this generic subroutine call.   [MPI_REDUCE]
>>  
>>  
>> The problem seems to me to be this:
>>  
>> The interfaces in the module MPI for the MPI routines do not accept a send 
>> or receive buffer that is
>> actually a scalar variable, an array element or a constant (like MPI_IN_PLACE).
>>  
>> Example 1:
>>      This does not work (it gives the compiler error message:  error #6285: 
>> There is no matching specific subroutine for this generic subroutine call ):
>>
>>           ivar = 123     ! <-- ivar is an integer variable, not an array
>>           call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )    ! <-- this should work, but is not accepted by the compiler
>>  
>>      Only this cumbersome workaround works:
>>
>>           ivar = 123
>>           allocate( iarr(1) )
>>           iarr(1) = ivar
>>           call MPI_BCAST( iarr, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )    ! <-- this workaround works
>>           ivar = iarr(1)
>>           deallocate( iarr )
>>  
>> Example 2:
>>      Any call of an MPI routine with MPI_IN_PLACE does not work either, like 
>> this coding:
>>  
>>       if(lmaster) then
>>         call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &    ! <-- this should work, but is not accepted by the compiler
>>                         ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
>>       else  ! slaves
>>         call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
>>                         ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
>>       endif
>>     
>>      This results in this compiler error message:
>>  
>>       /src_SPRAY/mpi_wrapper.f90(2122): error #6285: There is no matching 
>>       specific subroutine for this generic subroutine call.   [MPI_REDUCE]
>>             call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
>> ------------------^
>>  
>>  
>> In our code I observed the bug with MPI_BCAST, MPI_REDUCE and MPI_ALLREDUCE,
>> but there may well be other MPI routines with the same kind of bug.
>>  
>> This bug occurred for:                    OpenMPI-1.10.0  with Intel-16.0.0
>> In contrast, this bug did NOT occur for:  OpenMPI-1.8.8   with Intel-16.0.0
>>                                           OpenMPI-1.8.8   with Intel-15.0.3
>>                                           OpenMPI-1.10.0  with gfortran-5.2.0
>>  
>> Greetings
>> Michael Rachner
>> 
>> 
>> 
>>  
>> -- 
>> Kind regards Nick
>> 
>> 
>> 
>>  
> 
> 
> 
> 
> -- 
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
