Did you run it with -mca mpi_paffinity_alone 1? Given that this is 1.4.1, you can 
also bind processes with -bind-to-socket or -bind-to-core on the mpirun command 
line. Either should give you noticeably better performance.
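
For example, something along these lines (the rank count and ./your_app are just 
placeholders for whatever you're running):

   mpirun -np 16 -mca mpi_paffinity_alone 1 ./your_app
   mpirun -np 16 -bind-to-core ./your_app

The first form sets the binding via the MCA paffinity parameter; the second uses 
the 1.4-series mpirun binding option directly.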

IIRC, MVAPICH defaults to -bind-to-socket. OMPI defaults to no binding.
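
If I remember the 1.4 syntax right, you can confirm what binding you actually end 
up with via --report-bindings, which prints each process's binding at startup 
(node1/node2 and ./your_app are again placeholders):

   mpirun -np 2 -host node1,node2 -bind-to-core --report-bindings ./your_app

A 2-rank run across two nodes like that is also a quick way to re-check the 
0-byte latency over the IB fabric once binding is in place.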


On Feb 15, 2010, at 6:51 PM, Repsher, Stephen J wrote:

> Hello again,
> 
> Hopefully this is an easier question....
> 
> My cluster uses InfiniBand interconnects (Mellanox InfiniHost III and some 
> ConnectX).  I'm seeing terrible and sporadic latency (on the order of 1000 
> microseconds) as measured by the subounce code 
> (http://sourceforge.net/projects/subounce/), but the bandwidth is as 
> expected.  I'm used to seeing only 1-2 microseconds with MVAPICH, and I'm 
> wondering why Open MPI either isn't performing as well or doesn't play well 
> with how subounce measures latency (by timing 0-byte messages).  I've tried 
> playing with a few parameters with no success.  Here's how the build is 
> configured:
> 
> myflags="-O3 -xSSE2"
> ./configure --prefix=/part0/apps/MPI/intel/openmpi-1.4.1 \
>            --disable-dlopen --with-wrapper-ldflags="-shared-intel" \
>            --enable-orterun-prefix-by-default \
>            --with-openib --enable-openib-connectx-xrc --enable-openib-rdmacm \
>            CC=icc CXX=icpc F77=ifort FC=ifort \
>            CFLAGS="$myflags" FFLAGS="$myflags" CXXFLAGS="$myflags" FCFLAGS="$myflags" \
>            OBJC=gcc OBJCFLAGS="-O3"
> Any ideas?
> 
> Thanks,
> Steve
> 
> 

