Here's an interesting data point.  I installed the RHEL RPM version of
OpenMPI 1.2.7-6 for ia64 and ran:

mpirun -np 2 -mca btl self,sm -mca mpi_paffinity_alone 1 -mca
mpi_leave_pinned 1 $PWD/IMB-MPI1 pingpong

With v1.3 and -mca btl self,sm I get ~150MB/sec
With v1.3 and -mca btl self,tcp I get ~550MB/sec

With v1.2.7-6 and -mca btl self,sm I get ~225MB/sec
With v1.2.7-6 and -mca btl self,tcp I get ~650MB/sec
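Incidentally, the brute-force sweep I mentioned below (the "ugly looping
script") doesn't have to be much.  This sketch only echoes the mpirun
command lines instead of executing them; the eager-limit sample points
and the IMB-MPI1 path are assumptions, not recommendations:

```shell
#!/bin/sh
# Sweep the btl choice and btl_sm_eager_limit, printing (not running)
# each mpirun invocation.  Remove the "echo" to actually execute the
# runs.  The eager-limit values are assumed sample points; the
# btl_sm_eager_limit parameter only affects the sm btl, and is simply
# ignored on the tcp runs.
sweep() {
  for btl in self,sm self,tcp; do
    for eager in 4096 8192 16384 32768; do
      echo mpirun -np 2 -mca btl "$btl" \
           -mca btl_sm_eager_limit "$eager" \
           -mca mpi_paffinity_alone 1 \
           -mca mpi_leave_pinned 1 \
           "$PWD/IMB-MPI1" pingpong
    done
  done
}
sweep
```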


On Fri, Jul 31, 2009 at 10:42 AM, Edgar Gabriel <gabr...@cs.uh.edu> wrote:
> Michael Di Domenico wrote:
>>
>> mpi_leave_pinned didn't help; still at ~145MB/sec.
>> btl_sm_eager_limit from 4096 to 8192 pushes me up to ~212MB/sec, but
>> pushing it past that doesn't change it anymore
>>
>> Are there any intelligent programs that can go through and test all
>> the different permutations of tunables for OpenMPI?  Outside of me
>> just writing an ugly looping script...
>
> Actually, there is:
>
> http://svn.open-mpi.org/svn/otpo/trunk/
>
> This tool has been used to tune openib parameters, and I would guess that it
> could be used without any modification to also run NetPIPE over sm...
>
> Thanks
> Edgar
>>
>> On Wed, Jul 29, 2009 at 1:55 PM, Dorian Krause <doriankra...@web.de> wrote:
>>>
>>> Hi,
>>>
>>> --mca mpi_leave_pinned 1
>>>
>>> might help. Take a look at the FAQ for various tuning parameters.
>>>
>>>
>>> Michael Di Domenico wrote:
>>>>
>>>> I'm not sure I understand what's actually happened here.  I'm running
>>>> IMB on an HP Superdome, just comparing the PingPong benchmark:
>>>>
>>>> HP-MPI v2.3
>>>> Max ~ 700-800MB/sec
>>>>
>>>> OpenMPI v1.3
>>>> -mca btl self,sm - Max ~ 125-150MB/sec
>>>> -mca btl self,tcp - Max ~ 500-550MB/sec
>>>>
>>>> Is this behavior expected?  Are there any tunables to get the OpenMPI
>>>> sockets up near HP-MPI?
