Following your advice and that in the FAQ pages,
I have added the file
$HOME/.openmpi/mca-params.conf
with:
btl_mvapi_flags=6
mpi_leave_pinned=1
pml_ob1_leave_pinned_pipeline=1
mpool_base_use_mem_hooks=1
The parameter btl_mvapi_eager_limit gives the best results when set
to 8 K or 16 K.
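For anyone following along, the same MCA parameters can also be set for a single run on the mpirun command line instead of the per-user config file. A minimal sketch (the program name ./a.out and process count are placeholders, not from this thread):

```shell
# One-off equivalents of the mca-params.conf settings above
# (Open MPI "-mca <param> <value>" syntax):
mpirun -np 2 \
  -mca btl_mvapi_flags 6 \
  -mca mpi_leave_pinned 1 \
  -mca pml_ob1_leave_pinned_pipeline 1 \
  -mca mpool_base_use_mem_hooks 1 \
  ./a.out
```

Values set on the command line take precedence over the mca-params.conf file, which is convenient when experimenting with btl_mvapi_eager_limit.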
On Thu, 16 Mar 2006, Jean Latour wrote:
> My questions are :
> a) Is OpenMPI doing in this case TCP/IP over IB ? (I guess so)
If the path to the mvapi library is correct, then Open MPI will use mvapi,
not TCP over IB. There is a simple way to check: "ompi_info --param btl
mvapi" will print all the mvapi BTL parameters.
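A concrete version of that check (the grep pattern is only illustrative):

```shell
# If the mvapi component loaded, this prints its MCA parameters;
# an empty list suggests Open MPI fell back to another BTL (e.g. TCP).
ompi_info --param btl mvapi

# Narrow the output to the parameters discussed in this thread:
ompi_info --param btl mvapi | grep -E 'flags|eager_limit'
```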
Hi Jean,
Take a look here:
http://www.open-mpi.org/faq/?category=infiniband#ib-leave-pinned
This should improve performance for micro-benchmarks and some
applications.
Please let me know if this doesn't solve the issue.
Thanks,
Galen
On Mar 16, 2006, at 10:34 AM, Jean Latour wrote:
Hello,
Testing the performance of Open MPI over InfiniBand, I have the following
result:
1) Hardware is: SilverStorm interface
2) Openmpi version is : (from ompi_info)
Open MPI: 1.0.2a9r9159
Open MPI SVN revision: r9159
Open RTE: 1.0.2a9r9159
Open RTE SVN revisi