Same performance problems with that fix. In fact, if I use the TCP BTL at all right now, Open MPI crashes...

-Mike

George Bosilca wrote:

If there are several networks available between 2 nodes, they will all get selected. That can lead to poor performance when the second network is a high-latency one (like TCP). If you want to ensure that only the IB driver is loaded, add the following line to .openmpi/mca-params.conf:
btl_base_exclude=tcp
This will force the TCP driver to always be unloaded. To use this option, all nodes have to be reachable via IB.
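
For reference, a minimal sketch of the two ways to apply that setting -- the persistent config file and the per-run mpirun flag. The hostnames, process count, and the explicit mvapi/self BTL list in the last command are illustrative assumptions, not taken from this thread:

    # ~/.openmpi/mca-params.conf -- applies to every run as this user
    btl_base_exclude=tcp

    # equivalent one-off form on the mpirun command line
    mpirun --mca btl_base_exclude tcp -np 2 -host node1,node2 ./mpi_bandwidth

    # alternative: name the BTLs to use instead of excluding one
    mpirun --mca btl mvapi,self -np 2 -host node1,node2 ./mpi_bandwidth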

  Thanks,
    george.

On Oct 31, 2005, at 10:50 AM, Mike Houston wrote:

When only sending a few messages, we get reasonably good IB performance,
~500MB/s (MVAPICH is 850MB/s).  However, if I crank the number of
messages up, we drop to 3MB/s(!!!).  This is with the OSU NBCL
mpi_bandwidth test.  We are running Mellanox IB Gold 1.8 with 3.3.3
firmware on PCI-X (Cougar) boards. Everything works with MVAPICH, but
we really need the thread support in Open MPI.

Ideas?  I noticed there is a plethora of runtime options configurable
for mvapi.  Do I need to tweak these to get performance up?
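
(A hedged sketch of how to go about that: ompi_info can list the mvapi BTL's tunable parameters, and individual ones can be overridden on the mpirun command line. The parameter name and value below are illustrative assumptions, not tested advice -- take the real names from the ompi_info output. Hostnames are placeholders.)

    # list the mvapi BTL's MCA parameters and their current values
    ompi_info --param btl mvapi

    # then override candidates on a test run; btl_mvapi_eager_limit is an
    # assumed/illustrative name -- substitute ones reported by ompi_info
    mpirun --mca btl mvapi,self --mca btl_mvapi_eager_limit 65536 \
           -np 2 -host node1,node2 ./mpi_bandwidth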

Thanks!

-Mike


"Half of what I say is meaningless; but I say it so that the other half may reach you"
                                  Kahlil Gibran


