On Dec 7, 2006, at 5:04 PM, Aaron McDonough wrote:

> It turns out that all our IB blades are EM64T - it's just that some have
> i686 OS's and some x86_64 OS's. So I think we'll move to all x86_64
> installs on IB hosts. I guess if we make the OpenMPI a 32-bit build, and
> link against 32-bit IB drivers (my interpretation of the release notes
> is that this is supported by the TopSpin drivers for EM64T), then the
> same application could run on any host, i686 or x86_64. Can this be done

That *should* work, although I have not personally tried it.

> with the OFED drivers? I assume that OpenMPI doesn't handle the same

Yes, the OFED stuff should work in both 32 and 64 bit mode. Open MPI doesn't care either way; it just has to be built in one or the other (and linked against the right support libraries).
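
If it helps, a quick sanity check of which flavor you actually built is to have a trivial MPI program report its pointer size at run time. This is just a sketch (the file name is made up, nothing here is specific to Open MPI):

/* bitcheck.c -- report the word size of the MPI build.
 * Compile with the mpicc from the install you want to test:
 *   mpicc bitcheck.c -o bitcheck        (hypothetical file name) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 4-byte pointers => 32-bit build, 8-byte pointers => 64-bit build */
    printf("rank %d: sizeof(void *) = %d => %s-bit build\n",
           rank, (int) sizeof(void *),
           sizeof(void *) == 4 ? "32" : "64");

    MPI_Finalize();
    return 0;
}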

> MPI_COMM_WORLD with different interconnects (TCP vs. IB) - is that right?

Actually, it will. Open MPI will compute reachability on a per-peer-pair basis and use all available networks to reach them. So if you only have TCP connectivity to some hosts, that's what OMPI will use. If you have IB between other hosts, that's what OMPI will use.
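
To be clear, nothing in your application changes based on which network(s) get used for a given peer pair; a plain MPI_COMM_WORLD program like the sketch below runs the same whether a particular hop ends up on TCP, IB, or shared memory (purely illustrative):

/* ring.c -- pass a value one hop around MPI_COMM_WORLD.  Open MPI
 * decides per peer pair which transport each hop uses; the
 * application code is identical in all cases. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, right, left, sendval, recvval;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    right = (rank + 1) % size;
    left  = (rank + size - 1) % size;
    sendval = rank;

    /* MPI_Sendrecv is deadlock-free regardless of which transport
     * (or mix of transports) is underneath */
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recvval, left);

    MPI_Finalize();
    return 0;
}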

However, I would *not* attempt to mix MVAPI and OFED hosts in a single run. This has definitely not been tried. It may or may not work; there may be a few issues with mixing differently-flavored IB hosts in the same job.

Be warned: although network heterogeneity can seem like a handy feature, be careful about using this with real applications. Mixing slower and faster networks into a tightly-synchronized MPI application can effectively reduce the overall performance to that of the slower network. If you have a loosely-coupled MPI application, this kind of scenario can be useful. So the overall usefulness is likely to be application-dependent.
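
For what it's worth, the effect is easy to see with something as simple as timing a barrier loop: every iteration completes only when the slowest peer path has been traversed, so a single TCP hop in an otherwise-IB job sets the pace. Again, just an illustrative sketch:

/* barrier_time.c -- crude illustration of why a tightly-synchronized
 * job runs at the speed of its slowest network: each MPI_Barrier
 * waits on the slowest path in MPI_COMM_WORLD. */
#include <stdio.h>
#include <mpi.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank, i;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* warm-up barrier */
    start = MPI_Wtime();
    for (i = 0; i < ITERS; ++i) {
        MPI_Barrier(MPI_COMM_WORLD);
    }
    elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        printf("average barrier time: %g us\n",
               1.0e6 * elapsed / ITERS);
    }

    MPI_Finalize();
    return 0;
}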

Hope that helps!

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
