Sorry to jump in late on this thread, but here are my thoughts:
1. Your initial email said "threads", not "processes". I assume you actually
meant "processes" (having multiple threads call MPI_FINALIZE is erroneous; see
the sketch after this list).
2. Periodically over the years, we have gotten the infrequent request to
suppo
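
To illustrate point 1, here is a minimal sketch (not taken from your code; the
MPI_Init_thread usage is just an assumption) of the correct pattern: all other
threads finish their MPI work first, and exactly one thread per process calls
MPI_Finalize, exactly once:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request thread support; worker threads may then make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* ... create worker threads, do communication, join the threads ... */

        /* Only one thread per process (here, the main thread) calls
           MPI_Finalize, and it is called exactly once. */
        MPI_Finalize();
        return 0;
    }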
I am sorry for the delay in replying; this week got a bit crazy on me.
I'm guessing that Open MPI is striping across both your eth0 and ib0 interfaces.
You can limit which interfaces it uses with the btl_tcp_if_include MCA param.
For example:
# Just use eth0
mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 ...
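If it is easier, you can set the same restriction without touching the command
line, either via the environment or in your MCA parameter file:

    # environment variable form
    export OMPI_MCA_btl_tcp_if_include=eth0

    # or in $HOME/.openmpi/mca-params.conf
    btl_tcp_if_include = eth0

Equivalently, you can exclude the IB interface instead with
btl_tcp_if_exclude; if you go that route, remember that you are overriding the
default exclude list, so keep the loopback interface in it (e.g., lo,ib0).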
Are you able to upgrade to Open MPI 1.8.x, perchance?
On May 20, 2014, at 9:28 AM, "Cordone, Guthrie" wrote:
> Hello,
>
> I have two linux machines, each running Open MPI 1.6.5. I want to use the
> preload binary command in an appfile to execute a binary from the host on
> both the node and
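
For reference, an appfile that preloads a binary usually looks something along
these lines (the hostnames and paths below are made up for illustration; adjust
them to your setup):

    # appfile: one application context per line
    -H node1 -np 1 --preload-binary /home/me/my_app
    -H node2 -np 1 --preload-binary /home/me/my_app

launched with:

    mpirun --app appfile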
It looks like the answer is "no", Jeff - at least, that didn't solve it for
folks on the bug tracker cited by George. Setting the CFLAGS seemed the only
solution until valgrind resolves the issue, and since that bug is a couple of
years old with no further activity, it seems unlikely that will happen soon.