Hi,
I am trying to cross-compile Open MPI 1.2.4 for an embedded system.
The development system is an i686 Linux box and the target system is
PowerPC 405 based. When trying "make all" I get the following error:
/bin/sh ../../../libtool --tag=CC --mode=link
/opt/powerpc-405-linux/bin/powerpc-405-li
By using non-blocking communications, you choose to expose separate
initiation and synchronization MPI calls such that an MPI
implementation is free to schedule the communication in any way it
wants between these two points (while retaining MPI semantics). To
your advantage, this may mean the communication progresses in the
background while you compute.
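As a concrete illustration, here is a minimal sketch of that
initiate/compute/synchronize pattern; do_independent_work() and
use_data() are placeholder names, not anything from MPI:

#include <mpi.h>

void do_independent_work(void);   /* placeholder: work not touching the buffers */
void use_data(double *d, int n);  /* placeholder: consumer of the received data */

void exchange(double *sendbuf, double *recvbuf, int n, int peer)
{
    MPI_Request reqs[2];

    /* initiation: the library may progress these however it likes */
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    do_independent_work();        /* the overlap happens (if at all) here */

    /* synchronization: the buffers are only safe to reuse after this */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    use_data(recvbuf, n);
}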
That's one possible way of achieving the overlap. However, it's not a
portable solution: right now, of all the open source libraries, only
Open MPI proposes this "helper" thread (as far as I know).
Another way of achieving the same goal is to have a truly thread-safe
MPI library and then drive the communications from a thread of your own.
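For the record, whether that works depends on the thread level your
build actually grants. A quick check, for example:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* ask for full thread support and see what the library really
       grants; many builds (certainly the 1.2 series by default)
       return something lower */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("no MPI_THREAD_MULTIPLE here (got level %d)\n", provided);
    MPI_Finalize();
    return 0;
}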
George,
For completeness' sake, from what I understand here, the only way to
get "true" communication and computation overlap is to have an "MPI broker"
thread which would take care of all communications in the form of sync MPI
calls. It is that thread which you call asynchronously from the computing
code.
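If it helps, here is a rough sketch of what I mean, assuming a build
that grants MPI_THREAD_MULTIPLE. The mailbox/broker names and the
one-slot queue are my own simplification, not an Open MPI facility:

#include <mpi.h>
#include <pthread.h>

/* one-slot "mailbox" standing in for a real work queue */
typedef struct {
    double *buf; int count; int dest; int ready; int done;
    pthread_mutex_t mtx; pthread_cond_t cv;
} mailbox_t;

static mailbox_t box = { 0, 0, 0, 0, 0,
                         PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

/* the broker: performs the blocking sends so the main thread never waits */
static void *broker_main(void *arg)
{
    (void)arg;
    for (;;) {
        double *buf; int count, dest;
        pthread_mutex_lock(&box.mtx);
        while (!box.ready && !box.done)
            pthread_cond_wait(&box.cv, &box.mtx);
        if (!box.ready) {                 /* done and nothing pending */
            pthread_mutex_unlock(&box.mtx);
            break;
        }
        buf = box.buf; count = box.count; dest = box.dest;
        box.ready = 0;
        pthread_mutex_unlock(&box.mtx);
        /* blocking, but it only blocks the broker thread */
        MPI_Send(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    pthread_t broker;
    static double data[1024];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    pthread_create(&broker, NULL, broker_main, NULL);

    if (rank == 0) {
        /* hand the send off to the broker, then go back to computing */
        pthread_mutex_lock(&box.mtx);
        box.buf = data; box.count = 1024; box.dest = 1; box.ready = 1;
        pthread_cond_signal(&box.cv);
        pthread_mutex_unlock(&box.mtx);
        /* ... computation overlapping the send goes here ... */
    } else if (rank == 1) {
        MPI_Recv(data, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* shut the broker down once no more sends are pending */
    pthread_mutex_lock(&box.mtx);
    box.done = 1;
    pthread_cond_signal(&box.cv);
    pthread_mutex_unlock(&box.mtx);
    pthread_join(broker, NULL);
    MPI_Finalize();
    return 0;
}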
Eric,
No, there is no documentation about this in Open MPI. However, what I
described here is not related to Open MPI; it's a general problem
with most/all MPI libraries. There are multiple scenarios where
non-blocking communications can improve the overall performance of a
parallel application.
Hello George,
What you're saying here is very interesting. I am presently profiling
communication patterns for Parallel Genetic Algorithms and could not figure
out why the async versions tended to be worse than their sync counterparts
(imho, that was counter-intuitive). What you're basically describing would
explain those results.
This problem was caused by a couple of things.
First is a problem with the default MCA parameters. By default the
global and local snapshot directories are '/tmp', and the mode of
file transfer is 'in_place'. The 'in_place' file transfer assumes that
the global snapshot directory points to an NFS (or otherwise shared)
file system visible from all nodes.
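For anyone else who hits this, the workaround on my end was to point
the global snapshot directory somewhere shared; something along these
lines, though the parameter names are from the 1.3-era
checkpoint/restart work, so please verify them against your build
with ompi_info:

  # see which snapc/crs parameters your build actually exposes
  ompi_info --param snapc all
  ompi_info --param crs all

  # put the global snapshot directory on a shared file system instead
  # of the default /tmp ('/shared/checkpoints' is just an example path)
  mpirun -am ft-enable-cr \
         -mca snapc_base_global_snapshot_dir /shared/checkpoints \
         -np 4 ./my_app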
Your conclusion is not necessarily/always true. MPI_Isend is just
the non-blocking version of the send operation. As one can imagine, an
MPI_Isend + MPI_Wait increases the execution path [inside the MPI
library] compared with any blocking point-to-point communication,
leading to worse performance when nothing useful is done between the
two calls.
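To make that concrete, a small sketch of the degenerate case, with
buf/n/peer standing in for your actual arguments:

#include <mpi.h>

/* non-blocking send followed immediately by the wait: same message as
   a blocking send, plus the request bookkeeping, and zero overlap */
void send_no_overlap(double *buf, int n, int peer)
{
    MPI_Request req;
    MPI_Isend(buf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

/* the blocking equivalent: a shorter path inside the library */
void send_blocking(double *buf, int n, int peer)
{
    MPI_Send(buf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
}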