[OMPI users] 1.2.4 cross-compilation problem

2007-10-15 Thread Jorge Parra
Hi, I am trying to cross-compile Open MPI 1.2.4 for an embedded system. The development system is an i686 Linux box and the target is a PPC 405-based system. When trying "make all" I get the following error: /bin/sh ../../../libtool --tag=CC --mode=link /opt/powerpc-405-linux/bin/powerpc-405-li ...

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread Christian Bell
By using non-blocking communications, you choose to expose separate initiation and synchronization MPI calls, such that an MPI implementation is free to schedule the communication in any way it wants between these two points (while retaining MPI semantics). To your advantage, this may mea ...
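A minimal sketch of that initiation/completion split, assuming a simple ring exchange; the buffer size, tag and the do_local_work() placeholder are illustrative, not from the original post:

/* Post the non-blocking calls, compute on unrelated data, then synchronize. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

static void do_local_work(double *x, int n)      /* stand-in for real computation */
{
    for (int i = 0; i < n; i++) x[i] = x[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    int rank, size;
    double sendbuf[N], recvbuf[N], work[N];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N; i++) { sendbuf[i] = rank; work[i] = i; }

    int peer = (rank + 1) % size;                /* simple ring neighbour */

    /* Initiation: the library may progress these at any point from here on. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    do_local_work(work, N);                      /* computation to overlap */

    /* Completion: only here do we require the transfers to be finished. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0) printf("recvbuf[0] = %f\n", recvbuf[0]);
    MPI_Finalize();
    return 0;
}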

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread George Bosilca
That's one possible way of achieving the overlap. However, it's not a portable solution: right now, of all the open source libraries, only Open MPI offers this "helper" thread (as far as I know). Another way of achieving the same goal is to have a truly thread-safe MPI library and the us ...
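A small sketch of the prerequisite for that second route, assuming the standard MPI_Init_thread query; whether a given build actually grants MPI_THREAD_MULTIPLE depends on how the library was configured:

/* Request full thread support and check what the library really provides
 * before letting a separate application thread make MPI calls. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        /* Only restricted threading: a user-level communication thread
         * making its own MPI calls is not safe with this build. */
        if (provided == MPI_THREAD_FUNNELED || provided == MPI_THREAD_SERIALIZED)
            printf("Thread support limited to level %d\n", provided);
    } else {
        printf("MPI_THREAD_MULTIPLE available: a dedicated comm thread is possible\n");
    }

    MPI_Finalize();
    return 0;
}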

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread Eric Thibodeau
George, For completeness' sake, from what I understand here, the only way to get "true" communication and computation overlap is to have an "MPI broker" thread which takes care of all communications in the form of synchronous MPI calls. It is that thread which you call asynchronously ...
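A rough sketch of such a broker thread, assuming the library grants MPI_THREAD_MULTIPLE; the ring exchange, message size and compute loop are placeholders rather than an actual GA communication pattern:

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define N 4096

static double sendbuf[N], recvbuf[N];

/* All blocking MPI calls are confined to this broker thread. */
static void *comm_broker(void *arg)
{
    (void)arg;
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    /* Deadlock-free blocking exchange around the ring. */
    MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, right, 0,
                 recvbuf, N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    pthread_t broker;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < N; i++) sendbuf[i] = rank;

    if (provided >= MPI_THREAD_MULTIPLE) {
        pthread_create(&broker, NULL, comm_broker, NULL);

        double acc = 0.0;                        /* main thread keeps computing */
        for (int i = 0; i < 1000000; i++) acc += i * 1e-9;

        pthread_join(broker, NULL);              /* synchronize with the broker */
        if (rank == 0) printf("acc = %f, recvbuf[0] = %f\n", acc, recvbuf[0]);
    } else if (rank == 0) {
        printf("MPI_THREAD_MULTIPLE not available; broker thread skipped\n");
    }

    MPI_Finalize();
    return 0;
}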

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread George Bosilca
Eric, No, there is no documentation about this in Open MPI. However, what I described here is not related to Open MPI; it's a general issue with most/all MPI libraries. There are multiple scenarios where non-blocking communications can improve the overall performance of a parallel appli ...
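One illustrative scenario of this kind (not taken from the thread): a root that must gather a value from every other rank can post all the receives up front and let them complete in whatever order the messages arrive, instead of serializing on blocking receives in rank order. Counts and the tag are hypothetical:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        double *data = malloc((size_t)size * sizeof(double));
        MPI_Request *reqs = malloc((size_t)size * sizeof(MPI_Request));
        int nreq = 0;

        /* Post every receive before any of them is required to finish. */
        for (int src = 1; src < size; src++)
            MPI_Irecv(&data[src], 1, MPI_DOUBLE, src, 42, MPI_COMM_WORLD, &reqs[nreq++]);

        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
        printf("collected %d values\n", nreq);
        free(reqs);
        free(data);
    } else {
        double value = (double)rank;
        MPI_Send(&value, 1, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}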

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread Eric Thibodeau
Hello George, What you're saying here is very interesting. I am presently profiling communication patterns for Parallel Genetic Algorithms and could not figure out why the async versions tended to be worse than their sync counterparts (IMHO, that was counter-intuitive). What you're basical ...
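A bare-bones example of how such a comparison might be timed with MPI_Wtime, assuming a simple ring exchange; the message size, repetition count and pattern are placeholders, not the actual GA workload:

#include <mpi.h>
#include <stdio.h>

#define N     8192
#define REPS  100

int main(int argc, char **argv)
{
    int rank, size;
    double sendbuf[N], recvbuf[N];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    for (int i = 0; i < N; i++) sendbuf[i] = rank;

    /* Blocking exchange (MPI_Sendrecv keeps it deadlock-free). */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int r = 0; r < REPS; r++)
        MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, right, 0,
                     recvbuf, N, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    double t_block = MPI_Wtime() - t0;

    /* Same exchange with non-blocking calls but no work to overlap. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int r = 0; r < REPS; r++) {
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }
    double t_nonblock = MPI_Wtime() - t0;

    if (rank == 0)
        printf("blocking: %.6f s   non-blocking (no overlap): %.6f s\n",
               t_block, t_nonblock);

    MPI_Finalize();
    return 0;
}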

Re: [OMPI users] mca_oob_tcp_peer_try_connect error when checkpoint and restart.

2007-10-15 Thread Josh Hursey
This problem was caused by a couple of things. First is a problem with the default MCA parameters. By default, the global and local snapshot directories are '/tmp', and the mode of file transfer is 'in_place'. 'in_place' file transfer assumes that the global snapshot directory points to an N ...

Re: [OMPI users] Performance of MPI_Isend() worse than MPI_Send() and even MPI_Ssend()

2007-10-15 Thread George Bosilca
Your conclusion is not necessarily/always true. MPI_Isend is just the non-blocking version of the send operation. As one can imagine, an MPI_Isend + MPI_Wait pair lengthens the execution path [inside the MPI library] compared with any blocking point-to-point communication, leading to worse per ...
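A small illustration of that point, with hypothetical counts and tags: MPI_Isend immediately followed by MPI_Wait is semantically a blocking send, but it goes through request allocation and completion tracking that MPI_Send can avoid, so without intervening work it can only match MPI_Send or lose to it:

#include <mpi.h>

#define N 1000

int main(int argc, char **argv)
{
    int rank, size;
    double buf[N] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            MPI_Request req;

            /* Non-blocking send with nothing between initiation and wait:
             * all the bookkeeping of a request, none of the benefit of overlap. */
            MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);

            /* The equivalent blocking call, usually the shorter code path. */
            MPI_Send(buf, N, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(buf, N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}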