On Jan 23, 2009, at 6:32 AM, Gabriele Fatigati wrote:

I've noted that OpenMPI has an asynchronous behaviour in the collective calls.
The processors don't wait for the other procs to arrive in the call.

That is correct.

This behaviour can sometimes cause problems in jobs with a lot of
processors.

Can you describe what exactly you mean? The MPI spec specifically allows this behavior; OMPI made specific design choices and optimizations to support this behavior. FWIW, I'd be pretty surprised if any optimized MPI implementation defaults to fully synchronous collective operations.
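To make the asynchrony concrete, here's a minimal sketch (my illustration, not part of the original exchange). With a small message, the root will typically return from MPI_Bcast via an eager send well before a slow rank has even entered the call; whether it actually does depends on the implementation and the message size:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Simulate one rank arriving late at the collective. */
    if (rank == 1) {
        sleep(5);
    }

    double start = MPI_Wtime();
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d left MPI_Bcast after %.2f s\n",
           rank, MPI_Wtime() - start);

    MPI_Finalize();
    return 0;
}

Run with at least two processes; on most builds rank 0 reports a tiny elapsed time even though rank 1 enters the broadcast five seconds later.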

Is there an OpenMPI parameter to lock all processes in the collective
call until it is finished? Otherwise I have to insert many MPI_Barrier
calls in my code, which is very tedious and strange.

As you have noted, MPI_Barrier is the *only* collective operation that MPI guarantees to have any synchronization properties (and it's a fairly weak guarantee at that: no process will exit the barrier until every process has entered the barrier -- but there's no guarantee that all processes leave the barrier at the same time).
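If you really do need collective exits to be ordered, the usual idiom is to follow the collective with a barrier. A sketch (bcast_synced is a name I made up for illustration):

#include <mpi.h>

/* No rank can pass this pair until every rank has completed the
 * broadcast; the barrier supplies the (weak) synchronization
 * guarantee that the broadcast itself does not make. */
static void bcast_synced(void *buf, int count, MPI_Datatype type,
                         int root, MPI_Comm comm)
{
    MPI_Bcast(buf, count, type, root, comm);
    MPI_Barrier(comm);
}

Note that even this does not make all ranks leave at the same instant; it only bounds how far apart they can be.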

Why do you need your processes to exit collective operations at the same time?

--
Jeff Squyres
Cisco Systems
