Yes, that was a typo: mpi_finalize terminates all MPI processing, not the processes themselves.
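
For example, a minimal sketch of the intended lifecycle (assuming the Fortran "use mpi" module; the program name is just a placeholder) -- the OS process keeps running after the call, only the MPI layer is shut down:

PROGRAM finalize_demo
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, irank

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)

    ! ... parallel work here ...

    ! Make sure every rank has arrived before tearing MPI down together.
    CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
    CALL MPI_FINALIZE(ierr)

    ! The process itself is still alive here; only further MPI calls
    ! are now illegal.
    print *, "rank", irank, "is past mpi_finalize"
END PROGRAM finalize_demo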

On Tue, Feb 1, 2011 at 3:25 AM, Jeff Squyres (jsquyres)
<jsquy...@cisco.com> wrote:

> That's not quite right - a call to MPI_Finalize does not terminate any
> processes.
>
> If you're seeing this kind of instability, check the usual suspects such as
> ensuring you have a totally homogeneous environment (same OS, same version
> of OMPI, etc).
>
> Sent from my PDA. No type good.
>
> On Feb 1, 2011, at 4:03 AM, "David Zhang" <solarbik...@gmail.com> wrote:
>
> According to the mpi_finalize documentation, a call to mpi_finalize
> terminates all processes.  I have run into this problem before, where one
> process calls mpi_finalize before the other processes reach the same line
> of code, causing errors/hang-ups.  Putting an mpi_barrier(mpi_comm_world)
> before mpi_finalize would do the trick.
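>
> In your fragment that would look something like this (just a sketch; I'm
> assuming the declarations and irank/ierr are already set up as in your
> program):
>
> IF (irank .eq. 0) THEN
>     CALL print_results1(variable)
>     CALL print_results2(more_variable)
> END IF
> ! Hold every rank here until all of them arrive, then shut down together.
> CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
> CALL MPI_FINALIZE(ierr)
> END PROGRAM calculation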
>
> On Mon, Jan 31, 2011 at 11:40 PM, abc def <cannonj...@hotmail.co.uk> wrote:
>
>>  Hello,
>>
>> I'm having trouble with some MPI programming in Fortran, using Open MPI.
>> It seems that my program doesn't work unless I print some unrelated text
>> to the screen. For example, if I have this situation:
>>
>> *** hundreds of lines cut ***
>> IF (irank .eq. 0) THEN
>>     CALL print_results1(variable)
>>     CALL print_results2(more_variable)
>> END IF
>> print *, "done", irank
>> CALL MPI_FINALIZE(ierr)
>> END PROGRAM calculation
>>
>> The results are not printed unless I include that penultimate
>> print *, "done", irank line.
>> Also, despite seeing that all ranks reach the print statement, the program
>> hangs, as if they have not all reached MPI_FINALIZE.
>>
>> Can anyone help me? Why does it do this?
>>
>> There have also been many times when the program would crash unless I
>> included a print statement in a loop. I've been doing Fortran programming
>> for a while, and this is my nightmare debugging scenario: I've never been
>> able to figure out why simply printing something magically fixes the
>> program, and I usually end up going back to a serial solution, which is
>> really slow.
>>
>> If anyone might be able to help me, I would be really really grateful!!
>>
>> Thank you.
>>
>> Tom
>>
>>
>
>
>
> --
> David Zhang
> University of California, San Diego
>



-- 
David Zhang
University of California, San Diego
