Well, as it says, your processes called MPI_Init, but at least one of them 
exited without calling MPI_Finalize. That violates the MPI rules and we 
therefore terminate the remaining processes.

Check your code and see how/why that is happening - you probably have a code 
path whereby a process exits without calling MPI_Finalize; see the sketch below.
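
For illustration, here is a minimal sketch of the usual failure mode in a 
Python/MPI script. This assumes mpi4py (your script may use a different 
binding), and the error branch on rank 2 is invented for the example:

  import os
  import sys
  from mpi4py import MPI   # MPI_Init runs when this module is imported

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()

  if rank == 2:            # pretend rank 2 hits an error condition
      # BUG: os._exit() bypasses normal interpreter shutdown, so
      # mpi4py never gets to call MPI_Finalize on this rank - exactly
      # the "exited without calling finalize" case mpirun reports.
      os._exit(1)

  # The other ranks block here waiting for rank 2 until mpirun
  # notices the abnormal exit and kills them.
  result = comm.gather(rank, root=0)

  # Cleaner ways to bail out on an error instead of os._exit():
  #   sys.exit(1)      # normal shutdown, so mpi4py still finalizes
  #   comm.Abort(1)    # deliberately aborts the entire job

An unhandled exception or a plain sys.exit() still lets mpi4py finalize at 
interpreter shutdown; it is the hard exits (os._exit, a segfault in a C 
extension, the batch system killing the process) that produce this message.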


On Sep 24, 2012, at 4:37 PM, mariana Vargas <mmaria...@yahoo.com.mx> wrote:

> Hi all
> 
> I get this error when I run a parallelized Python code on a cluster. Could 
> anyone give me an idea of what is happening? I'm new to this. Thanks...
> 
> mpirun has exited due to process rank 2 with PID 10259 on
> node f01 exiting improperly. There are two reasons this could occur:
> 
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
> 
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
> 
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> 
> Thanks!!
> 
> 
>> Dr. Mariana Vargas Magana
>> Astroparticule et Cosmologie - Bureau 409B
>> PHD student- Université Denis Diderot-Paris 7
>> 10, rue Alice Domon et Léonie Duquet
>> 75205 Paris Cedex - France
>> Tel. +33 (0)1 57 27 70 32
>> Fax. +33 (0)1 57 27 60 71
>> mari...@apc.univ-paris7.fr
