Indeed, it seems to address what I want!

I read the discussions on the MPI Forum list, which were very interesting.
I had begun to develop a termination protocol of my own before seeing
that MPI_Abort() should be sufficient.
But I didn't post anything, since my case is particular: my computations
are iterative, so at certain safe points I can check whether a
termination message has arrived (via an asynchronous receive posted at
the beginning of the program). The termination message itself has to be
propagated in a "recursive" (tree-like) fashion to keep the number of
messages exchanged between tasks small, because there is no "multicast"
way of sending something to everyone at once.

In my case, I have no special shutdown requirements such as tasks
sharing files, etc., which a standardized API would have to handle in
the general case.
But I still think that an MPI_Quit() would be very useful.
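
For the archives, the suppression that Ralph describes below can be
used either on the mpirun command line or from the environment. A
sketch, assuming the MCA parameter is spelled exactly as Ralph gives
it; "./my_app" is a placeholder, and Open MPI reads MCA parameters from
environment variables carrying the OMPI_MCA_ prefix:

  mpirun -q -np 4 ./my_app               # -q / --quiet on the command line

  export OMPI_MCA_orte_execute_quiet=1   # or set it once in the environment
  mpirun -np 4 ./my_app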

Thank you very much!

.Yves.

On Tuesday 06 April 2010 22:40:29, Jeff Squyres wrote:
> BTW, we diverged quite a bit on this thread -- Yves -- does the
> functionality that was fixed by Ralph address your original issue?
>
> On Apr 2, 2010, at 10:21 AM, Ralph Castain wrote:
> > Testing found that I had missed a spot here, so we weren't fully
> > suppressing messages (including MPI_Abort). So the corrected fix is in
> > r22926, and will be included in tonight's tarball.
> >
> > I also made --quiet be a new MCA param orte_execute_quiet so you can put
> > it in your environment instead of only on the cmd line.
> >
> > HTH
> > Ralph
> >
> > On Apr 2, 2010, at 1:18 AM, Ralph Castain wrote:
> > > Actually, a cmd line option to mpirun already existed for this purpose.
> > > Unfortunately, it wasn't properly being respected, so even knowing
> > > about it wouldn't have helped.
> > >
> > > I have fixed this as of r22925 on our developer's trunk and started the
> > > script to generate a fresh nightly tarball. Give it a little time and
> > > then you can find it on the web site:
> > >
> > > http://www.open-mpi.org/nightly/trunk/
> > >
> > > Use the -q or --quiet option and the message will be suppressed. I will
> > > request that this be included in the upcoming 1.4.2 and 1.5.0 releases.
> > >
> > > On Apr 1, 2010, at 8:38 PM, Yves Caniou wrote:
> > >> For information, I am using the Debian-packaged Open MPI 1.4.1.
> > >>
> > >> Cheers.
> > >>
> > >> .Yves.
> > >>
> > >> On Wednesday 31 March 2010 12:41:34, Jeff Squyres (jsquyres) wrote:
> > >>> At present there is no such feature, but it should not be hard to
> > >>> add.
> > >>>
> > >>> Can you guys be a little more specific about exactly what you are
> > >>> seeing and exactly what you want to see?  (And what version you're
> > >>> working with - I'll caveat my discussion that this may be a
> > >>> 1.5-and-forward thing)
> > >>>
> > >>> -jms
> > >>> Sent from my PDA.  No type good.
> > >>>
> > >>> ----- Original Message -----
> > >>> From: users-boun...@open-mpi.org <users-boun...@open-mpi.org>
> > >>> To: Open MPI Users <us...@open-mpi.org>
> > >>> Sent: Wed Mar 31 05:38:48 2010
> > >>> Subject: Re: [OMPI users] Hide Abort output
> > >>>
> > >>>
> > >>> I have to say this is a very common issue for our users.  They
> > >>> repeatedly report the long Open MPI MPI_Abort() message in help
> > >>> queries and fail to look for the application error message about the
> > >>> root cause.  A short MPI_Abort() message that said "look elsewhere
> > >>> for the real error message" would be useful.
> > >>>
> > >>> Cheers,
> > >>> David
> > >>>
> > >>> On 03/31/2010 07:58 PM, Yves Caniou wrote:
> > >>>> Dear all,
> > >>>>
> > >>>> I am using the MPI_Abort() command in an MPI program.
> > >>>> I would like not to see the note explaining that the command caused
> > >>>> Open MPI to kill all the jobs, and so on.
> > >>>> I thought that I could find an --mca parameter for this, but couldn't
> > >>>> grep one. The only ones I found deal with the delay and with printing
> > >>>> more information (the stack).
> > >>>>
> > >>>> Is there a way to avoid printing the note (other than the
> > >>>> 2>/dev/null trick)? Or to delay this printing?
> > >>>>
> > >>>> Thank you.
> > >>>>
> > >>>> .Yves.
> > >>>

-- 
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
  * in Information Technology Center, The University of Tokyo,
    2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8658, Japan
    tel: +81-3-5841-0540
  * in National Institute of Informatics
    2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
    tel: +81-3-4212-2412 
http://graal.ens-lyon.fr/~ycaniou/
