ouldn't
> do, and doesn't do it as well. You would be far better off just adding
> --bind-to-core to the mpirun cmd line.
"mpirun -h" says that it is the default, so there is not even something to do?
I don't even have to add "--mca mpi_paffinity_alone 1" ?
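(Not from the thread, but possibly useful for the binding question above: a minimal sketch, assuming Linux and glibc, that asks the kernel which cores each rank is actually allowed to run on, so you can check whether the binding you expect really took effect. sched_getaffinity is a Linux call, not part of MPI.)

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sched.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, c;
    cpu_set_t mask;
    char buf[8192] = "";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask the kernel for this process's allowed-CPU mask. */
    sched_getaffinity(0, sizeof(mask), &mask);
    for (c = 0; c < CPU_SETSIZE; c++)
        if (CPU_ISSET(c, &mask))
            sprintf(buf + strlen(buf), " %d", c);

    printf("rank %d may run on cores:%s\n", rank, buf);

    MPI_Finalize();
    return 0;
}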
On Wednesday 28 July 2010 11:34:13, Ralph Castain wrote:
> On Jul 27, 2010, at 11:18 PM, Yves Caniou wrote:
> > On Wednesday 28 July 2010 06:03:21, Nysal Jan wrote:
> >> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other
> >> enviro
doesn't answer my question.
.Yves.
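(For reference, a minimal sketch of what the OMPI_COMM_WORLD_RANK suggestion quoted above looks like in practice: the value is simply read from the environment with getenv, even before MPI_Init. This assumes the process was started by Open MPI's mpirun; otherwise the variable is not set.)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Exported by Open MPI's launcher for each process it starts. */
    const char *rank = getenv("OMPI_COMM_WORLD_RANK");

    if (rank != NULL)
        printf("launcher says my rank is %s\n", rank);
    else
        printf("OMPI_COMM_WORLD_RANK not set (not started by mpirun?)\n");
    return 0;
}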
> --Nysal
>
> On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou wrote:
> > Hi,
> >
> > I have some performance issues on a parallel machine composed of nodes of
> > 16 procs each. The application is launched on multiples of 16 proc
Hi,
I have some performance issues on a parallel machine composed of nodes of 16
procs each. The application is launched on multiples of 16 procs for given
numbers of nodes.
I was told by people using MX MPI on this machine to attach a script to
mpiexec that runs 'numactl' on the processes, in order to make
On Wednesday 02 June 2010 15:55:37, you wrote:
> On Jun 2, 2010, at 9:50 AM, Yves Caniou wrote:
> > I copy the output of my last mail at the end of this one, to avoid
> > searching. Here is the line that I used to configure OMPI:
> >
> > $>./configure --pref
I forgot the list...
-
On Wednesday 02 June 2010 14:59:46, you wrote:
> On Jun 2, 2010, at 8:03 AM, Ralph Castain wrote:
> > I built it with gcc 4.2.1, though - I know we have a problem with shared
> > memory hanging when built with gcc 4.4.x, so I wonder if the issue here
> > is y
v1.4.2)
MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2)
MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2)
MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2)
--
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team pr
l variable?
Thank you!
.Yves.
--
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
* in Information Technology Center, The University of Tokyo,
2-11-16 Yayoi, Bunkyo
'm clarifying)
>
> On May 24, 2010, at 2:53 AM, Yves Caniou wrote:
> > I rechecked, but didn't see anything wrong.
> > Here is how I set my environment. Tkx.
> >
> > $>mpicc --v
> > Using built-in specs.
> > COLLECT_GCC=//home/p10015/gcc/bin/x8
e from 1.4.1,
> and that your environment is pointing to the right place.
>
> On May 24, 2010, at 12:15 AM, Yves Caniou wrote:
> > Dear All,
> > (follows a previous mail)
> >
> > I don't understand the strange behavior of this small code: sometimes it
> >
#include <stdio.h>
#include <mpi.h>

int
main(int argc, char *argv[])
{
int my_num, mpi_size ;
int flag ;
MPI_Init(&argc, &argv) ;
MPI_Comm_rank(MPI_COMM_WORLD, &my_num);
printf("%d calls MPI_Finalize()\n\n\n", my_num) ;
MPI_Finalize() ;
MPI_Finalized(&flag) ; /* flag is set once MPI_Finalize has completed */
printf("MPI finalized: %d\n", flag) ;
return 0 ;
}
---
--
Yve
Dear all,
I use the following code:
#include "stdlib.h"
#include "stdio.h"
#include "mpi.h"
#include "math.h"
#include "unistd.h" /* sleep */
int my_num, mpi_size ;
int
main(int argc, char *argv[])
{
MPI_Init(&argc, &argv) ;
MPI_Comm_rank(MPI_COMM_WORLD, &my_num);
MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
out without problem.
>
> FWIW, you should also be able to invoke the MPI_Finalized function to see
> if MPI_Finalize has already been invoked.
>
> On May 7, 2010, at 12:54 AM, Yves Caniou wrote:
> > Dear All,
> >
> > My parallel application ends when each process rec
c can make the call to
MPI_Finalize() and obtain an execution without error messages?
Thank you for any help.
.Yves.
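(Not from the thread: a minimal sketch of the MPI_Finalized guard suggested earlier in this thread, so a teardown routine can be called more than once without making MPI calls after MPI_Finalize has completed. The cleanup() helper is hypothetical.)

#include <mpi.h>

/* Hypothetical teardown helper that may be reached from several places. */
static void cleanup(void)
{
    int finalized = 0;
    MPI_Finalized(&finalized);
    if (!finalized) {
        /* Safe: MPI_Finalize has not run yet. */
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
    }
    /* Otherwise skip MPI entirely and do only non-MPI cleanup. */
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    cleanup();
    cleanup();  /* second call is a no-op thanks to the guard */
    return 0;
}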
--
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS in Japan French Laboratory of Informatics (JFL
> > then you can find it on the web site:
> > >
> > > http://www.open-mpi.org/nightly/trunk/
> > >
> > > Use the -q or --quiet option and the message will be suppressed. I will
> > > request that this be included in the upcoming 1.4.2 and 1.5.0 rel
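(To make the discussion concrete: a minimal sketch, not from the thread, of the situation being described. The application prints its own diagnostic and then calls MPI_Abort; the long note that appears afterwards is added by the Open MPI runtime, which is why it is mpirun's -q/--quiet option mentioned above, and not anything in the application code, that suppresses it.)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* The application's own error message -- the one worth reading. */
        fprintf(stderr, "fatal: input file missing, aborting\n");
        /* This triggers the additional note printed by the runtime. */
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Finalize();
    return 0;
}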
>
> There is a current MPI Forum working on the 3.0 version of the MPI
> standard. Do you think they should be considering an MPI_Quit subroutine?
>
>
> Dick Treumann - MPI Team
> IBM Systems & Technology Group
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 126
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
I don't understand how your question is related to mine, since in my case, the
application ends correctly and I don't want any output. :?
--
Yves Caniou
Associate Pro
ok for the application error message about the root cause. A short
> MPI_Abort() message that said "look elsewhere for the real error message"
> would be useful.
>
> Cheers,
> David
>
> On 03/31/2010 07:58 PM, Yves Caniou wrote:
> > Dear all,
> >
> > I
> Subject: Re: [OMPI users] Hide Abort output
> >
> >
> > I have to say this is a very common issue for our users. They repeatedly
> > report the long Open MPI MPI_Abort() message in help queries and fail to
> > look for the application error message about the root caus
more information (the stack).
Is there a way to avoid the printing of the note (except the 2>/dev/null
trick)? Or to delay this printing?
Thank you.
.Yves.
--
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS