> ...performance if it isn't
> necessary.
>
>
> If you want an application to be able to span that mix, then you'll
> need to set that configure flag.
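As a quick sanity check after rebuilding with --enable-heterogeneous (this is just a rough sketch, not something from the thread; the file name and build line are made up), each rank can report the word size and byte order it sees, so you can confirm the 32-bit and 64-bit boxes really joined the same job:

/* arch_check.c -- rough sketch: each rank reports its pointer size
 * and byte order so you can confirm a mixed 32/64-bit job launched.
 * Build with the usual wrapper: mpicc arch_check.c -o arch_check
 */
#include <mpi.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    int rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    uint16_t probe = 1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);

    printf("rank %d on %s: %zu-bit pointers, %s-endian\n",
           rank, name, 8 * sizeof(void *),
           *(uint8_t *)&probe ? "little" : "big");

    MPI_Finalize();
    return 0;
}

If the heterogeneous build works, a single mpirun spanning the Pentium/Athlon boxes and the i7 should show a mix of 32-bit and 64-bit ranks in the output.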
>
> On Thu, Oct 7, 2010 at 1:44 PM, David Ronis wrote:
I have various boxes that run openmpi and I can't seem to use all of
them at once because they have different CPUs (e.g., Pentiums and Athlons
(both 32-bit) vs. an Intel i7 (64-bit)). I'm about to build 1.4.3 and was
wondering if I should add --enable-heterogeneous to the configure flags.
Any advice as
> ...(SYSV), SVR4-style, from 'abort'
> [9:52] svbu-mpi:~/mpi %
> -
>
> You can see that all processes die immediately, and I get a corefile from the
> process that called abort().
>
>
> On Aug 16, 2010, at 9:25 AM, David Ronis wrote:
>
> > I've t
David
On Mon, 2010-08-16 at 08:51 -0700, Jeff Squyres wrote:
> On Aug 13, 2010, at 12:53 PM, David Ronis wrote:
>
> > I'm using mpirun and the nodes are all on the same machine (an 8-CPU box
> > with an Intel i7). The core file size is unlimited:
> >
> > ulimit -a
>
I'm using mpirun and the nodes are all on the same machine (an 8-CPU box
with an Intel i7). The core file size is unlimited:
ulimit -a
core file size (blocks, -c) unlimited
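If it helps to rule out the launcher changing the limit behind your back, here is a small sketch (not from the thread; the file name is made up) that reports the core-file limit the process itself sees via getrlimit():

/* corelimit.c -- sketch: report the core-file size limit this process
 * actually sees, which may differ from the shell's ulimit if the
 * launcher resets it.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("core file size: unlimited\n");
    else
        printf("core file size: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}

Running that under mpirun on each node would show whether the ranks really inherit the unlimited setting.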
David
On Fri, 2010-08-13 at 13:47 -0400, Jeff Squyres wrote:
> On Aug 13, 2010, at 1:18 PM, David Ronis wrote:
...but this doesn't explain why the node calling
abort() doesn't exit with a core dump.
David
On Thu, 2010-08-12 at 20:44 -0600, Ralph Castain wrote:
> Sounds very strange - what OMPI version, on what type of machine, and how was
> it configured?
>
>
> On Aug 12, 2010, at 7:
I've got an MPI program that is supposed to generate a core file if
problems arise on any of the nodes. I tried to do this by adding a
call to abort() to my exit routines, but this doesn't work; I get no core
file, and worse, mpirun doesn't detect that one of my nodes has
aborted(?) and doesn't
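For what it's worth, a minimal reproducer for this situation might look like the sketch below (the file name and the choice of rank 0 are arbitrary): one rank calls abort() while the others block in a barrier.

/* abort_test.c -- minimal sketch of the situation described above:
 * one rank calls abort() and should leave a core file, while the
 * others sit in a barrier until mpirun tears the job down.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        fprintf(stderr, "rank 0: calling abort()\n");
        abort();                 /* should dump core if the limit allows it */
    }

    MPI_Barrier(MPI_COMM_WORLD); /* other ranks wait here */
    MPI_Finalize();
    return 0;
}

As Jeff's test output above shows, the expected behaviour is that the aborting rank leaves a core file and mpirun then kills the remaining ranks.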
That did it. Thanks.
David
On Wed, 2010-07-21 at 15:29 -0500, Dave Goodell wrote:
> On Jul 21, 2010, at 2:54 PM CDT, Jed Brown wrote:
>
> > On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
> >> Hi Jed,
> >>
> >> Thanks for t
it plans to use.
David
On Wed, 2010-07-21 at 21:54 +0200, Jed Brown wrote:
> On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote:
> > Hi Jed,
> >
> > Thanks for the reply and suggestion. I tried adding -mca
> > yield_when_idle 1 (and later mpi_yield_when_idle 1 wh
On Wed, 2010-07-21 at 20:24 +0200, Jed Brown wrote:
> On Wed, 21 Jul 2010 14:10:53 -0400, David Ronis wrote:
> > Is there another MPI routine that polls for data and then gives up its
> > time-slice?
>
> You're probably looking for the runtime option -mca yield_when_idle 1.
> T
I've got an MPI program on an 8-core box that runs in a master-slave
mode. The slaves calculate something, pass data to the master, and
then call MPI_Bcast, waiting for the master to update and return some
data via an MPI_Bcast originating on the master.
One of the things the master does while th
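The loop being described probably looks roughly like the following sketch (array sizes, tags, and the iteration count are invented); the point is that every rank ends up blocked in MPI_Bcast while the master works:

/* master_slave.c -- rough sketch of the pattern described above:
 * slaves compute, send results to the master, then block in
 * MPI_Bcast waiting for updated data.  Without yield_when_idle,
 * ranks blocked in the Bcast typically spin at 100% CPU.
 */
#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank, nprocs, step, src;
    double data[N] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (step = 0; step < 100; step++) {
        if (rank != 0) {
            /* slave: compute something, then report it to the master */
            MPI_Send(data, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            /* master: collect one message per slave and update data */
            for (src = 1; src < nprocs; src++)
                MPI_Recv(data, N, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        /* everyone blocks here until the master broadcasts the update */
        MPI_Bcast(data, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Launched as, say, mpirun -np 8 -mca mpi_yield_when_idle 1 ./master_slave, the ranks parked in the Bcast should yield their time slice instead of spinning, which is the option Jed suggests above.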
(This may be a duplicate. An earlier post seems to have been lost).
I'm using openmpi (1.3.2) to run on 3 dual processor machines (running
linux, slackware-12.1, gcc-4.4.0). Two are directly on my LAN while
the 3rd is connected to my LAN via VPN and NAT (I can communicate in
either direction fro