On Jan 8, 2007, at 9:34 PM, Reese Faucette wrote:
Right, that's the maximum number of open MX channels, i.e. the number of
processes that can run on the node using MX. With MX (1.2.0c, I think), I get
weird messages if I run a second mpirun quickly after the first one
failed. The Myrinet guys, I'm quite sure, can explain why and how.
Somehow, when an application
On Jan 8, 2007, at 9:11 PM, Reese Faucette wrote:
Second thing. From one of your previous emails, I see that MX
is configured with 4 instances per node. You're running with
exactly 4 processes on the first 2 nodes. Weird things might
happen ...
4 processes per node will be just fine. This is not like GM, where the 4
includes some "reserved" ports.
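As a hedged illustration (hostfile name and application name are placeholders,
and the flag spellings assume Open MPI 1.2-era mpirun), running exactly 4
processes on each of 2 nodes would look something like:

   mpirun -np 8 --hostfile myhosts --mca btl mx,self ./my_app

With MX, all 4 endpoints on a node are usable by MPI processes; there is no
system-reserved port to subtract, which is the difference from GM noted above.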
Not really. This is the backtrace of the process that gets killed because
mpirun detects that the other one died ... What I need is the backtrace
of the process which generates the segfault. Second, in order to understand
the backtrace, it's better to have run a debug version of Open MPI. Without
> >> PS: Is there any way you can attach to the processes with gdb? I
> >> would like to see the backtrace as shown by gdb in order to be able
> >> to figure out what's wrong there.
> >
I found out that all processes on the 2nd node crash, so I just put a
30-second wait before MPI_Init in or
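A minimal sketch of using such a wait window to attach (node name, binary
path, and PID are placeholders, not values from this thread):

   ssh node2
   gdb /path/to/my_app 12345     # attach to the running rank by PID
   (gdb) continue                # let it run into the segfault
   (gdb) bt                      # the backtrace George asked for

With a debug build of Open MPI (configure --enable-debug), the library frames
in that backtrace carry symbols and line numbers.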
Rainer,
Thank you for taking the time to reply to my query. Do I understand
correctly that the external32 data representation for I/O is not
implemented? I am puzzled, since the MPI-2 standard clearly indicates
the existence of external32 and has lots of words regarding how nice
this feature is fo
On Mon, Jan 08, 2007 at 03:07:57PM -0500, Jeff Squyres wrote:
> if you're running in an ssh environment, you generally have 2 choices to
> attach serial debuggers:
>
> 1. Put a loop in your app that pauses until you can attach a
> debugger. Perhaps something like this:
>
> { int i = 0; prin
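The snippet is truncated above; a complete version of that pattern (a hedged
reconstruction of the usual attach loop, not necessarily Jeff's exact code) is:

   {
       volatile int i = 0;
       printf("PID %d ready for attach\n", (int) getpid());
       fflush(stdout);
       while (0 == i)        /* spin until a debugger sets i != 0 */
           sleep(5);
   }

It needs <stdio.h> and <unistd.h>; place it just before MPI_Init, attach gdb
to the printed PID, then "set var i = 1" and "continue" to let the process
proceed.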
> >> PS: Is there any way you can attach to the processes with gdb? I
> >> would like to see the backtrace as shown by gdb in order to be able
> >> to figure out what's wrong there.
> >
> > When I can get more detailed dbg, I'll send. Though I'm not clear on
> > what executable is being
On Jan 8, 2007, at 2:52 PM, Grobe, Gary L. (JSC-EV)[ESCG] wrote:
I was wondering if someone could send me the HACKING file so I can do a
bit more debugging on the snapshots. Our web proxy has WebDAV
methods turned off (request methods fail), so I can't get to the
latest from the svn repos.
Hello Tom,
Like MPICH2, Open MPI uses ROMIO as the underlying MPI-IO implementation,
wrapped as an MCA component. ROMIO implements the native datarep.
With best regards,
Rainer
On Friday 05 January 2007 20:38, l...@cora.nwra.com wrote:
> Hi,
> I am attempting to use the 'external32' data representation in ord
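To make the distinction concrete, here is a minimal sketch (file name and
data layout are invented for illustration) using the "native" datarep that
ROMIO implements; substituting "external32" is exactly what this ROMIO does
not support:

   #include <mpi.h>

   int main(int argc, char **argv)
   {
       MPI_File fh;
       int rank;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       MPI_File_open(MPI_COMM_WORLD, "out.dat",
                     MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
       /* "native" is implemented; per this thread, "external32" is not */
       MPI_File_set_view(fh, (MPI_Offset) rank * sizeof(int),
                         MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
       MPI_File_write(fh, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
       MPI_File_close(&fh);

       MPI_Finalize();
       return 0;
   }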