Dear George and Andrew,
My qlwfpc2 code runs as expected on my Apple dual G5 tower, but on my
5-node Beowulf cluster it misbehaves.
A 5 MB file has become bloated to over 2 GB of garbage. I think this
explains the local host exiting problems, since the system seems to be
aborting after the output file
I've been trying out the RC4 builds of Open MPI, using Myrinet (gm),
InfiniBand (mvapi), and TCP.
When running a benchmark such as IMB (formerly PALLAS, IIRC), or even a
simple hello world, there are no problems.
However, when running HPL (and HPCC, which is a superset of HPL), I h
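(For reference, a minimal MPI hello world of the kind mentioned above, in C,
looks roughly like the following; this is my own sketch, not code from the
original post. Build it with mpicc and launch it with mpirun/orterun.)

/* hello.c - trivial MPI test program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

/* Typical usage:
 *   mpicc -o hello hello.c
 *   mpirun -np 4 ./hello
 */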
Dear George,
The patch got mangled when it was posted, but I did figure out what was
meant. It turns out that three files had to be fixed in the same way:
opal/runtime/opal_init.c
orte/runtime/orte_init_stage1.c
orte/runtime/orte_init_stage2.c
[mighell@asterix openmpi-1.0rc4]$ diff -u opal/runtim
I just committed another fix to the trunk for a problem you are going
to run into next: the same problem comes up again in two more places.
I'll ask Tim/Jeff to apply this fix to the v1.0 branch; here are the
patches:
Index: orte/runtime/orte_init_stage1.c
===================================================================
Ken,
Please apply the following patch (from your
/home/mighell/pkg/ompi/openmpi-1.0rc4/ base directory).
Index: opal/runtime/opal_init.c
===================================================================
--- opal/runtime/opal_init.c	(revision 7831)
+++ opal/runtime/opal_init.c	(working copy)
Dear Open MPI,
I tried to build 1.0rc4 on a 3-year-old 5-node Beowulf cluster running
Red Hat Linux 7.3. The build failed during "make all"; the last few
lines of the log file are:
mkdir .libs
 gcc -DHAVE_CONFIG_H -I. -I. -I../../include -I../../include -I../../src/event -I../../include -I../.. -I../.. -I.
I'm sorry about my previous post. It turns out that was an
experiment of mine where I created a dynamic library for libmpi_f90,
which doesn't happen normally. My test example now runs, but I still
have problems with PETSc:
make test
Running test examples to verify correct installation
/Us
Hi,
I am testing out Open MPI on a Mac running Mac OS X 10.4, using the Apple
GNU compilers plus a fink-installed g95. I was running into problems
building PETSc with mpif90 as my Fortran compiler, so I tried a simple
test on a trivial Fortran example and got the same results.
Here is what happen
> My points here were that for at least some debuggers, a
> naming scheme is all they need, and we should be able to accommodate
> that.
Yes, it seems that some advanced "renaming scheme" would fit most needs.
(I mean, if it allows customizing not only the debugger name and path, but
also the cmd-line op
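(To illustrate the kind of customization being discussed: a launcher can
treat the debugger command as a user-configurable template and expand
placeholders for the mpirun path and its arguments before exec'ing it. The
sketch below is hypothetical; the placeholder names, the subst() helper, and
the example paths are invented for illustration and are not Open MPI's
actual implementation.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Replace every occurrence of key in tmpl with val; returns a freshly
 * malloc'ed string that the caller frees. */
static char *subst(const char *tmpl, const char *key, const char *val)
{
    size_t klen = strlen(key), vlen = strlen(val);
    size_t cap = strlen(tmpl) + 1;
    const char *p;

    /* First pass: compute how much room the expanded string needs. */
    for (p = strstr(tmpl, key); p != NULL; p = strstr(p + klen, key))
        cap += (vlen > klen) ? vlen - klen : 0;

    char *out = malloc(cap);
    char *w = out;
    while ((p = strstr(tmpl, key)) != NULL) {
        memcpy(w, tmpl, (size_t)(p - tmpl));
        w += p - tmpl;
        memcpy(w, val, vlen);
        w += vlen;
        tmpl = p + klen;
    }
    strcpy(w, tmpl);   /* copy whatever follows the last placeholder */
    return out;
}

int main(void)
{
    /* Hypothetical user setting: both the debugger binary and its
     * command-line options are configurable, not just its name. */
    const char *user_tmpl = "mydebugger --launch @mpirun@ @mpirun_args@";

    char *step = subst(user_tmpl, "@mpirun@", "/opt/openmpi/bin/orterun");
    char *cmd  = subst(step, "@mpirun_args@", "-np 4 ./a.out");

    printf("would exec: %s\n", cmd);  /* a real launcher would exec this */
    free(step);
    free(cmd);
    return 0;
}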
On Fri, 2005-10-21 at 12:41 +0400, Konstantin Karganov wrote:
> > You and Chris G. raise a good point -- another parallel debugger vendor
> > has contacted me about the same issue (their debugger does not have an
> > executable named "totalview").
> > <...>
> > Comments?
> Actually, the point is
> The question was merely how to do it: call "gdb orterun" and catch it
> at a breakpoint somewhere, or attach to orterun later, or something else.
Reply to myself:
# gdb orterun
(gdb) br MPIR_Breakpoint
(gdb) run
(gdb)
(gdb) detach
(gdb) quit
Am I right?
Best regards,
Konstantin.
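(For context, the MPIR_Breakpoint convention mentioned above works because
the launcher, orterun in Open MPI's case, exports a small set of well-known
symbols that the debugger reads once the breakpoint fires. The sketch below
follows the conventional MPIR process-acquisition interface that grew out of
the TotalView/mpirun protocol; it is illustrative, not Open MPI's literal
source.)

/* Launcher-side view of the MPIR process-acquisition interface. */

typedef struct {
    char *host_name;        /* node the MPI process runs on        */
    char *executable_name;  /* path to the application binary      */
    int   pid;              /* process id of that MPI process      */
} MPIR_PROCDESC;

/* Filled in by mpirun/orterun once every process has been spawned. */
MPIR_PROCDESC *MPIR_proctable      = 0;
int            MPIR_proctable_size = 0;

/* Set (poked) by the debugger before the job is started. */
int MPIR_being_debugged = 0;

/* Describes why MPIR_Breakpoint() is being called, e.g. "spawned". */
volatile int MPIR_debug_state = 0;

/* The debugger plants a breakpoint here ("br MPIR_Breakpoint" above).
 * When it fires, the debugger walks MPIR_proctable[0..size-1], attaches
 * to each host_name/pid pair, and then lets the launcher continue. */
void MPIR_Breakpoint(void)
{
    /* intentionally empty: it exists only as a breakpoint target */
}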
> You and Chris G. raise a good point -- another parallel debugger vendor
> has contacted me about the same issue (their debugger does not have an
> executable named "totalview").
> <...>
> Comments?
Actually, the point is deeper than just a debugger naming question.
High-quality MPI implementat