On Jan 4, 2006, at 7:24 PM, Anthony Chan wrote:
How about this -- an ISV asked me for a similar feature a little
while ago: if mpirun is invoked with an absolute pathname, then use
that base directory (minus the difference from $bindir) as an option
to an implicit --prefix.
(your suggestion may …
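Anthony's idea amounts to stripping `$bindir` from the invoked path. A minimal shell sketch, assuming the default `bindir = $prefix/bin` layout; `/opt/openmpi` is a hypothetical install location, not a real default:

```shell
# Derive the implied --prefix from an absolute mpirun path by
# stripping the executable name and then the bin directory.
# (/opt/openmpi is only an example install location.)
MPIRUN_PATH=/opt/openmpi/bin/mpirun
PREFIX=$(dirname "$(dirname "$MPIRUN_PATH")")
echo "$PREFIX"    # prints /opt/openmpi
```

This only applies when mpirun is invoked via an absolute pathname, which matches the condition in the proposal.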
Hi Jeff,
On Wed, 4 Jan 2006, Jeff Squyres wrote:
> Anthony --
>
> I'm really sorry; we just noticed this message today -- it got lost
> in the post-SC recovery/holiday craziness. :-(
I understand. :)
>
> Your request is fairly reasonable, but I wouldn't want to make it the
> default behavior.
On Jan 4, 2006, at 5:05 PM, Tom Rosmond wrote:
Thanks for the quick reply. I ran my tests with a hostfile with
cedar.reachone.com slots=4
I clearly misunderstood the role of the 'slots' parameter, because
when I removed it, OPENMPI slightly outperformed LAM, which I
assume it should. Thanks for the help.
Anthony --
I'm really sorry; we just noticed this message today -- it got lost
in the post-SC recovery/holiday craziness. :-(
Your request is fairly reasonable, but I wouldn't want to make it the
default behavior. Specifically, I can envision some scenarios where
it might be problematic.
Thanks for the quick reply. I ran my tests with a hostfile with
cedar.reachone.com slots=4
I clearly misunderstood the role of the 'slots' parameter, because
when I removed it, OPENMPI slightly outperformed LAM, which I
assume it should. Thanks for the help.
Tom
Brian Barrett wrote:
On Jan …
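For context on the `slots` issue: in an Open MPI hostfile, `slots` declares how many processes a node can host before it counts as oversubscribed. When the slot count is not exceeded, Open MPI assumes dedicated CPUs and busy-polls aggressively, which hurts badly on a single-CPU machine. A hedged sketch (hostname taken from Tom's message; `./my_prog` is a placeholder):

```shell
# Hostfile with one slot, matching the single physical CPU.
cat > my_hostfile <<'EOF'
cedar.reachone.com slots=1
EOF
# Running 4 processes now exceeds the slot count, so Open MPI treats the
# node as oversubscribed and lets waiting processes yield the CPU.
mpirun --hostfile my_hostfile -np 4 ./my_prog
```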
Hi Tom,
users-requ...@open-mpi.org wrote:
I am pretty sure that LAM exploits the fact that the virtual processors
are all
sharing the same memory, so communication is via memory and/or the PCI bus
of the system, while my OPENMPI configuration doesn't exploit this. Is this
a reasonable diagnosis?
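Open MPI can also use shared memory between processes on the same node; it chooses its transports (BTLs) at run time, and they can be pinned explicitly. A sketch using the Open MPI 1.x component names (`./my_prog` is a placeholder):

```shell
# Restrict Open MPI 1.x to the loopback and shared-memory transports.
mpirun --mca btl self,sm -np 4 ./my_prog
# List the BTL components actually available in this build:
ompi_info | grep btl
```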
On Jan 4, 2006, at 4:24 PM, Tom Rosmond wrote:
I have been using LAM-MPI for many years on PC/Linux systems and
have been quite pleased with its performance. However, at the
urging of the
LAM-MPI website, I have decided to switch to OPENMPI. For much of my
preliminary testing I work on a single processor workstation …
Hello:
I have been using LAM-MPI for many years on PC/Linux systems and
have been quite pleased with its performance. However, at the urging of the
LAM-MPI website, I have decided to switch to OPENMPI. For much of my
preliminary testing I work on a single processor workstation (see the
attached …)
On Jan 4, 2006, at 2:08 PM, Anthony Chan wrote:
Either my program quits without writing the logfile (and without
complaining) or it crashes in MPI_Finalize. I get the message
"33 additional processes aborted (not shown)".
This is not MPE error message. If the logging crashes in
MPI_Finalize …
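For reference, the usual MPE2 post-processing chain once a CLOG2 file has been written successfully, assuming the standard MPE2 tool names and a logfile called `prog.clog2`:

```shell
# Convert the CLOG2 logfile to SLOG2 and open it in Jumpshot.
clog2TOslog2 prog.clog2      # writes prog.slog2
jumpshot prog.slog2
```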
On Wed, 4 Jan 2006, Carsten Kutzner wrote:
> On Tue, 3 Jan 2006, Anthony Chan wrote:
>
> > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the
> > number of processes. Could you explain what difficulty or error
> > message you encountered when using >32 processes ?
>
> Either my program quits without writing the logfile …
Thanks Carsten,
I have started updating my jumpshot so will let you know as soon as I
have some ideas on what's going on.
G.
ps. I am going offline now for 2 days while travelling
On Wed, 4 Jan 2006, Carsten Kutzner wrote:
Hi Graham,
here are the all-to-all test results with the modification to the decision routine …
Hi Graham,
here are the all-to-all test results with the modification to the decision
routine you suggested yesterday. Now the routine behaves nicely for 128
and 256 float messages on 128 CPUs! For the other sizes one probably wants
to keep the original algorithm, since it is faster there. However
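The decision routine being tuned here can be pictured as a threshold switch on communicator size and message size. The sketch below is illustrative only; the labels and thresholds are assumptions, not Open MPI's actual tuned values (128 CPUs with 128–256 floats, i.e. at most 1024 bytes, is the case that improved above):

```shell
# Hypothetical all-to-all algorithm selection: small messages on large
# communicators take the modified path, everything else keeps the
# original algorithm, which is faster for the other sizes.
choose_alltoall() {
    comm_size=$1; msg_bytes=$2
    if [ "$comm_size" -ge 128 ] && [ "$msg_bytes" -le 1024 ]; then
        echo modified      # e.g. the routine suggested yesterday
    else
        echo original      # the default algorithm
    fi
}
choose_alltoall 128 512     # prints: modified
choose_alltoall 128 65536   # prints: original
```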
On Tue, 3 Jan 2006, Anthony Chan wrote:
> MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the
> number of processes. Could you explain what difficulty or error
> message you encountered when using >32 processes ?
Either my program quits without writing the logfile (and without complaining) or it crashes in MPI_Finalize …
Hi Peter,
We have HP ProCurve 2848 GigE switches here (48 port). The problem is more
severe the more nodes (=ports) are involved. It starts to show up at 16
ports for a limited range of message sizes and gets really bad for 32
nodes. The switch has a 96 Gbit/s backplane and should therefore be
able to …
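As a quick sanity check on that figure: 48 full-duplex GigE ports need at most 48 × 1 Gbit/s × 2, which is exactly the quoted 96 Gbit/s, so the backplane should in principle be non-blocking:

```shell
# required backplane = ports * line rate (Gbit/s) * 2 (full duplex)
echo $((48 * 1 * 2))    # prints 96
```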
Hello Carsten,
Have you considered the possibility that this is the effect of a non-optimal
ethernet switch? I don't know how many nodes you need to reproduce it on or
if you even have physical access (and opportunity) but popping in another
decent 16-port switch for a test run might be interesting.