Thanks Peter,
We'll look into this...
Tim
Peter Kjellström wrote:
Hello,
I'm playing with a copy of svn7132 that built and installed just fine. At
first everything seemed OK; unlike earlier, it now runs on mvapi
automagically :-)
But then a small test program failed, and then another. After
Hello Daryl,
I believe there is a problem w/ the latest version of the bproc launcher...
Try running w/ the following to use an older version:
mpirun -mca pls_bproc_seed_priority 101
This could also be set in your system default or local MCA
parameter file.
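(For reference, a minimal sketch of such a parameter-file entry, assuming the usual per-user location $HOME/.openmpi/mca-params.conf:)
pls_bproc_seed_priority = 101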
Thanks,
Tim
Daryl W. Grunau wrote:
Daryl,
I'm seeing the same error messages - we'll look into this. Does
the mvapi port appear to be working for you? W/ a debug build
I'm seeing:
Latency:
mpi-ping: ping-pong
nprocs=2, reps=1, min bytes=0, max bytes=0 inc bytes=0
0 pings 1
0 pinged 1:0 bytes 7.47 uSec 0.
Daryl,
This should be fixed in svn.
Thanks,
Tim
Daryl W. Grunau wrote:
Hi, I downloaded/installed version 1.0a1r7337 configured to run on my BProcV4
IB cluster (mvapi, for now). Upon execution, I get the following warning
message; however, the app appears to run to completion afterwards:
Daryl,
Tim, the latest nightly fixes this - thanks! Can I report another? I
can't seem to specify -H|-host|--host ; mpirun seems to ignore the
argument:
% mpirun -np 2 -H 0,4 ./cpi
Process 0 on n0
Process 1 on n1
pi is approximately 3.1416009869231241, Error is 0.08
r the time being.
On Sep 19, 2005, at 6:12 PM, Tim S. Woodall wrote:
Daryl,
Tim, the latest nightly fixes this - thanks! Can I report another? I
can't seem to specify -H|-host|--host ; mpirun seems to ignore the
argument:
% mpirun -np 2 -H 0,4 ./cpi
Process 0 on n0
Proce
Daryl,
Try setting:
-mca btl_base_include self,mvapi
To specify that only loopback (self) and mvapi btls should be used.
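(A full invocation using that parameter might look like the following sketch; the process count and ./your_app are placeholders:)
mpirun -np 2 -mca btl_base_include self,mvapi ./your_app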
Can you forward me the config.log from your build?
Thanks,
Tim
Daryl W. Grunau wrote:
Hi, I've got a dual-homed IB + GigE connected cluster for which I've built
a very re
Hello Chris,
Please give the next release candidate a try. There was an issue
w/ the GM port that was likely causing this.
Thanks,
Tim
Parrott, Chris wrote:
Greetings,
I have been testing OpenMPI 1.0rc3 on a rack of 8 2-processor (single
core) Opteron systems connected via both Gigabit Ether
Hi Jeff,
I installed two slightly different versions of Open MPI. One in
/opt/openmpi (otherwise I would get the gfortran error) and the other in
/home/allan/openmpi
However, I do not think that is the problem, as the path names are
specified in the .bashrc and .bash_profile files of the /home/allan dire
Mike,
Mike Houston wrote:
We can't seem to run across TCP. We did a default 'configure'. Shared
memory seems to work, but trying tcp gives us:
[0,1,1][btl_tcp_endpoint.c:557:mca_btl_tcp_endpoint_complete_connect]
connect() failed with errno=113
This error indicates the IP address exporte
This error indicates the IP address exported by the peer is not reachable.
You can use the tcp btl parameters:
-mca btl_tcp_include eth0,eth1
or
-mca btl_tcp_exclude eth1
To specify the set of interfaces to use/not use.
George was correct - these should be btl_tcp_if_include/btl_tcp_if_exclude.
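(With the corrected names, restricting or excluding TCP interfaces would look roughly like the sketches below; the interface names and ./your_app are placeholders:)
mpirun -np 2 -mca btl_tcp_if_include eth0 ./your_app
mpirun -np 2 -mca btl_tcp_if_exclude eth1 ./your_app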
Hello Mike,
Mike Houston wrote:
When only sending a few messages, we get reasonably good IB performance,
~500MB/s (MVAPICH is 850MB/s). However, if I crank the number of
messages up, we drop to 3MB/s(!!!). This is with the OSU NBCL
mpi_bandwidth test. We are running Mellanox IB Gold 1.8 wit
p performance. Now if I can get the tcp layer working,
I'm pretty much good to go.
Any word on an SDP layer? I can probably modify the tcp layer quickly
to do SDP, but I thought I would ask.
-Mike
Tim S. Woodall wrote:
Hello Mike,
Mike Houston wrote:
When only sending a fe
Mike,
Let me confirm this was the issue and look at the TCP problem as well.
Will let you know.
Thanks,
Tim
Mike Houston wrote:
What's the ETA, or should I try grabbing from cvs?
-Mike
Tim S. Woodall wrote:
Mike,
I believe this was probably corrected today and should be in the
next re
r should I try grabbing from cvs?
-Mike
Tim S. Woodall wrote:
Mike,
I believe this was probably corrected today and should be in the
next release candidate.
Thanks,
Tim
Mike Houston wrote:
Whoops, spoke too soon. The performance quoted was not actually going
between nodes. Actually using the
Any word on an SDP layer? I can probably modify the tcp layer quickly
to do SDP, but I thought I would ask.
-Mike
Tim S. Woodall wrote:
Hello Mike,
Mike Houston wrote:
When only sending a few messages, we get reasonably good IB performance,
~500MB/s (MVAPICH is 850MB/s). H
Hello John,
You need to specify both --enable-static and --disable-shared to do a static
build (not sure why, perhaps someone else can fill us in on that)...
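(A configure line along those lines, sketched with a placeholder install prefix:)
./configure --prefix=/opt/openmpi --enable-static --disable-shared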
The logs indicate the launch is failing trying to start orted on the backend
node... probably due to shared library dependencies.
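(One quick sanity check, sketched here with a placeholder install path, is to run ldd against orted on a backend node and look for "not found" entries:)
ldd /opt/openmpi/bin/orted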
You mig
John,
Any progress on this?
John Ouellette wrote:
Hi Tim,
Hmm, nope. I recompiled OpenMPI to produce the static libs, and even
recompiled my app statically, and received the same error messages.
If orted isn't starting on the compute nodes, is there any way I can debug
this to find out
Daryl,
Try this:
Original Message
Subject: RE: only root running mpi jobs with 1.0.1rc5
List-Post: users@lists.open-mpi.org
Date: Thu, 01 Dec 2005 18:49:46 -0700
From: Joshua Aune
Reply-To: lu...@lnxi.com
Organization: Linux Networx
To: Todd Wilde
CC: Matthew Finlay , twood.
Hello Emanuel,
You might want to try an actual hard limit, say 8GB, rather than
unlimited. I've run into issues w/ unlimited in the past.
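(As a sketch only: if the limit in question is the locked-memory (memlock) limit, which is my assumption here, an 8GB hard cap could be set in /etc/security/limits.conf with a line like the following, value in KB:)
* hard memlock 8388608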
Thanks,
Tim
Emanuel Ziegler wrote:
Hi!
After solving my last problem with the help of this list (thanks
again :) I encountered another problem regarding t
Ralph/all,
Ralph Castain wrote:
Unfortunately, that's all that is available at the moment. Future
releases (post 1.1) may get around this problem.
The issue is that the bproc launcher actually does a binary memory image
of the process, then replicates that across all the nodes. This is how
w