Actually, I tried compiling the RPM again and noticed, right at the top,
that ./configure is called with --sysconfdir set to /opt/openmpi
instead of the new name I provided. All other parameters are correct!
Any ideas?
./configure --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --ta
Greetings,
The spec file provided in the latest stable src RPM makes it possible
to change the name of the resulting RPM. I tried to make use of that,
but ran into some issues. Specifically, the resulting RPM does not
have the etc directory (and sample config files in it). rpmbuild
complained abo
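For reference, this is roughly how I'm invoking it; the '_name' define is
my guess at the macro the spec file checks (and the src RPM filename is
just a placeholder), so adjust to whatever your spec actually uses:

rpmbuild --rebuild --define '_name openmpi-custom' openmpi-1.2.3-1.src.rpm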
This was added on the trunk recently (btl_openib_if_[in|ex]clude),
but is not on the v1.2 branch.
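On the trunk, usage mirrors the TCP parameter; a sketch (the device name
mthca0 and the executable are just placeholders for whatever your system has):

mpirun --mca btl openib,sm,self --mca btl_openib_if_include mthca0 -np 4 ./a.out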
On Jul 5, 2007, at 9:39 PM, Don Kerr wrote:
Does the OpenIB BTL have the notion of including and excluding HCAs, as
the TCP BTL does for NICs? E.g. "--mca btl_tcp_if_include
eth1,eth2 ..."
Does the OpenIB BTL have the notion of including and excluding HCAs, as
the TCP BTL does for NICs? E.g. "--mca btl_tcp_if_include eth1,eth2 ..."
I think not but I was not sure if this was accomplished some other way
so wanted to ask the group.
TIA
-DON
Hi,
I'm trying to get TotalView to work using OpenMPI with a simple
1-processor test program. I have tried building it using both OpenMPI
1.1.4 and 1.2.3, with the -g option. This is on two RedHat EL4 systems,
one a 32-bit system, and one a 64-bit system. Each executable is built
on its own system
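For what it's worth, I've been launching it the way the Open MPI docs
suggest, i.e. putting mpirun under TotalView's control (the program name
here is a placeholder for my test code):

totalview mpirun -a -np 1 ./mytest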
On Mon, Jul 02, 2007 at 12:49:27PM -0500, Adams, Samuel D Contr AFRL/HEDR wrote:
> Anyway, so if anyone can tell me how I should configure my NFS server,
> or OpenMPI to make ROMIO work properly, I would appreciate it.
Well, as Jeff said, the only safe way to run NFS servers for ROMIO is
by dis
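For anyone searching later: the usual ROMIO-over-NFS advice is to disable
NFS attribute caching on the clients; an fstab-style sketch (server, export,
and mount point are placeholders):

fileserver:/export/scratch  /scratch  nfs  rw,hard,intr,noac  0 0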
1.2.1 does NOT work for me.
-----Original Message-----
From: Jeff Squyres [mailto:jsquy...@cisco.com]
Sent: Thu 7/5/2007 2:39 AM
To: Open MPI Users
Subject: Re: [OMPI users] Absoft compilation problem
On Jul 2, 2007, at 7:31 PM, Yip, Elizabeth L wrote:
> I downloaded openmpi-1.2.3rc2r15098 from your "nightly snapshot",
> same problem.
Hi Henk,
By specifying '--mca btl mx,self' you are telling Open MPI not to use
its shared memory support. If you want to use Open MPI's shared memory
support, you must add 'sm' to the list, i.e. '--mca btl mx,sm,self'. If you
would rather use MX's shared memory support, instead use '--mca btl
mx
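Spelled out, the Open MPI shared memory route would look like this
(borrowing the cpi test from your post):

mpirun --mca btl mx,sm,self -np 4 ./cpi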
If the machine is multi-processor you might want to add the sm btl. That
cleared up some similar problems for me, though I don't use mx, so your
mileage may vary.
On 7/5/07, SLIM H.A. wrote:
Hello
I have compiled openmpi-1.2.3 with the --with-mx=
configuration and gcc compiler. On testing wi
Hello
I have compiled openmpi-1.2.3 with the --with-mx=
configuration and gcc compiler. On testing with 4-8 slots I get an error
message, the mx ports are busy:
>mpirun --mca btl mx,self -np 4 ./cpi
[node001:10071] mca_btl_mx_init: mx_open_endpoint() failed with
status=20
[node001:10074] mca_btl
As requested:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1214711408 (LWP 23581)]
0xb7eb9d98 in opal_event_set ()
from /usr/local/share/openmpi-1.2.3.icc.ifort/lib/libopen-pal.so.0
(gdb) where
#0 0xb7eb9d98 in opal_event_set ()
from /usr/local/share/openm
There is another piece of information that can be useful in order to
figure out what's wrong. If you can run ompi_info directly in
gdb, run it until it segfaults and then send us the output of "where"
and "shared" (both are gdb commands). This will give us access to the
exact locatio
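Something along these lines (I'm reading "shared" as the command that
lists the loaded shared libraries):

$ gdb ompi_info
(gdb) run
... wait for the SIGSEGV ...
(gdb) where
(gdb) info sharedlibrary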
On Thu, 5 Jul 2007, Jeff Squyres wrote:
Yoinks -- that's not good.
I suspect that our included memory manager is borking things up
(Brian: can you comment?). Can you try configuring OMPI --without-
memory-manager?
Yes. It compiles and links OK (execution of mpif90).
Trying to run (mpirun -n
Hi,
I'm using Open MPI in a real-time rendering system and I need accurate
latency control.
Consider the 'Nagle' optimization implemented in the TCP/IP protocol, which
delays small packets for a short time period to possibly combine them with
successive packets, generating network-friendly packets.
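For context, this is the socket-level switch the Nagle algorithm hangs off
of; a minimal C sketch of turning it off on an existing TCP socket, just to
illustrate the mechanism I'm asking about (not a claim about how Open MPI's
TCP BTL is configured):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle so small packets go out immediately instead of
   being coalesced with later ones. Returns 0 on success. */
static int disable_nagle(int sockfd)
{
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag));
}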
On Jul 2, 2007, at 7:31 PM, Yip, Elizabeth L wrote:
I downloaded openmpi-1.2.3rc2r15098 from your "nightly snapshot",
same problem.
I notice in version 1.1.2, you generate libmpi_f90.a instead of
the .so files.
Brian clarified for me off-list that we use the same LT for nightly
trunk and
Yoinks -- that's not good.
I suspect that our included memory manager is borking things up
(Brian: can you comment?). Can you try configuring OMPI --without-
memory-manager?
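i.e., something like this (the prefix is just an example path):

./configure --without-memory-manager --prefix=/opt/openmpi-1.2.3
make all install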
On Jul 4, 2007, at 5:52 PM, Ricardo Reis wrote:
From: Jeff Squyres
Can you be a bit more specific than "it die
On Jul 4, 2007, at 8:21 PM, Graham Jenkins wrote:
I'm using the openmpi-1.1.1-5.el5.x86_64 RPM on a Scientific Linux 5
cluster, with no installed HCAs. And a simple MPI job submitted to
that
cluster runs OK .. except that it issues messages for each node
like the
one shown below. Is t