I have not seen this before -- did you look in the libtool
documentation? ("See the libtool documentation for more information.")
On Jun 19, 2007, at 6:46 PM, Andrew Friedley wrote:
I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and
pathf90 v2.4 on a linux system, and see this e
I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and pathf90
v2.4 on a linux system, and see this error when compiling ompi_info:
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -g -O2
-finline-functions -pthread -export-dynamic -o ompi_info
components.o ompi_info.o output.o p
On Jun 19, 2007, at 2:24 PM, George Bosilca wrote:
While limiting the ports used by Open MPI might be a good idea, I'm
skeptical about it, for at least two reasons:
1. I don't believe the OS releases the binding when we close the
socket. As an example, on Linux the kernel sockets are release
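A hedged aside, not drawn from the original mail: on Linux a just-closed TCP
port normally lingers in TIME_WAIT for a while before it can be bound again,
which is easy to observe:
  # list local TCP sockets still held in TIME_WAIT after close();
  # a port in this state typically cannot be bound again until it
  # expires, unless SO_REUSEADDR is set
  netstat -tan | grep TIME_WAIT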
Hi,
You should definitely try everything the people before me mentioned.
Also, try running a single process per node and see if it happens.
I do not have any great insight into this issue, but I did have a similar
problem in March. Unfortunately it went away (don't remember how - either
by me qu
Does the deadlock happen with or without your patch? If it's with your
patch, the problem might come from the fact that you start 2
processes on each node and they share the port range (because of
your patch).
Please re-run either with 2 processes per node but without your patch
or with o
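A hedged sketch of one of the suggested re-runs (a single process per node);
host and program names are made up:
  # run one process on each of two nodes, mapping by node rather than by slot
  mpirun --host node01,node02 --bynode -np 2 ./my_app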
On Jun 19, 2007, at 10:40 AM, Jeff Squyres wrote:
From the looks of the patch, it seems you just want Open MPI to
restrict itself to a specific range of ports, right? If that's the
case, we'd probably do this slightly differently (with MCA parameters
-- we certainly wouldn't want to forc
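A hedged sketch of what MCA-parameter control of the port range could look
like; the parameter names below are assumptions for illustration, not
settings confirmed to exist in v1.2.2:
  # restrict the TCP BTL to an assumed range of 100 ports starting at 46000
  mpirun --mca btl_tcp_port_min_v4 46000 --mca btl_tcp_port_range_v4 100 \
         -np 4 ./my_app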
On Jun 19, 2007, at 9:18 AM, Chris Reeves wrote:
I've had a look through the FAQ and searched the list archives and
can't find any similar problems to this one.
I'm running OpenMPI 1.2.2 on 10 Intel iMacs (Intel Core2 Duo CPU).
I am specifying two slots per machine and starting my job wit
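A hedged sketch of the setup being described: a hostfile that gives each
machine two slots, with made-up hostnames and application name:
  # create a hostfile advertising two slots per iMac
  printf 'imac01.local slots=2\nimac02.local slots=2\n' > hosts
  # launch one process per slot across the listed machines
  mpirun --hostfile hosts -np 4 ./my_app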
Hi,
I have compiled OpenMPI 1.2.2 with the "--enable-mpirun-prefix-by-default"
option to avoid users having to set their LD_LIBRARY_PATH.
This works fine for compute nodes where users are allowed to log in.
Users are not allowed to log in to our production clusters directly.
Instead, they have to
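A minimal sketch of the build configuration described above; the install
prefix is only an example:
  # build Open MPI so that mpirun sets PATH and LD_LIBRARY_PATH on the
  # remote nodes from its own installation prefix by default
  ./configure --prefix=/opt/openmpi-1.2.2 --enable-mpirun-prefix-by-default
  make all install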
(This time with attachments...)
Hi there,
I've had a look through the FAQ and searched the list archives and can't find
any similar problems to this one.
I'm running OpenMPI 1.2.2 on 10 Intel iMacs (Intel Core2 Duo CPU). I am
specifying two slots per machine and starting my job with:
/Network/
Hi there,
I've had a look through the FAQ and searched the list archives and can't find
any similar problems to this one.
I'm running OpenMPI 1.2.2 on 10 Intel iMacs (Intel Core2 Duo CPU). I am
specifying two slots per machine and starting my job with:
/Network/Guanine/csr201/local-i386/opt/ope
10 matches