I'm not a developer and am used to linking to libraries
using the -l flag. This only appears to work for .a
files; Open MPI generates .la files. How do I link
these?
The easiest method is just to use the "mpicc" command to compile your code. It
will automatically link against the right libraries, add the right include
directories, etc. You can check the $prefix/bin directory to see all the
compiler wrappers we provide.
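For example (program and file names here are placeholders, not from the thread),
instead of passing -l flags by hand you would compile with:

    mpicc -o my_app my_app.c

and if you want to see exactly what the wrapper does underneath, it accepts --showme:

    mpicc --showme

which prints the full underlying compiler command line, including the -I, -L, and -l flags.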
Ralph
On 10/26/06 7:12 AM, "shane kennedy" wrote:
> i'm
Hi,
I developed a launcher application:
an MPI application (say main_exe) launches 2 MPI applications (say exe1 and
exe2), using MPI_Comm_spawn_multiple.
Now I'm looking at the behavior when an exe crashes.
What I see is the following:
1) when everything is launched, I see the following
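For reference, a minimal sketch of that kind of launcher, with the children named exe1
and exe2 as in the post; everything else (process counts, arguments) is made up here and
is not the poster's actual code:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        char    *cmds[2]     = { "exe1", "exe2" };    /* child executables, as in the post */
        int      maxprocs[2] = { 1, 1 };              /* process counts are illustrative */
        MPI_Info infos[2]    = { MPI_INFO_NULL, MPI_INFO_NULL };
        int      errcodes[2];
        MPI_Comm children;

        MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, maxprocs, infos,
                                0, MPI_COMM_WORLD, &children, errcodes);

        /* ... main_exe would talk to exe1/exe2 over 'children' here; the
           question above is about what happens when one of them crashes ... */

        MPI_Finalize();
        return 0;
    }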
I recently switched to Open MPI (v1.1.1) from LAM/MPI. My application
runs at approximately 1/4 of the speed of the same program running
under LAM. Let me explain my setup.
The program is executed as 16 processes on 8 dual-processor Apple
Xserve Nodes with one gigabit card (per node) interf
I've recently had the chance to see how Open MPI (as well as other MPIs)
behaves in the case of network failure.
I've looked at what happens when a node has its network connection
disconnected in the middle of a job, with Ethernet, Myrinet (GM), and
InfiniBand (OpenIB).
With Ethernet and M
Hi all,
I've compiled Open MPI 1.1.2 in 64-bit mode (using Xcode 2.4 /
i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5363)) with
./configure --prefix=/usr/local/openmpi-1.1.2 --enable-debug CFLAGS=-m64 CXXFLAGS=-m64 OBJCFLAGS=-m64 LDLFLAGS=-m64
on an Intel Mac Pro (wi
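One detail worth double-checking (my observation, not something raised in the thread):
configure only honors LDFLAGS, not LDLFLAGS, so as written the -m64 may never reach the
link step. The intended invocation was presumably

    ./configure --prefix=/usr/local/openmpi-1.1.2 --enable-debug \
        CFLAGS=-m64 CXXFLAGS=-m64 OBJCFLAGS=-m64 LDFLAGS=-m64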
The Open MPI behavior is the same independently of the network used
for the job. At least the behavior dictated by our internal message
passing layer. But for this to happen, we need to get a warning from
the network that something is wrong (such as a timeout). In the case of
TCP (and Myrinet)
If you wouldn't mind, could you try it again after applying the attached
patch? This looks like a problem we encountered on another release where
something in the runtime didn't get initialized early enough. It only shows
up in certain circumstances, but this seems to fix it.
You can apply the pat
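In case it is useful, applying it usually amounts to something like the following (the
patch file name is a placeholder, and whether you need -p0 or -p1 depends on how the
patch was generated):

    cd <your Open MPI source tree>
    patch -p0 < the-attached.patch     # or -p1, depending on the paths inside the patch
    make all install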
On 26.10.2006, at 23:12, Ralph H Castain wrote:
If you wouldn't mind, could you try it again after applying the attached
patch? This looks like a problem we encountered on another release where
something in the runtime didn't get initialized early enough. It only shows
up in certain circu
Okay - sorry it didn't work, but thought it worth a shot. Let us know what
you find.
Ralph
On 10/26/06 3:44 PM, "Daniel Vollmer" wrote:
>
> On 26.10.2006, at 23:12, Ralph H Castain wrote:
>
>> If you wouldn't mind, could you try it again after applying the attached
>> patch? This looks li
1) I think Open MPI does not use optimal algorithms for collectives. But
neither does LAM. For example, MPI_Allreduce scales as log_2(N), where N is
the number of processors. MPICH uses optimized collectives, and its
MPI_Allreduce is essentially independent of N. Unfortunately, MPICH has never
had a
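Roughly, in the usual latency/bandwidth cost model (my sketch, not from the post;
\alpha = per-message latency, \beta = time per byte, n = message size, p = number of
processes), the two behaviors being compared look like:

    T_{\text{tree}}(p,n)  \approx 2 \lceil \log_2 p \rceil (\alpha + n\beta)
    T_{\text{rs+ag}}(p,n) \approx 2 \lceil \log_2 p \rceil \alpha + 2 \frac{p-1}{p} n\beta

The first is a reduce-to-root plus a broadcast over binomial trees; the second is a
reduce-scatter plus allgather. For large n the bandwidth term of the second form
approaches 2 n \beta, which matches the "essentially independent of N" behavior
described above.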
As an alternate suggestion (although George's is better, since this will
affect your entire network connectivity), you could override the default TCP
timeout values with the "sysctl -w" command.
The following three OIDs affect TCP timeout behavior under Linux:
net.ipv4.tcp_keepalive_intvl = 75 <-
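For what it is worth, a sketch of the kind of change meant here (the values are only
illustrative, and the other two OID names are from my memory of the Linux keepalive
settings, not from the truncated post above):

    sysctl -w net.ipv4.tcp_keepalive_time=30     # idle seconds before the first probe
    sysctl -w net.ipv4.tcp_keepalive_intvl=5     # seconds between probes
    sysctl -w net.ipv4.tcp_keepalive_probes=3    # failed probes before the peer is declared dead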
There are 2 different collectives in Open MPI. One is a basic
implementation and one is highly optimized. The only problem is that
we optimized them based on the network, the number of nodes, and the
message size. As you can imagine ... not all the networks are the same ...
which leads to trouble on
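If you want to experiment with which of the two you get, that is (as far as I recall;
run "ompi_info --param coll all" on your installation for the exact parameter names)
controlled through MCA parameters, for example:

    ompi_info --param coll all
    mpirun --mca coll tuned,basic,self -np 16 ./my_app    # program name is a placeholder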
Moreover ... you have to have admin rights in order to modify
these parameters. If that's the case, there is a trick for MX too. One
can recompile it with a different timeout (recompilation is required,
as far as I remember). Grep for timeout in the MX sources and you
will find out how to
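i.e. something along the lines of (the path is wherever your MX source tree happens to live):

    grep -rni timeout /path/to/mx-source/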
How about changing the default error handler?
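For context, "changing the default error handler" refers to something like the sketch
below (my illustration, not code from the thread): tell MPI to return error codes
instead of aborting the job, and then try to react to the failure in the application.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Replace the default MPI_ERRORS_ARE_FATAL handler so a failing MPI
           call returns an error code instead of killing the whole job. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        double x = 1.0;
        int rc = MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "MPI_Bcast failed with code %d\n", rc);
            /* trying to keep running here is exactly the approach the reply
               below argues does not actually work after a peer crash */
        }

        MPI_Finalize();
        return 0;
    }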
It is not supposed to work, and if you find an MPI implementation
that supports this approach, please tell me. I know the paper where you
read about this, but even with their MPI library this approach does
not work.
Soon, Open MPI will be able
On Thu, 26 Oct 2006 15:11:46 -0600, George Bosilca wrote:
The Open MPI behavior is the same independently of the network used
for the job. At least the behavior dictated by our internal message
passing layer.
Which is one of the things I like about Open MPI.
There is nothing (that has a r