On Apr 10, 2006, at 6:31 PM, Ralph Castain wrote:
Was this the only output you received? If so, then it looks like
your parent process never gets to spawn and bcast - you should have
seen your write statements first, yes?
Ralph
I only listed the ORTE errors; I get the correct output, comp
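For reference, the parent side of the spawn-and-bcast pattern under discussion looks roughly like this (a minimal sketch, not Michael's actual code; "./child" and the process count are placeholders):

    /* Parent: spawn children, then broadcast to them over the
     * resulting intercommunicator. Illustration only. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;
        int value = 42;

        MPI_Init(&argc, &argv);

        printf("parent: about to spawn\n");  /* the "write statement"
                                                that should appear first */
        fflush(stdout);

        MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

        /* On an intercommunicator, the sending root passes MPI_ROOT. */
        MPI_Bcast(&value, 1, MPI_INT, MPI_ROOT, children);

        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }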
Good morning,
I'm trying to run one of the NAS Parallel Benchmarks (bt) with
Open MPI 1.0.1, built with PGI 6.0. The code never appears to
start (at least I see no output) until I kill it, at which point
I get the following message:
[0,1,2][btl_tcp_endpoint.c:559:mca_btl_tcp_endpoint_complete_
Thanks Michael - we're looking into it and will get back to you shortly.
Ralph
Michael Kluskens wrote:
On Apr 10, 2006, at 6:31 PM, Ralph Castain wrote:
Was this the only output you received? If so, then it looks like
your parent process never gets to spawn and bcast - you should have
seen your write statements first, yes?
I suspect that to get this to work for bproc, we will have to
build mpirun as 64-bit and the library as 32-bit. That's because a
32-bit compiled mpirun calls functions in the 32-bit /usr/lib/
libbproc.so, which don't appear to function when the system is booted
64-bit.
Of course that w
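If it came to that, the two-pass build might look roughly like this (a sketch only: -m32/-m64 are GCC-style flags that may not match this compiler, and the install prefixes are made up):

    ./configure CFLAGS=-m64 CXXFLAGS=-m64 --prefix=/opt/ompi-64   # 64-bit mpirun
    ./configure CFLAGS=-m32 CXXFLAGS=-m32 --prefix=/opt/ompi-32   # 32-bit library

Whether a 64-bit mpirun would then launch jobs against the 32-bit library cleanly is the untested part.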
Heterogeneous operations are not supported on 1.0 - they are, however,
on the new 1.1. :-)
Also, remember that you must configure for static operation for bproc -
use the configuration options "--enable-static --disable-shared". Our
current bproc launcher *really* dislikes shared libraries.
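Spelled out as a full configure line (the prefix is just an example):

    ./configure --prefix=/usr/local/openmpi --enable-static --disable-shared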
Unfortunately, static-only linking will create binaries large enough
to overwhelm our machines. This is not a realistic option.
-david
On Apr 11, 2006, at 1:04 PM, Ralph Castain wrote:
Also, remember that you must configure for static operation for
bproc - use the configuration options "--enable-static --disable-shared".
I am trying to build OpenMPI v1.0.2 (stable) on an Opteron using the v8.1 Intel
EM64T compilers:
Intel(R) C Compiler for Intel(R) EM64T-based applications, Version 8.1 Build
20041123 Package ID: l_cce_pc_8.1.024
Intel(R) Fortran Compiler for Intel(R) EM64T-based applications, Version 8.1 Build
Unfortunately, that's all that is available at the moment. Future
releases (post 1.1) may get around this problem.
The issue is that the bproc launcher actually does a binary memory
image of the process, then replicates that across all the nodes. This
is how we were told to implement it originally.
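A small standalone program illustrates why raw memory images and shared libraries mix badly: the dynamic loader picks a .so's mapping address at runtime on each host, so a snapshot taken on one node bakes in pointers that need not be valid on another (libm is used here only as an arbitrary shared library):

    /* Print where the dynamic loader mapped a symbol on this host.
     * Build with: cc probe.c -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *h = dlopen("libm.so.6", RTLD_NOW);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        void *addr = dlsym(h, "cos");
        printf("cos() is mapped at %p on this host\n", addr);
        /* A replicated memory image would carry this pointer along,
         * even though a remote node's loader may have mapped libm
         * somewhere else entirely. */

        dlclose(h);
        return 0;
    }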
Ralph/all,
Ralph Castain wrote:
Unfortunately, that's all that is available at the moment. Future
releases (post 1.1) may get around this problem.
The issue is that the bproc launcher actually does a binary memory image
of the process, then replicates that across all the nodes. This is how
we were told to implement it originally.
Thanks Ralph.
Was there a reason this functionality wasn't in from the start then?
LA-MPI works under bproc using shared libraries.
I know Bproc folks like to kill the notion of shared libs, but they
are a fact of life we can't live without.
Just my $0.02.
-david
On Apr 11, 2006, at 1:2
Nothing nefarious - just some bad advice. Fortunately, as my other note
indicated, Tim and company already fixed this by revising the launcher.
Sorry for the confusion.
Ralph
David Gunter wrote:
Thanks Ralph.
Was there a reason this functionality wasn't in from the start then?
LA-MPI works under bproc using shared libraries.
On Tue, 11 Apr 2006 13:19:43 -0600, Hugh Merz wrote:
I couldn't find any other threads in the mailing list concerning usage
of the Intel EM64T compilers - has anyone successfully compiled OpenMPI
using this combination? The build failure also occurs on the Athlon 64
processor.
Logs attached.
Thanks
On Tue, 11 Apr 2006 13:48:43 -0600, Troy Telford wrote:
I have compiled Open MPI (on an Opteron) with the Intel 9 EM64T
compilers. It's been a while since I've used the 8.1 series, but I'll
give it a shot with Intel 8.1 and tell you what happens.
Do you, perchance, have multiple TCP interfaces on at least one of the
nodes you're running on?
We had a mistake in the TCP network matching code during startup -- this
is fixed in v1.0.2. Can you give that a whirl?
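(For anyone stuck on an affected version: multi-NIC startup problems of this kind are often worked around by pinning the TCP BTL to a single interface, e.g.

    mpirun --mca btl_tcp_if_include eth0 -np 4 ./a.out

where "eth0" and the rest of the command line are just placeholders.)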
Well, yes, these nodes do have multiple TCP interfaces.
I'll give 1.0.2 a whirl :)
Thanks!
Jeff
Do you, perchance, have multiple TCP interfaces on at least one of the
nodes you're running on?
We had a mistake in the TCP network matching code during startup -- this
is fixed in v1.0.2. Can you give that a whirl?