Nothing nefarious - just some bad advice. Fortunately, as my other note
indicated, Tim and company already fixed this by revising the launcher.
Sorry for the confusion
Ralph
David Gunter wrote:
Thanks Ralph.
Was there a reason this functionality wasn't in from the start then?
LA-MPI works under bproc using shared libraries.
I know Bproc folks like to kill the notion of shared libs but they
are a fact of life we can't live without.
Just my $0.02.
-david
On Apr 11, 2006, at 1:2
Ralph/all,
Ralph Castain wrote:
Unfortunately, that's all that is available at the moment. Future
releases (post 1.1) may get around this problem.
The issue is that the bproc launcher actually does a binary memory
image of the process, then replicates that across all the nodes. This
is how we were told to implement it origin
Unfortunately static-only will create binaries that will overwhelm
our machines. This is not a realistic option.
-david
On Apr 11, 2006, at 1:04 PM, Ralph Castain wrote:
Heterogeneous operations are not supported on 1.0 - they are, however,
on the new 1.1. :-)
Also, remember that you must configure for static operation for bproc -
use the configuration options "--enable-static --disable-shared". Our
current bproc launcher *really* dislikes shared libraries.
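A minimal sketch of the build that implies (the install prefix below is just a placeholder, not the path used on these machines):

./configure --prefix=/opt/openmpi-1.0.2-static --enable-static --disable-shared
make all install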
I suspect that to get this to work for bproc, we will have to
build mpirun as 64-bit and the library as 32-bit. That's because a
32-bit compiled mpirun calls functions in the 32-bit /usr/lib/libbproc.so
which don't appear to function when the system is booted
64-bit.
Of course that w
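One generic way to check which word size the installed bproc library and mpirun were actually built for (not specific to this installation) is the file utility:

file -L /usr/lib/libbproc.so
file -L `which mpirun`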
On Apr 10, 2006, at 11:07 AM, David Gunter wrote:
(flashc 105%) mpiexec -n 4 ./send4
[flashc.lanl.gov:09921] mca: base: component_find: unable to open: /lib/libc.so.6: version `GLIBC_2.3.4' not found (required by /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_paffinity_li
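A generic way to compare the glibc on the build host with the one on the node reporting this error (glibc's libc.so.6 prints its own version when executed directly) would be something like:

/lib/libc.so.6 | head -1
ldd ./send4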
Here are the results using mpicc:
(ffe-64 153%) mpicc -o send4 send4.c
/usr/bin/ld: skipping incompatible /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libmpi.so when searching for -lmpi
/usr/bin/ld: cannot find -lmpi
collect2: ld returned 1 exit status
(ffe-64 154%) mpicc -showme
g
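Since the wrapper passes extra flags through to the underlying compiler, one thing that may help on the 64-bit front end (assuming the system gcc there accepts -m32) is forcing a 32-bit link so the 32-bit libmpi.so is no longer skipped:

mpicc -m32 -o send4 send4.c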
For Linux, this isn't too big of a problem, but you might want to
take a look at the output of "mpicc -showme" to get an idea of what
compiler flags / libraries would be added if you used the wrapper
compilers. I think for Linux the only one that might at all matter
is -pthread.
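The exact output depends on the installation, but for an Open MPI 1.0.x build on Linux it looks roughly like the line below, with <prefix> standing in for the install prefix; the precise library list varies by version and platform:

gcc -I<prefix>/include -pthread -L<prefix>/lib -lmpi -lorte -lopal -lutil -lnsl -ldl -Wl,--export-dynamic -lm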
But I di
The problem with doing it that way is that it disallows our in-house
code teams from using their compilers of choice. Prior to open-mpi we
have been using LA-MPI. LA-MPI has always been compiled in such a
way that it wouldn't matter what other compilers were used to build
mpi applications p
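If the wrapper in this release supports the split -showme options, the flags it would add can also be captured and handed to whatever compiler a code team prefers, e.g.:

mpicc -showme:compile
mpicc -showme:link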
I've attached the config.log and configure output files. The OS on
the machine is
(flashc 119%) cat /etc/redhat-release
Red Hat Linux release 9 (Shrike)
(flashc 120%) uname -a
Linux flashc.lanl.gov 2.4.24-cm32lnxi6plsd2pcsmp #1 SMP Thu Mar 10
15:27:12 MST 2005 i686 athlon i386 GNU/Linux
-
I'm not an expert on the configure system, but one thing jumps out at
me immediately - you used "gcc" to compile your program. You really
need to use "mpicc" to do so.
I think that might be the source of your errors.
Ralph
On Apr 10, 2006, at 9:43 AM, David Gunter wrote:
After much fiddling around, I managed to create a version of open-mpi
that would actually build. Unfortunately, I can't run the simplest
of applications with it. Here's the setup I used:
export CC=gcc
export CXX=g++
export FC=gfortran
export F77=gfortran
export CFLAGS="-m32"
export CXXFLAGS
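To round that out, a hedged sketch of how the rest of the 32-bit environment and the configure step might look (the flag values and install prefix below are assumptions, not the settings actually used; the static options are the ones Ralph recommends elsewhere in the thread):

export CXXFLAGS="-m32"   # assumed; the original value is cut off above
export FFLAGS="-m32"     # assumed
export FCFLAGS="-m32"    # assumed
./configure --prefix=<32-bit install prefix> --enable-static --disable-shared
make all install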
I am trying to build a 32-bit compatible OpenMPI for our 64-bit Bproc
Opteron systems. I saw the thread from last August-September 2005
regarding this but didn't see where it ever succeeded or if any of
the problems had been fixed. Most importantly, romio is required to
work as well.
I