Very interesting, I certainly hope that my problem is this and not
some kind of error. I'll put the program on some more nodes and run
some tests and see what runs fastest.
My only experience so far with MPI is with LAMMPS, and the simulation
I ran had an almost linear speedup from 1 -> 10 m
I compiled openmpi-1.2.2 with:
./configure CFLAGS="-g -pg -O3" \
  --prefix=/home/foo/490_research/490/src/mpi.optimized_profiling/ \
  --enable-mpi-threads --enable-progress-threads --enable-static \
  --disable-shared --without-memory-manager \
  --without-libnuma --disable-mpi-f77 --disable-mpi-f90 --disa
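Since this builds with -pg, each MPI process will try to write a gmon.out
profile into its working directory on exit. A minimal sketch of keeping the
per-rank profiles apart (this relies on GMON_OUT_PREFIX, a glibc feature;
./my_mpi_app is just a placeholder name):

  # each rank writes gmon.out.<pid> instead of clobbering a single gmon.out
  export GMON_OUT_PREFIX=gmon.out
  mpirun -np 4 ./my_mpi_app
  gprof ./my_mpi_app gmon.out.<pid>   # inspect one rank's profile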
Would it be helpful if we provided some way to link in all the MPI
language bindings?
Examples off the top of my head (haven't thought any of these through):
- mpicxx_all ...
- setenv OMPI_WRAPPER_WANT_ALL_LANGUAGE_BINDINGS
    mpicxx ...
- mpicxx -ompi:all_languages ...
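Until something like that exists, one workaround is a sketch along these
lines (assuming the 1.2-era binding libraries are named libmpi_f77 and
libmpi_f90 on your install; --showme:link is the wrapper option that prints
the underlying link line):

  # see what mpicxx would link by default
  mpicxx --showme:link
  # hypothetical manual link pulling in the Fortran bindings alongside C++
  mpicxx app.cc -lmpi_f77 -lmpi_f90 -o app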
On Fri, 8 Jun 2007, Jeff Squyres wrote:
> Would it be helpful if we provided some way to link in all the MPI
> language bindings?
>
> Examples off the top of my head (haven't thought any of these through):
>
> - mpicxx_all ...
> - setenv OMPI_WRAPPER_WANT_ALL_LANGUAGE_BINDINGS
>    mpicxx ...
>
Hi,
I uninstalled and deleted our old installation directories of 1.1.4 and
1.2.1 so I could have it nice and clean for 1.2.2. I extracted the
source and ran configure with these options:
--prefix=/opt/openmpi/st --with-devel-headers --with-tm=/opt/torque
I then built and installed it.
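(One quick sanity check after installing, assuming the standard ompi_info
tool in the install's bin directory, is to confirm that the TM components
were actually built:

  /opt/openmpi/st/bin/ompi_info | grep tm

which should show lines like "MCA pls: tm" and "MCA ras: tm" if Torque
support went in.)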
"File not found" is the strerror corresponding to the error we get
when we call dlopen. So I don't think it's directly related to the
mca_pls_tm.so library but to one of it's missing dependencies.
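A quick way to spot which dependency is missing (adjust the path to your
actual prefix; Open MPI installs its components under lib/openmpi/):

  ldd /opt/openmpi/st/lib/openmpi/mca_pls_tm.so | grep "not found"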
Do you have access to the /opt/torque directory on all nodes in your
cluster?
george.
Yes. But the /opt/torque directory is just the source, not the actual
installed directory. The actual installed directory on the head node is
the default location of /usr/lib/something. And that is not accessible
by every node.
But should it matter if it's not accessible if I don't specify --with-tm?
On Jun 8, 2007, at 2:06 PM, Cupp, Matthew R wrote:
> Yes. But the /opt/torque directory is just the source, not the actual
> installed directory. The actual installed directory on the head node is
> the default location of /usr/lib/something. And that is not accessible
> by every node.
>
> But should it matter if it's not accessible if I don't specify --with-tm?
On Jun 8, 2007, at 9:29 AM, Code Master wrote:
I compiled openmpi-1.2.2 with:

./configure CFLAGS="-g -pg -O3" \
  --prefix=/home/foo/490_research/490/src/mpi.optimized_profiling/ \
  --enable-mpi-threads --enable-progress-threads --enable-static \
  --disable-shared --without-memory-manager \
  --without-libnuma --disable-mpi-f77 --disable-mpi-f90 --disa
The answer is "it depends"; there's a lot of factors involved.
- What is the topology of your network?
- Where do processes land within the topology of the network?
- What interconnect are you using? (e.g., the openib BTL will
usually use short message RDMA to a limited set of peers as an
op
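Where processes land is also something you can influence at launch time;
for example, with the 1.2-series mpirun placement options:

  mpirun -np 8 --byslot ./app   # fill each node's slots before moving on
  mpirun -np 8 --bynode ./app   # round-robin ranks across the nodes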
On Jun 5, 2007, at 10:27 AM, Prakash Velayutham wrote:
I know. I could not start another client code before this. So just
wanted to check if /bin/hostname works with the spawn.
It will not. MPI_COMM_SPAWN assumes that you are spawning an MPI
application, and therefore after the process is launched it expects the
child to call MPI_Init and connect back to the parent job. /bin/hostname
never does that, so the spawn cannot complete.
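For reference, a minimal sketch of a correct spawn (./mpi_child is a
placeholder for an executable that itself calls MPI_Init):

  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      MPI_Comm child;

      MPI_Init(&argc, &argv);

      /* Spawn 2 copies of an MPI program. The children must call
         MPI_Init, which connects them back to this parent through
         the intercommunicator returned in 'child'. */
      MPI_Comm_spawn("./mpi_child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                     0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

      MPI_Comm_disconnect(&child);
      MPI_Finalize();
      return 0;
  }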
So I either have to uninstall torque, make the shared libraries
available on all nodes, or have torque as static libraries on the head
node?
__
Matt Cupp
Battelle Memorial Institute
Statistics and Information Analysis
Or tell Open MPI not to build torque support, which can be done at
configure time with the --without-tm option.
Open MPI tries to build support for whatever it finds in the default
search paths, plus whatever things you specify the location of. Most
of the time, this is what the user wants.
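Concretely, that would be the configure line from earlier in this thread
with TM support switched off:

  ./configure --prefix=/opt/openmpi/st --with-devel-headers --without-tm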
My apologies - Prakash and I solved this off-list. I should have posted the
final solution here too so any interested parties would know the answer.
The problem actually is a bug that broke comm_spawn in 1.2.2 and may well be
present in the entire 1.2 code series (I have not checked the prior
sub-releases).
A fix for this problem is now available on the trunk. Please use any
revision after 14963 and your problem will vanish [I hope!]. There
are now some additional parameters which allow you to select which
Myrinet network you want to use in case there are several
available (--mca btl_mx_if
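(The parameter name is cut off above, so treating the exact name as
unknown: MCA parameters are passed on the mpirun command line in the
general form below, shown here with BTL selection rather than the
truncated interface parameter.)

  mpirun --mca btl mx,self -np 4 ./app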
On 6/9/07, Jeff Squyres wrote:
On Jun 8, 2007, at 9:29 AM, Code Master wrote:
> I compiled openmpi-1.2.2 with:
>
> ./configure CFLAGS="-g -pg -O3" \
>   --prefix=/home/foo/490_research/490/src/mpi.optimized_profiling/ \
>   --enable-mpi-threads --enable-progress-threads --enable-static \
>   --disable-shared --without-memory-manager \
>   --without-libnuma --disable-mpi-f77 --disable-mpi-f90 --disa