Does the Open MPI mpirun command have an equivalent of LAM's "-O" option
(homogeneous universe)?
I would like to avoid automatic byteswapping in a heterogeneous execution
environment.
Thanks in advance
Geoffroy
Open MPI figures out that peers are homogeneous automatically;
there's no need for a LAM-like -O option to mpirun.
FWIW: recent versions of LAM (7.1 and beyond? I don't remember when
the feature was introduced offhand) automatically figure out when
you're in a homogeneous environment and skip the byteswapping as well.
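For comparison, a minimal sketch of the two invocations (host file name, process count and program are placeholders, not taken from the thread):

# LAM/MPI: -O asserted a homogeneous universe, so no byteswapping was done
#   mpirun -O -np 4 ./a.out
# Open MPI: no such flag is needed; homogeneity is detected at run time
mpirun -np 4 --hostfile myhosts ./a.out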
I guess this is a question for Sun: what happens if registered memory
is not freed after a process exits? Does the kernel leave it allocated?
On Aug 6, 2007, at 7:00 PM, Glenn Carver wrote:
Just to clarify, the MPI applications exit cleanly. We have our own
f90 code (in various configurations) [...]
I will run some tests to check out this possibility.
-DON
Jeff Squyres wrote:
I guess this is a question for Sun: what happens if registered memory
is not freed after a process exits? Does the kernel leave it allocated?
On Aug 6, 2007, at 7:00 PM, Glenn Carver wrote:
Just to clarify, [...]
Glenn,
While I look into the possibility of registered memory not being freed,
could you run your same tests but without shared memory or udapl:
"--mca btl self,tcp"
If this is successful, i.e. memory is freed as expected, the next step
would be to run including shared memory: "--mca btl self,sm,tcp"
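Spelled out, the test sequence Don suggests would look something like this (a sketch; the program name and process count are placeholders, not taken from the thread):

# 1) no shared memory, no udapl
mpirun --mca btl self,tcp -np 8 ./my_f90_app
# 2) add shared memory
mpirun --mca btl self,sm,tcp -np 8 ./my_f90_app
# 3) add udapl
mpirun --mca btl self,sm,udapl -np 8 ./my_f90_app
# compare free/registered memory (e.g. with vmstat) after each job exits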
Hello Brock and Graham,
we have run NAMD on our clusters with Open MPI. Look for any file named
conv-mach.sh; for the configuration mpi-linux, or in your case
mpi-linux-amd64, this contains the superfluous -lmpich (see the sketch
further below).
With best regards,
Rainer
On Tuesday 07 August 2007 04:11, Graham Jenkins wrote:
>
That would be right, this is my NAMD_2.6_Source/arch/Linux-amd64-MPI.arch:
NAMD_ARCH = Linux-amd64
CHARMARCH = mpi-linux-amd64
CXX = mpiCC
#CXXOPTS = -O3 -m64 -fexpensive-optimizations -ffast-math
CXXOPTS = -fastsse -O3 -Minfo -fPIC
#CC = gcc
CC = mpicc
#COPTS = -O3 -m64 -fexpensive-optimizations -ffast-math
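For reference, the -lmpich that Rainer mentions lives in the Charm++ arch configuration rather than in this NAMD .arch file. A minimal sketch of the relevant line, assuming a stock charm tree (the variable name and surrounding contents vary by Charm++ version, so treat this as an illustration, not the verbatim file):

# charm/src/arch/mpi-linux-amd64/conv-mach.sh (sketch)
# The mpicc/mpiCC wrappers from Open MPI already link the MPI library,
# so the hard-coded MPICH library should be dropped:
CMK_LIBS="-lckqt"            # instead of e.g. "-lckqt -lmpich"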
I'm trying to make the PathScale Fortran compiler work with Open MPI on a 64-bit
Linux machine and can't get past a simple demo program. Here is detailed info:
pathf90 -v
PathScale EKOPath(TM) Compiler Suite: Version 2.5
Built on: 2006-08-22 21:02:51 -0700
Thread model: posix
GNU gcc version 3.3.
Have you set up your LD_LIBRARY_PATH variable correctly? See this FAQ entry:
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
Hope this helps,
Tim
Michael Komm wrote:
I'm trying to make the PathScale Fortran compiler work with Open MPI on a 64-bit
Linux machine and can't get past [...]
Hi Michael,
you have to add the path to the Open MPI libraries to the LD_LIBRARY_PATH
variable:
export LD_LIBRARY_PATH=/home/fort/usr//lib
should fix the problem.
Bye,
Christian
Michael Komm wrote:
I'm trying to make the PathScale Fortran compiler work with Open MPI on a 64-bit
Linux machine and can't get past [...]
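Spelled out, the fix Tim and Christian describe amounts to the following (a sketch that assumes the Open MPI installation prefix /home/fort/usr from Christian's example; substitute your own prefix and demo program):

# make the Open MPI wrappers and runtime libraries visible
export PATH=/home/fort/usr/bin:$PATH
export LD_LIBRARY_PATH=/home/fort/usr/lib:$LD_LIBRARY_PATH

# recompile and run the demo through the wrapper compiler
mpif90 demo.f90 -o demo
mpirun -np 2 ./demo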
Thanks Christian, it works just fine now!
I altered LIBRARY_PATH and LD_PATH but not this one :)
Michael
__
> From: christian.bec...@math.uni-dortmund.de
> To: Open MPI Users
> Date: 07.08.2007 19:32
Don,
Following up on this, here are the results of the tests. All is well
until udapl is included. In addition, there are no MCA parameters set
in these jobs. As I reported to you before, if I add --mca
btl_udapl_flags=1, the memory problem goes away.
The batch jobs run vmstat before and after [...]
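For anyone hitting the same issue, the workaround mentioned above can be set per job or persistently (a sketch; the parameter name is as given in this thread, and its semantics depend on the Open MPI and uDAPL versions in use):

# on the command line, per run:
mpirun --mca btl self,sm,udapl --mca btl_udapl_flags 1 -np 8 ./my_f90_app
# or persistently, in $HOME/.openmpi/mca-params.conf:
#   btl_udapl_flags = 1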