There is an MCA param that tells the orted to set its usage limits to the hard
limit:
MCA opal: parameter "opal_set_max_sys_limits" (current value: <0>, data source: default value)
          Set to non-zero to automatically set any system-imposed limits to the maximum allowed
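If you want that raised for the remote processes, a minimal sketch (assuming the usual MCA conventions; "./my_app" and the process count are placeholders, not anything from the thread) is to set the parameter at launch time, either on the mpirun command line or in the environment:

    # set the MCA param for this run
    mpirun --mca opal_set_max_sys_limits 1 -np 16 ./my_app

    # or export it so the launched daemons/processes inherit it
    export OMPI_MCA_opal_set_max_sys_limits=1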
Hi,
We would like to set process memory limits (vmemoryuse, in csh
terms) on remote processes. Our batch system is torque/moab.
The nodes of our cluster each have 24GB of physical memory, of
which 4GB is taken up by the kernel and the root file system.
Note that these are diskless nodes, so no swap space is available.
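If it helps to verify what the remotely launched processes actually inherit, here is a small sketch in C (the choice of RLIMIT_AS as the rough equivalent of csh's vmemoryuse, and the output format, are my assumptions):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        struct rlimit rl;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* RLIMIT_AS limits the address space, roughly csh's vmemoryuse */
        if (getrlimit(RLIMIT_AS, &rl) == 0) {
            printf("rank %d: soft=%llu hard=%llu\n", rank,
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
        }

        MPI_Finalize();
        return 0;
    }

Each rank then reports the limits it was actually started with, which makes it easy to see whether the batch system or the orted applied them.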
When you use MPI message passing in your application, the MPI library
decides how to deliver the message. The "magic" is simply that when the sender
process and the receiver process are on the same node (shared memory domain),
the library uses shared memory to deliver the message from process to
process.
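To make that concrete, here is a minimal sketch (the message contents and rank numbers are just for illustration): the application code is identical whether the two ranks share a node or not, and Open MPI picks the transport underneath.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* The same MPI_Send/MPI_Recv calls are used whether rank 0 and
         * rank 1 share a node or not; the shared-memory path (or the
         * network) is chosen by the library, not by the application. */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }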
On Oct 6, 2010, at 10:07 AM, Götz Waschk wrote:
>> Do -Wl,-rpath and -Wl,-soname= work any better?
> Yes, with these options, it builds fine. But the command line is
> generated by libtool, so how can I make libtool put -Wl, in front
> of the linker options? It seems to strip these from the command line.
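One possible workaround, sketched only (the library path and soname below are made-up placeholders, and whether this helps depends on how the package's configure drives libtool), is to hand the options to libtool in a form it knows how to pass through, e.g. its own -R option for run-time paths or -XCClinker to forward a flag to the compiler driver unchanged:

    # let libtool add the run-time path itself
    ../../libtool --tag=FC --mode=link mpif90 -R /usr/lib64/openmpi/lib ...

    # or push a flag through to mpif90 untouched
    ../../libtool --tag=FC --mode=link mpif90 \
        -XCClinker -Wl,-soname,libexample.so.1 ...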
Currently we run a code on a cluster with distributed memory, and this code
needs a lot of memory. Part of the data stored in memory is the same for
each process, but it is organized as one array - we can split it if
necessary. So far, no magic has occurred for us. What do we need to do to make
the magic happen?
On Wed, Oct 6, 2010 at 2:43 PM, Tim Prince wrote:
>> libtool: link: mpif90 -shared .libs/H5f90global.o
>> .libs/H5fortran_types.o .libs/H5_ff.o .libs/H5Aff.o .libs/H5Dff.o
>> .libs/H5Eff.o .libs/H5Fff.o .libs/H5Gff.o .libs/H5Iff.o .libs/H5Lff.o
>> .libs/H5Off.o .libs/H5Pff.o .libs/H5Rff.o .libs/H
On 10/6/2010 12:09 AM, Götz Waschk wrote:
libtool: link: mpif90 -shared .libs/H5f90global.o
.libs/H5fortran_types.o .libs/H5_ff.o .libs/H5Aff.o .libs/H5Dff.o
.libs/H5Eff.o .libs/H5Fff.o .libs/H5Gff.o .libs/H5Iff.o .libs/H5Lff.o
.libs/H5Off.o .libs/H5Pff.o .libs/H5Rff.o .libs/H5Sff.o .libs/H5Tff
Hi
I regularly use valgrind to check for leaks, but I ignore the leaks
clearly created by Open MPI, because I think most of them are there
for efficiency (no time is lost cleaning up unimportant allocations).
But I want to make sure no leaks come from my own apps.
In most cases, the leaks I am responsible for
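One way to do that, as a sketch (the install prefix and application name are placeholders, and the location of the suppression file can differ between installations), is to run valgrind under mpirun with the suppression file that Open MPI ships, so the library-internal reports are filtered out and only your own leaks remain:

    mpirun -np 2 valgrind --leak-check=full \
        --suppressions=$PREFIX/share/openmpi/openmpi-valgrind.supp \
        ./my_app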
Open MPI will use shared memory to communicate between peers on the same node -
but that's hidden beneath the covers; it's not exposed via the MPI API. You
just MPI-send and magic occurs and the receiver gets the message.
Sent from my PDA. No type good.
On Oct 4, 2010, at 11:13 AM, "Andrei Fo
This looks like something you should take up with the hdf5 people.
Sent from my PDA. No type good.
On Oct 6, 2010, at 3:09 AM, Götz Waschk wrote:
> Hi everyone,
>
> I'm trying to build hdf5 1.8.5-patch1 on RHEL5 using openmpi 1.4 and
> the Intel Compiler suite 11.0. I have Fortran, MPI and shared library
> support enabled.
Hi everyone,
I'm trying to build hdf5 1.8.5-patch1 on RHEL5 using openmpi 1.4 and
the Intel Compiler suite 11.0. I have Fortran, MPI and shared library
support enabled. I get this error at the linking stage:
/bin/sh ../../libtool --tag=FC --mode=link mpif90 -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=