Aha - I think we have a misunderstanding. Please see comments below:
On 11/28/06 8:14 PM, "Maestas, Christopher Daniel" wrote:
> Thanks for the feedback Ralph. Comments below.
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph Castain
---
Thanks for the feedback Ralph. Comments below.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Tuesday, November 28, 2006 3:27 PM
To: Open MPI Users
Subject: Re: [OMPI users] Pernode request
We already have --pernode, which spawns one process per node.
---
In the machinefile, add for each node with M CPUs:
myhost@mydomain slots=N cpus_allowed=<list>
with <list> being the subset of 0..M-1 in some yours-to-decide format and
with yours-to-decide default values.
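To illustrate the proposal (a sketch only; hostnames are made up, and this
cpus_allowed option does not exist in Open MPI), a node with M=4 cores
contributing N=2 slots, restricted to cores 0 and 2, might read:

    node01.mydomain slots=2 cpus_allowed=0,2
    node02.mydomain slots=4 cpus_allowed=0-3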
Best Regards,
Alexander Shaposhnikov
On Wednesday 29 November 2006 06:16, Jeff Squyres wrote:
> There is not, right now.
---
Jeff (and everybody else),
First of all, pardon me if this is a stupid question; I am still learning
the nuts and bolts of parallel programming. My question is as follows:
why can't this be done *outside* Open MPI, by calling Linux's processor
affinity APIs directly? I work with a blade server kind of
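For what it's worth, a minimal sketch of the Linux call in question,
sched_setaffinity(2); the rank-to-core mapping in the comment is only an
illustration, not anything Open MPI does:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling process to one core; pid 0 means "self". */
    static int pin_to_core(int core)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(core, &mask);
        return sched_setaffinity(0, sizeof(mask), &mask);
    }

    int main(void)
    {
        /* An MPI program might derive the core from its rank, e.g.
           rank % cores_per_node; that mapping is hypothetical here. */
        if (pin_to_core(0) != 0)
            perror("sched_setaffinity");
        return 0;
    }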
---
There is not, right now. However, this is mainly because back when I
implemented the processor affinity stuff in OMPI (well over a year
ago), no one had any opinions on exactly what interface to expose to
the user. :-)
So right now there's only this lame control:
http://www.open-mpi.o
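A note on the truncated link: if memory serves, the control in question is
the mpi_paffinity_alone MCA parameter from the 1.x series (an assumption,
since the URL above is cut off). It is all-or-nothing:

    # assumed syntax: bind each process to a processor,
    # with no say over which one
    mpirun --mca mpi_paffinity_alone 1 -np 4 ./a.out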
---
Tony -
It looks like you ran into a bug in Libtool. Unfortunately, in order
to better support Fortran 90 when building shared libraries, we use a
beta version of Libtool 2, which means we're living a bit more on the
edge than we'd like. I have a patch for this issue that I'll be
submitting.
---
We already have --pernode, which spawns one process per node. You can also
launch one process/slot across all available slots by simply not specifying
the number of processes.
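For example (the executable name here is assumed):

    mpirun --pernode ./a.out   # one process on every node
    mpirun ./a.out             # one process per slot, across all slots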
I gather this option would say "spawn N procs/node across all nodes" - I can
understand the possible usefulness, but I'm n
---
I recently saw this on the mpiexec mailing list and thought it would be
a useful feature for Open MPI as well. :-)
I can't seem to enter a Trac ticket and seem to be having issues with my
browser at the moment, but wanted to get this out there.
---
> > > 1) mpiexec already has "-pernode" but
---
I have recently completed a number of performance tests on a Beowulf
cluster, using up to 48 dual-core P4D nodes, connected by an Extreme
Networks Gigabit edge switch. The tests consist of single and multi-node
application benchmarks, including DLPOLY, GROMACS, and VASP, as well as
specific tests o