Hi Chris
I have now implemented --npernode N as well - it is in the trunk as of
r12826.
The testing you show below using mpiexec really doesn't tell us the full
story - we need to know the rank of the various processes (and
unfortunately, hostname just tells us the host). There is no way to tell
how the processes were mapped from the hostnames alone.
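For reference, a minimal MPI program that reports both pieces of
information - just a sketch, and not necessarily the same mpi_hello that is
run below - would look something like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* the rank we actually need */
    MPI_Get_processor_name(name, &len);     /* the host, i.e. what hostname shows */
    printf("Hello, I am node %s with rank %d\n", name, rank);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, e.g., "mpirun --npernode 2
./mpi_hello", the output shows exactly which ranks landed on which hosts.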
Hi Chris
Some of this is doable with today's code, and one of these behaviors is
not. :-(
Open MPI/OpenRTE can be run in "persistent" mode - this allows multiple jobs
to share the same allocation. This works much as you describe (syntax is
slightly different, of course!) - the first mpirun will pick up the
allocation, and subsequent jobs can then share it.
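As a rough sketch, the flow would look something like this (the orted
invocation below follows the Open MPI 1.x-era FAQ for persistent daemons;
the exact flags are an assumption and may differ in your version):

$ orted --seed --persistent --scope public   # start a persistent daemon
$ mpirun -np 4 ./job_a                       # first job; picks up the allocation
$ mpirun -np 4 ./job_b                       # later job sharing the same allocation

The point of the persistent daemon is that it outlives any single mpirun,
so successive jobs run inside the same allocation rather than each
acquiring their own.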
Ralph,
I figured I should have run an MPI program... here's what it does (seems
to be by-slot style):
---
$ /apps/x86_64/system/mpiexec-0.82/bin/mpiexec -npernode 2 mpi_hello
Hello, I am node an41 with rank 0
Hello, I am node an41 with rank 1
Hello, I am node an39 with rank 4
Hello, I am node an40
On Dec 11, 2006, at 4:04 PM, Reese Faucette wrote:
GM: gm_register_memory will be able to lock XXX pages (YYY MBytes)
Is there a way to tell GM to pull more memory from the system?
GM reserves all the IOMMU space that the OS is willing to give it, so what
is needed is a way to tell the OS and/or the IOMMU to make more space
available.
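For what it's worth, the failing path can be reproduced with a tiny
registration test. This is only a sketch - the GM calls below are from
memory of the GM 2.x API, so treat the exact signatures as an assumption:

#include <stdio.h>
#include <stdlib.h>
#include <gm.h>

int main(void)
{
    struct gm_port *port;
    gm_status_t st;
    unsigned long len = 64UL * 1024 * 1024;   /* try to register 64 MB */
    void *buf;

    if (gm_init() != GM_SUCCESS) {
        fprintf(stderr, "gm_init failed\n");
        return 1;
    }
    /* unit 0; port 2 is conventionally available to user programs */
    st = gm_open(&port, 0, 2, "reg-test", GM_API_VERSION);
    if (st != GM_SUCCESS) {
        fprintf(stderr, "gm_open failed: %s\n", gm_strerror(st));
        return 1;
    }

    buf = malloc(len);
    if (buf == NULL) {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
    st = gm_register_memory(port, buf, len);
    if (st != GM_SUCCESS)
        /* this is the failure you hit once the IOMMU space GM grabbed
           at driver-load time is exhausted */
        fprintf(stderr, "gm_register_memory failed: %s\n", gm_strerror(st));
    else
        gm_deregister_memory(port, buf, len);

    gm_close(port);
    gm_finalize();
    return 0;
}

Growing len until the registration fails gives a rough measure of how much
registerable memory GM actually obtained.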
Well, I have had no luck finding a way to increase the amount the system
will allow GM to use. What is the recommended solution? Is this even a
problem in most cases? Am I encountering a corner case?
Upping the limit was not what I'm suggesting as a fix; I was just pointing
out that it is kind of low.
On Tue, Dec 12, 2006 at 12:58:00PM -0800, Reese Faucette wrote:
> > Well, I have had no luck finding a way to increase the amount the
> > system will allow GM to use. What is the recommended solution? Is this
> > even a problem in most cases? Am I encountering a corner case?
>
> Upping the limit was not what I'm suggesting as a fix; I was just
> pointing out that it is kind of low.
Apologies if you received multiple copies of this message.
===
CALL FOR PAPERS
Workshop on Virtualization/Xen in High-Performance Cluster
and Grid Computing (XHPC'07)
as part of The 16th IEEE International Symposium on High
Performance Distributed Computing (HPDC 2007)