[OMPI users] CfP Workshops on Virtualization/XEN in HPC Cluster and Grid Computing Environments (XHPC'07, VHPC'07)

2006-12-12 Thread Michael Alexander
Apologies if you received multiple copies of this message. === CALL FOR PAPERS Workshop on Virtualization/Xen in High-Performance Cluster and Grid Computing (XHPC'07) as part of The 16th IEEE International Symposium on High Perfo

Re: [OMPI users] mpool_gm_module error

2006-12-12 Thread Brock Palen
On Dec 12, 2006, at 4:24 PM, Gleb Natapov wrote: On Tue, Dec 12, 2006 at 12:58:00PM -0800, Reese Faucette wrote: Well I have no luck in finding a way to up the amount the system will allow GM to use. What is a recommended solution? Is this even a problem in most cases? Like am I encounterin

Re: [OMPI users] mpool_gm_module error

2006-12-12 Thread Gleb Natapov
On Tue, Dec 12, 2006 at 12:58:00PM -0800, Reese Faucette wrote: > > Well I have no luck in finding a way to up the amount the system will > > allow GM to use. What is a recommended solution? Is this even a > > problem in most cases? Like am I encountering a corner case? > > upping the limit was

Re: [OMPI users] mpool_gm_module error

2006-12-12 Thread Reese Faucette
Well I have no luck in finding a way to up the amount the system will allow GM to use. What is a recommended solution? Is this even a problem in most cases? Like am I encountering a corner case? Upping the limit was not what I'm suggesting as a fix, just pointing out that it is kind of low an

Re: [OMPI users] mpool_gm_module error

2006-12-12 Thread Brock Palen
On Dec 11, 2006, at 4:04 PM, Reese Faucette wrote: GM: gm_register_memory will be able to lock XXX pages (YYY MBytes) Is there a way to tell GM to pull more memory from the system? GM reserves all IOMMU space that the OS is willing to give it, so what is needed is a way to tell the OS and/o

Re: [OMPI users] Pernode request

2006-12-12 Thread Maestas, Christopher Daniel
Ralph, I figured I should have run an MPI program... here's what it does (seems to be by-X-slot style):
---
$ /apps/x86_64/system/mpiexec-0.82/bin/mpiexec -npernode 2 mpi_hello
Hello, I am node an41 with rank 0
Hello, I am node an41 with rank 1
Hello, I am node an39 with rank 4
Hello, I am node an40
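
The source of mpi_hello is not shown in the thread; a minimal C sketch that would print one line per process with its hostname and rank, assuming a standard MPI installation and the usual mpicc wrapper:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, namelen;
        char name[MPI_MAX_PROCESSOR_NAME] = "";

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(name, &namelen);

        /* One line per process: the host it landed on and its rank */
        printf("Hello, I am node %s with rank %d\n", name, rank);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with -npernode 2, the hostname/rank pairs in the output show how ranks were placed across the nodes.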

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2006-12-12 Thread Ralph Castain
Hi Chris Some of this is doable with today's code, and one of these behaviors is not. :-( Open MPI/OpenRTE can be run in "persistent" mode - this allows multiple jobs to share the same allocation. This works much as you describe (syntax is slightly different, of course!) - the first mpirun will

Re: [OMPI users] Pernode request

2006-12-12 Thread Ralph Castain
Hi Chris I have now implemented --npernode N as well - it is in the trunk as of r12826. The testing you show below using mpiexec really doesn't tell us the whole story - we need to know the rank of the various processes (and unfortunately, hostname just tells us the host). There is no way to tell
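
Since stdout from the ranks can interleave arbitrarily, one way to see the rank-to-host mapping directly is to gather the processor names at rank 0 and print them in rank order. This is a hedged sketch, not code from the thread:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME] = "";
        char *all = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &namelen);

        /* Collect every process's hostname at rank 0 */
        if (rank == 0) {
            all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
        }
        MPI_Gather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                   all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                   0, MPI_COMM_WORLD);

        /* Print the rank-to-host map in rank order so the placement
           (by node vs. by slot) is unambiguous */
        if (rank == 0) {
            int r;
            for (r = 0; r < size; r++) {
                printf("rank %d -> %s\n",
                       r, all + (size_t)r * MPI_MAX_PROCESSOR_NAME);
            }
            free(all);
        }

        MPI_Finalize();
        return 0;
    }

Running this under -npernode 2 (or the by-slot default) makes the resulting rank-to-node placement easy to compare, regardless of output ordering.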