Hello,
Sometimes we have users who, from within a single job (think scheduling
within a job scheduler allocation), like to run:
"mpiexec -n X myprog"
"mpiexec -n Y myprog2"
Does mpiexec within Open MPI keep track of the node list it is using if
it binds to a particular scheduler?
For
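(A sketch of the pattern being asked about - the script, program names, and
process counts below are hypothetical placeholders, assuming a PBS-style
allocation:

  #!/bin/sh
  #PBS -l nodes=4:ppn=2
  # first launch inside the allocation
  mpiexec -n 4 ./myprog
  # second launch inside the same allocation; whether it reuses or avoids
  # the first launch's nodes depends on whether mpiexec tracks the node
  # list it is handing out
  mpiexec -n 4 ./myprog2
)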
Hello,
I looked into this a few months back, although we have had OK
luck using LAM with R/MPI. I emailed the author of R/MPI, Dr.
Hao Yu, and his answer was:
Date: Mon, 25 Sep 2006 21:57:40 -0500
From: Hao Yu
To: "Caird, Andrew J"
Subject: Re: Rmpi and OpenMPI?
Hi Andy,
Sorry for my sl
Hello Ralph,
This is great news! Thanks for doing this. I will try and get around
to it soon before the holiday break.
The allocation scheme always seems to get to me. From what you describe,
that is how I would have seen it. As I've gotten to know OSC mpiexec
through the years, I think they li
Hi Chris
Okay, we have modified the pernode behavior as you requested (on the trunk
as of r12821) - give it a shot and see if that does it. I have not yet added
the npernode option, but hope to get to that soon.
I have a question for you about the npernode option. I am assuming that you
want n procs/
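(A sketch of how the two options would presumably be used once both are in,
assuming the trunk build spells them this way:

  # one process on every allocated node
  mpiexec --pernode ./myprog
  # n processes on every allocated node, here n = 2
  mpiexec -npernode 2 ./myprog

The exact option names on the trunk may differ.)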
GM: gm_register_memory will be able to lock XXX pages (YYY MBytes)
Is there a way to tell GM to pull more memory from the system?
GM reserves all IOMMU space that the OS is willing to give it, so what is
needed is a way to tell the OS and/or machine to allow a bigger chunk of
IOMMU space to b
On Dec 11, 2006, at 2:20 PM, Reese Faucette wrote:
I have tried moving the run around to different machines, with the
same result in multiple places.
The error is:
[aon049.engin.umich.edu:21866] [mpool_gm_module.c:100] error(8)
registering gm memory
This is on a PPC-based OSX system? How many MPI processes per node are you
starting?
On Mon, Dec 11, 2006 at 02:52:40PM -0500, Brock Palen wrote:
> On Dec 11, 2006, at 2:45 PM, Reese Faucette wrote:
>
> >> Also I have no idea what the memory window question is; I will
> >> look it up on Google.
> >>
> >> aon075:~ root# dmesg | grep GM
> >> GM: gm_register_memory will be able to lock 96000 pages (375 MBytes)
On Dec 11, 2006, at 2:45 PM, Reese Faucette wrote:
Also I have no idea what the memory window question is; I will
look it up on Google.
aon075:~ root# dmesg | grep GM
GM: gm_register_memory will be able to lock 96000 pages (375 MBytes)
This just answered it - there is 375MB available for GM to register,
which is the IOMMU window size available
Also I have no idea what the memory window question is; I will
look it up on Google.
aon075:~ root# dmesg | grep GM
GM: gm_register_memory will be able to lock 96000 pages (375 MBytes)
This just answered it - there is 375MB available for GM to register, which
is the IOMMU window size available
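(A quick arithmetic check of that dmesg line, assuming the usual 4 KB page
size on these machines:

  # 96000 pages * 4096 bytes/page = 393216000 bytes, i.e. 375 MB
  echo $((96000 * 4096 / 1048576))
  375

so the page count and the MByte figure agree.)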
On Dec 11, 2006, at 2:34 PM, Brock Palen wrote:
Yes, it is a PPC-based system. The machines are dual G5s with 1 GB of
RAM. I am only running one thread per CPU (not over-allocating). It
is not a maxed-out run; when running, I see 500 MB free on the nodes.
Each thread uses ~110 MB.
Sorry to edit m
Yes, it is a PPC-based system. The machines are dual G5s with 1 GB of
RAM. I am only running one thread per CPU (not over-allocating). It
is not a maxed-out run; when running, I see 500 MB free on the nodes.
Each thread uses ~110 MB.
I could not answer whether or not OSX and PPC 970FX have a
I have tried moving the run around to different machines, with the
same result in multiple places.
The error is:
[aon049.engin.umich.edu:21866] [mpool_gm_module.c:100] error(8)
registering gm memory
This is on a PPC-based OSX system? How many MPI processes per node are you
starting? And I as
Hello,
The patch from Myricom fixed the issue of RDMA on OSX, thank you very
much.
I am now getting another error :-)
I have tried moving the run around to different machines, with the
same result in multiple places.
The error is:
[aon049.engin.umich.edu:21866] [mpool_gm_module.c:100] error(8)
registering gm memory
Hello all,
I am a user of R and have been having trouble with LAM/MPI. I am curious as
to whether OpenMPI would be a good option for me to try instead. However, I
am unsure whether OpenMPI and R can be used together. Can someone
tell me whether OpenMPI can be used with R or not? If so, c
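(For reference, a sketch of building Rmpi against Open MPI - the tarball
version and install paths are placeholders, and this assumes an Rmpi
release whose configure supports the OPENMPI type:

  R CMD INSTALL Rmpi_x.y-z.tar.gz \
    --configure-args="--with-Rmpi-type=OPENMPI \
      --with-Rmpi-include=/usr/local/include \
      --with-Rmpi-libpath=/usr/local/lib"
)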