Did you see that, maybe, just maybe, using:
xserve01.local slots=8 max-slots=8
xserve02.local slots=8 max-slots=8
xserve03.local slots=8 max-slots=8
xserve04.local slots=8 max-slots=8
you can set the number of processes specifically for each node? The
"slots" parameter does this, setting the configuration of…
Hi Robert,
I ran some very crude tests and found that things slowed down once you
got over 8 cores at a time. However, they didn't slow down by 50% if
you went to 16 processes. Sadly, the tests were so crude, I did not
keep good notes (it appears).
I'm running a GCM, so my benchmarks…
If anyone else is using xgrid, there is a mechanism to limit the
processes per machine:
sudo defaults write /Library/Preferences/com.apple.xgrid.agent
MaximumTaskCount 8
Running this on each of the nodes and then restarting Xgrid tells the
controller to only send 8 processes to that node. For now…
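(To verify the setting took effect, you can read it back; this check is
an addition, not from the thread:

defaults read /Library/Preferences/com.apple.xgrid.agent MaximumTaskCount

which should print 8 after the write above.)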
The Open MPI FAQ recommends not oversubscribing the available cores
for best performance, but is this still true? The new Nehalem
processors are built to run 2 threads on each core. On an 8-socket
system, that sums up to 128 threads that Intel claims can be run
without significant performance…
Looking at the code, you are correct in that the Xgrid launcher is
ignoring hostfiles. I'll have to look at it to determine how to
correct that situation - I didn't write that code, nor do I have a way
to test any changes I might make to it.
For now, though, if you add --bynode to your command line…
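(For example, assuming a 32-process job and a placeholder program name:

mpirun --bynode -np 32 ./my_program

--bynode assigns ranks round-robin across nodes instead of filling each
node's slots first, so no single machine receives all the processes.)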
Hi Vitorio,
Thanks for getting back to me! My hostfile is
xserve01.local max-slots=8
xserve02.local max-slots=8
xserve03.local max-slots=8
xserve04.local max-slots=8
I've now checked, and this seems to work fine just using ssh, i.e. if
I turn off the Xgrid queue manager I can submit jobs…
Hi,
So you have 4 nodes, each one with 2 processors, and each processor is
quad-core. So you have capacity for 32 processes in parallel.
I think that just using the hostfile is enough; that is how I use it.
If you want to specify a specific host or a different sequence, mpirun
will obey the host sequence…
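(For instance, to run one process each on two specific hosts, in that
order; the hostnames come from the thread, the program name is a
placeholder:

mpirun --host xserve02.local,xserve01.local -np 2 ./my_program
)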
I'm afraid I don't know anything about petsc4py, so I can't speak to
it. However, I can say there is nothing in OMPI that would limit the
number of connections on a machine.
There are, of course, system limits on that value. Have you checked
that ulimit isn't set to something absurdly low?
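(Quick ways to check those limits from a shell; these are standard
commands, not from the thread:

ulimit -u   # maximum number of user processes
ulimit -n   # maximum number of open file descriptors
)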
Hi all,
Sorry in advance if these are naive questions - I'm not experienced in
running a grid...
I'm using Open MPI on 4 dual quad-core Xeon Xserves. The 8 cores mimic
16 cores and show up in Xgrid as each agent having 16 processors.
However, the processing speed goes down as more processors are used…
Thanks for your answer, below. Just so my other question does not
get lost, I will post it again.
I cannot get an 8-proc job to run on an 8-core cluster with Open MPI
and PETSc. I loaded mpi4py and petsc4py, and then
I tried to run the python script:
from mpi4py import MPI
from petsc4py import PETSc
In the 1.3 series and beyond, you have to specifically tell us the
name of any hostfile, including the default one for your system. So,
in this example, you would want to set:
OMPI_MCA_orte_default_hostfile=absolute-path-to-openmpi-default-hostfile
in your environment, or just add:
-mca orte_default_hostfile absolute-path-to-openmpi-default-hostfile
to your mpirun command line.
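(Concretely, either form looks like this; the path and program name are
placeholders:

export OMPI_MCA_orte_default_hostfile=/path/to/openmpi-default-hostfile
mpirun -np 8 ./my_program

or, equivalently:

mpirun -mca orte_default_hostfile /path/to/openmpi-default-hostfile -np 8 ./my_program
)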
The original problem was that I could not get an 8-proc job to
run on an 8-core cluster. I loaded mpi4py and petsc4py, and then
I tried to run the python script:
from mpi4py import MPI
from petsc4py import PETSc
using
mpirun -n 8 -x PYTHONPATH python test-mpi4py.py
This hangs on my 8-core FC11…
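(A minimal self-contained version of such a test script might look like
the following; only the two imports are from the post, the rank-printing
body is an assumption added to make it runnable:

from mpi4py import MPI
from petsc4py import PETSc  # importing PETSc also initializes it

comm = MPI.COMM_WORLD
# each rank reports in; in a hang like the one described, not all ranks get here
print("rank %d of %d" % (comm.Get_rank(), comm.Get_size()))
)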
Thanks for the bug report! I'm hoping that a ROMIO refresh in an
upcoming Open MPI version will fix this error. I've added a link to
your post in https://svn.open-mpi.org/trac/ompi/ticket/1888.
On Jul 9, 2009, at 6:17 AM, > wrote:
Hello,
Some weeks ago, I reported a problem using MPI I/O…
Thanks for the bug report!
I've filed https://svn.open-mpi.org/trac/ompi/ticket/1974 about this.
On Jul 7, 2009, at 1:04 PM, Jumper, John wrote:
I am attempting to use coll_tuned_dynamic_rules_filename to tune Open
MPI 1.3.2. Based on my testing, it appears that the dynamic rules
file…
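(For reference, the dynamic rules file is normally activated together
with a second MCA parameter; the filename is a placeholder:

mpirun -mca coll_tuned_use_dynamic_rules 1 -mca coll_tuned_dynamic_rules_filename /path/to/rules.conf -np 8 ./my_program
)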