I have built and run gromacs test programs with Open MPI a few times in
the last month. It works great. To keep track of nodes we use torque:
http://www.clusterresources.com/pages/products/torque-resource-manager.php
With torque, and OMPI built against it, you have no need to run orted.
Just run 'mpirun mdrun'; mpirun will get the hostnames and number of
cpus from torque. A job script can be as simple as the sketch below.
But torque is heavyweight; we use it with a cluster of 300+ nodes.
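For reference, a minimal torque job script might look like this (the
job name, node counts, walltime, and binary name here are just
placeholders):

#!/bin/sh
#PBS -N gromacs-test
#PBS -l nodes=2:ppn=2
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
mpirun mdrun

With the tm support built in, mpirun reads the host list and slot
counts straight from the torque allocation, so you can usually omit
-np and any hostfile.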
For a small system I would use 'mpirun -machinefile' instead. This
works, and you don't need to start up orted when you use it. But you
will still have no way of monitoring free nodes (unlike torque), that
I know of.
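For example (the hostnames and slot counts here are made up), a
machinefile is just a list of hosts with their cpu counts:

node01 slots=2
node02 slots=2

and then:

mpirun -machinefile hosts.txt -np 4 mdrun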
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jul 3, 2006, at 10:35 AM, Jack Howarth wrote:
I have created simple fink (http://fink.sourceforge.net) packaging
for open-mpi v1.1 on MacOS X. The packaging builds open-mpi with its
default configure settings and appears to pass all of its make check
without problems. However, the lack of clear documentation for
open-mpi is still a problem. I seem able to manually run the test
programs from the open-mpi distribution using...
mpirun -np 2 ...
after starting the orted daemon with....
orted --seed --persistent --scope public
I can see both cpus spike when I do the mpirun's, so I think
that works. However, I can't figure out the proper way to
monitor the status of the available nodes. Specifically,
what is the equivalent to the lamnodes program in open-mpi?
Also, is there a simple test program that runs for a significant
period of time that I could use to test the different options for
monitoring and controlling the open-mpi jobs running under orted?
Something along these lines is what I have in mind, as a minimal
sketch (the file name and loop length are arbitrary):
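/* spin.c -- each rank reports in, then sleeps in a loop so there
 * is time to watch the job from another terminal. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (i = 0; i < 60; i++) {           /* ~5 minutes total */
        printf("rank %d of %d: tick %d\n", rank, size, i);
        fflush(stdout);
        sleep(5);
        MPI_Barrier(MPI_COMM_WORLD);     /* keep ranks in step */
    }
    MPI_Finalize();
    return 0;
}

(I would build it with 'mpicc spin.c -o spin' and run it with
'mpirun -np 2 spin'.) Thanks in advance for any clarifications.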
Jack
P.S. I assume that, as of v1.1, open-mpi is considered to be a usable
replacement for lam? Certainly, gromacs 3.3.1 seems to compile its
mpi support against open-mpi.