As an alternative technique for distributing the binary, you could ask
Open MPI's runtime to do it for you (a feature made available in the
v1.3 series). You still need to make sure that the same version of
Open MPI is installed on all nodes, but if you pass the
--preload-binary option to
mpirun the runtime environment will distribute the binary across the
machine (staging it to a temporary directory) before launching it.
You can do the same with any arbitrary set of files or directories
(comma separated) using the --preload-files option as well.
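For example, a couple of invocations might look like this (the
hostfile, process count, file names, and application name are all
illustrative):

```shell
# Stage the binary to a temporary directory on each remote node
# before launch, so no shared filesystem is needed for the app:
mpirun --preload-binary -np 8 --hostfile my_hosts ./my_app

# Also copy an input file and a data directory (comma separated)
# to the remote nodes before starting the job:
mpirun --preload-binary --preload-files input.conf,data \
       -np 8 --hostfile my_hosts ./my_app
```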
If you type 'mpirun --help', the options you are looking for are:
--------------------
-s|--preload-binary      Preload the binary on the remote machine before
                         starting the remote process.
--preload-files <arg0>   Preload the comma separated list of files to the
                         remote machines current working directory before
                         starting the remote process.
--preload-files-dest-dir <arg0>
                         The destination directory to be used in
                         conjunction with --preload-files. By default the
                         absolute and relative paths provided by
                         --preload-files are used.
--------------------
-- Josh
On Nov 5, 2009, at 6:56 PM, Terry Frankcombe wrote:
For small ad hoc COWs I'd vote for sshfs too. It may well be as slow
as a dog, but it actually has some security, unlike NFS, and is a
doddle to make work with no superuser access on the server, unlike
NFS.
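To illustrate the "no superuser access" point, mounting a remote Open
MPI install tree over sshfs needs only FUSE on the client side
(hostname and paths here are hypothetical):

```shell
# Mount the head node's Open MPI tree on a worker, as a regular user;
# -o reconnect re-establishes the SSH session if it drops:
sshfs headnode:/opt/openmpi /opt/openmpi -o reconnect

# ... run jobs against the mounted tree ...

# Unmount when done (again, no root required):
fusermount -u /opt/openmpi
```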
On Thu, 2009-11-05 at 17:53 -0500, Jeff Squyres wrote:
On Nov 5, 2009, at 5:34 PM, Douglas Guptill wrote:
I am currently using sshfs to mount both OpenMPI and my application
on the "other" computers/nodes. The advantage to this is that I have
only one copy of OpenMPI and my application. There may be a
performance penalty, but I haven't seen it yet.
For a small number of nodes (where small <=32 or sometimes even
<=64), I find that simple NFS works just fine. If your apps aren't IO
intensive, that can greatly simplify installation and deployment of
both Open MPI and your MPI applications IMNSHO.
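A minimal NFS setup for this might look like the following sketch
(server-side root access is required, which is the trade-off raised
earlier in the thread; the subnet and paths are illustrative):

```shell
# On the head node: export the Open MPI install tree read-only
# to the cluster subnet, then reload the export table.
echo '/opt/openmpi 192.168.1.0/24(ro,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each worker node: mount it at the same path, so mpirun finds
# identical installations everywhere.
mount -t nfs headnode:/opt/openmpi /opt/openmpi
```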
But -- every app is different. :-) YMMV.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users