Keep in mind the difference between the MPI standard and implementations of that standard. To be specific: Open MPI is one implementation of the MPI standard (see www.mpi-forum.org for the standard document; v2.1 is the latest version as of this writing).

Open MPI generally depends on finding your executable on the node where it will run. This usually means specifying an absolute filename that can be exec'ed directly, or a relative filename that can be resolved either relative to the cwd or via the PATH.

How the executable gets onto that node in the first place is a different question. The executable may live on a local filesystem or on a networked filesystem (e.g., NFS). Many MPI users with small-ish clusters (~32-64 nodes) use an NFS server to make their executables visible on all nodes without the bother of manually copying them around.

That being said, there are many scheduling / resource-management systems out there that will pre-stage executables (and other data files) on the desired nodes before MPI tries to run them. These are beyond (Open) MPI's scope -- from OMPI's perspective, we just find the executable in the PATH; it doesn't really matter to OMPI how it got there.
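
To make that concrete, here is a minimal sketch (my own example, not anything from this thread or the standard) that prints which node each rank ended up on. If you compile it with mpicc and launch it with something like "mpirun -np 4 --hostfile myhosts ./hello_where" (the hostfile name and program name are just placeholders), every node listed in the hostfile has to be able to find ./hello_where -- via an NFS mount, a prior copy, or the PATH -- before the process can be started there:

  /* hello_where.c -- report which node each MPI rank is running on. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size, name_len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
      MPI_Get_processor_name(name, &name_len); /* typically the hostname */

      printf("rank %d of %d running on %s\n", rank, size, name);

      MPI_Finalize();
      return 0;
  }

The output makes it obvious that each rank is a separate process started on (possibly) a different node, each exec'ing its own copy of the same binary.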


On May 26, 2009, at 2:11 AM, Charles Salvia wrote:

I am very new to the concept of MPI, and have only recently begun researching it. I have a very basic question about the way MPI works.

How exactly does MPI distribute user-created applications (binary code) over a network? Does it actually copy the binary into the local memory of each node, and execute it? If so, doesn't this put serious restrictions on the heterogeneity of the network? For example, in order to run a distributed application (compiled with gcc) over a typical Linux cluster, you'd need to make sure that each node has the same version of glibc, or there could be issues running the binary.

Any information would be greatly appreciated.


--
Jeff Squyres
Cisco Systems
