This sounds like exactly what I need. But when I tried it, I ran into the
following problem: I'm working on my desktop and trying to preload the
executable onto my laptop.

--------------------------------------------------------------------------
gordon@gordon-desktop:~/Desktop/openmpi-1.3.3/examples$ mpirun
-machinefile machine.linux -np 2 --preload-binary $(pwd)/hello_c.out
gordon@gordon-desktop's password:
--------------------------------------------------------------------------
WARNING: Remote peer ([[18118,0],1]) failed to preload a file.

Exit Status: 256
Local  File: /tmp/openmpi-sessions-gordon@gordon-laptop_0/18118/0/hello_c.out
Remote File: /home/gordon/Desktop/openmpi-1.3.3/examples/hello_c.out
Command:
  scp  gordon-desktop:/home/gordon/Desktop/openmpi-1.3.3/examples/hello_c.out
/tmp/openmpi-sessions-gordon@gordon-laptop_0/18118/0/hello_c.out

Will continue attempting to launch the process(es).
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun was unable to launch the specified application as it could not access
or execute an executable:

Executable: /home/gordon/Desktop/openmpi-1.3.3/examples/hello_c.out
Node: node1

while attempting to start process rank 1.
--------------------------------------------------------------------------


I typed in the password for my master-node account when prompted. But why
was I asked for my password on the master node at all? I am already working
under that account.
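
I suspect the prompt means key-based SSH isn't set up between the two
machines, since mpirun stages the binary over scp. If so, the usual fix
would be something like this (a sketch assuming stock OpenSSH; the
hostnames are the ones from my transcript above):

  # on the desktop: create a key pair (empty passphrase) and install it on the laptop
  ssh-keygen -t rsa
  ssh-copy-id gordon@gordon-laptop
  # verify: this should log in without prompting
  ssh gordon@gordon-laptop true

And since the failing scp in the log runs on the laptop and pulls from
gordon-desktop, the reverse direction would need the same treatment:
ssh-keygen on the laptop, then ssh-copy-id gordon@gordon-desktop from there.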

--qing


On Fri, Nov 6, 2009 at 7:09 AM, Josh Hursey <jjhur...@open-mpi.org> wrote:
>
> As an alternative technique for distributing the binary, you could ask Open 
> MPI's runtime to do it for you (made available in the v1.3 series). You still 
> need to make sure that the same version of Open MPI is installed on all nodes, 
> but if you pass the --preload-binary option to mpirun the runtime environment 
> will distribute the binary across the machine (staging it to a temporary 
> directory) before launching it.
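>
> For example, a minimal invocation might look like this (the binary and
> machinefile names are just placeholders):
>
>   mpirun -np 2 -machinefile machine.linux --preload-binary $(pwd)/hello_c.out
>
> The runtime copies the binary into a session directory under /tmp on each
> remote node and launches it from there.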
>
> You can do the same with any arbitrary set of files or directories (comma 
> separated) using the --preload-files option as well.
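>
> For instance, something along these lines (file names invented for
> illustration) would copy input.dat and config.txt into /tmp/rundir on each
> remote node before the processes start:
>
>   mpirun -np 2 --preload-files input.dat,config.txt \
>          --preload-files-dest-dir /tmp/rundir ./my_app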
>
> If you type 'mpirun --help', the options you are looking for are:
> --------------------
>   --preload-files <arg0>
>                         Preload the comma separated list of files to the
>                         remote machines current working directory before
>                         starting the remote process.
>   --preload-files-dest-dir <arg0>
>                         The destination directory to use in conjunction
>                         with --preload-files. By default the absolute and
>                         relative paths provided by --preload-files are
>                         used.
>   -s|--preload-binary   Preload the binary on the remote machine before
>                         starting the remote process.
>
> --------------------
>
> -- Josh
>
> On Nov 5, 2009, at 6:56 PM, Terry Frankcombe wrote:
>
>> For small ad hoc COWs I'd vote for sshfs too.  It may well be as slow as
>> a dog, but it actually has some security, unlike NFS, and is a doddle to
>> make work with no superuser access on the server, unlike NFS.
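>>
>> For the record, the whole setup is a few commands per node, roughly like
>> this (user, host and paths are placeholders):
>>
>>   # mount the head node's install tree on a compute node; no root needed
>>   mkdir -p ~/opt/openmpi
>>   sshfs user@headnode:/opt/openmpi ~/opt/openmpi -o reconnect
>>   # unmount when finished
>>   fusermount -u ~/opt/openmpi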
>>
>>
>> On Thu, 2009-11-05 at 17:53 -0500, Jeff Squyres wrote:
>>>
>>> On Nov 5, 2009, at 5:34 PM, Douglas Guptill wrote:
>>>
>>>> I am currently using sshfs to mount both OpenMPI and my application on
>>>> the "other" computers/nodes.  The advantage to this is that I have
>>>> only one copy of OpenMPI and my application.  There may be a
>>>> performance penalty, but I haven't seen it yet.
>>>>
>>>
>>>
>>> For a small number of nodes (where small <=32 or sometimes even <=64),
>>> I find that simple NFS works just fine.  If your apps aren't IO
>>> intensive, that can greatly simplify installation and deployment of
>>> both Open MPI and your MPI applications IMNSHO.
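>>>
>>> A rough sketch of such a setup (addresses and paths are illustrative):
>>> one line in /etc/exports on the server, a re-export, and a mount on each
>>> node:
>>>
>>>   # on the NFS server, in /etc/exports:
>>>   /opt/openmpi 192.168.1.0/24(ro,sync,no_subtree_check)
>>>   # then reload the export table:
>>>   exportfs -ra
>>>
>>>   # on each compute node (as root):
>>>   mount -t nfs server:/opt/openmpi /opt/openmpi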
>>>
>>> But -- every app is different.  :-)  YMMV.
>>>
>>
