Hi Rayne

The message "Lamnodes Failed!" indicates that you still have a
LAM/MPI installation somewhere, and its mpiexec is being found
ahead of Open MPI's on your PATH.
You should get rid of that first.
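A quick way to confirm that is to list every mpiexec on your PATH, in lookup order. This is just a sketch; the helper function name here is made up:

```shell
# List every copy of a command that appears on the PATH, in lookup order.
# A stale LAM/MPI install typically shows up as an extra mpiexec entry
# ahead of (or instead of) Open MPI's.
list_on_path() {
    cmd=$1
    old_ifs=$IFS
    IFS=:
    for d in $PATH; do
        [ -x "$d/$cmd" ] && echo "$d/$cmd"
    done
    IFS=$old_ifs
}

list_on_path mpiexec
```

If more than one path comes back, either uninstall the LAM copy or reorder your PATH so the Open MPI bin directory comes first.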

Jody

On Tue, Aug 12, 2008 at 9:00 AM, Rayne <lancer6...@yahoo.com> wrote:
> Hi, thanks for your reply.
>
> I did what you said: set up password-less ssh, NFS, etc., and put the IP
> address of the server in the default hostfile (on my PC only; the default
> hostfile on the server does not contain any IP addresses). Then I installed
> Open MPI on the server under the same directory as on my PC, e.g.
> /usr/lib/openmpi/1.2.5-gcc/
> All my MPI programs and executables, e.g. a.out, are in the shared folder.
> However, I have trouble running the MPI programs.
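
For reference, a default hostfile of the kind mentioned above might look like this (the IPs and slot counts are illustrative, not taken from the thread):

```shell
# {prefix}/etc/openmpi-default-hostfile -- one host per line;
# slots = how many processes may be started on that host
192.168.0.10 slots=2
192.168.0.11 slots=2
```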
>
> After compiling my MPI program on my PC, I tried to run it via "mpiexec -n 2 
> ./a.out". However, I get the error message
>
> "Failed to find or execute the following executable:
> Host: (the name of the server)
> Executable: ./a.out
>
> Cannot continue"
>
> Then when I tried to run the MPI program on my server after compiling, I get 
> the error:
>
> "Lamnodes Failed!
> Check if you had booted lam before calling mpiexec else use -machinefile to 
> pass host file to mpiexec"
>
> I'm guessing that because the server cannot run the MPI program, I can't run
> the program on my PC either. Are there some other configurations I missed
> when using Open MPI on my server?
>
> Thank you.
>
> Regards,
> Rayne
>
> --- On Tue, 12/8/08, Joshua Bernstein <jbernst...@penguincomputing.com> wrote:
>
>> From: Joshua Bernstein <jbernst...@penguincomputing.com>
>> Subject: Re: [OMPI users] Setting up Open MPI to run on multiple servers
>> To: lancer6...@yahoo.com, "Open MPI Users" <us...@open-mpi.org>
>> Date: Tuesday, 12 August, 2008, 8:34 AM
>> Rayne wrote:
>> > Hi all,
>> > I am trying to set up Open MPI to run on multiple
>> servers, but as I
>> > have very little experience in networking, I'm
>> getting confused by the
>> > info on open-mpi.org, with the .rhosts, rsh, ssh etc.
>> >
>> > Basically what I have now is a PC with Open MPI
>> installed. I want to
>> > connect it to, say, 10 servers, so I can run MPI
>> programs on all 11
>> > nodes. From what I've read, I think I need to
>> install Open MPI on the
>> > 10 servers too, and there must be a shared directory
>> where I keep all
>> > the MPI programs I've written, so all nodes can
>> access them.
>> >
>> > Then I need to create a machine file on my local PC (I
>> found a default
>> > hostfile "openmpi-default-hostfile" in
>> {prefix}/etc/. Can I use that
>> > instead so I need not have "-machinefile
>> machine" with every mpiexec?)
>> > with the list of the 10 servers. I'm assuming I
>> need to put down the
>> > IP addresses of the 10 servers in this file. I've
>> also read that the
>> > 10 servers also need to each have a .rhosts file that
>> tells them the
>> > machine (i.e. my local PC) and user from which the
>> programs may be
>> > launched from. Is this right?
>> >
>> > There is also the rsh/ssh configuration, which I find
>> the most
>> > confusing. How do I know whether I'm using rsh or
>> ssh? Is following
>> > the instructions on
>> http://www.open-mpi.org/faq/?category=rsh under
>> > "3: How can I make ssh not ask me for a
>> password?" sufficient? Does
>> > this mean that when I'm using the 10 servers to
>> run the MPI program,
>> > I'm login to them via ssh? Is this necessary in
>> every case?
>> >
>> > Is doing all of the above all it takes to run MPI
>> programs on all 11
>> > nodes, or is there something else I missed?
>>
>> More or less. The first step, though, is to set up
>> password-less SSH between all 11 machines. I'd
>> completely skip the use of RSH, as it's very insecure
>> and shouldn't be used outside a dedicated cluster, and
>> even then... You should basically set up SSH so a user
>> can SSH from one node to another without specifying a
>> password or entering any other information.
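
A minimal sketch of that setup; the user and host names below are placeholders:

```shell
# 1. On the PC, generate a key pair with an empty passphrase
#    (press Enter to accept the default path, ~/.ssh/id_rsa):
#      ssh-keygen -t rsa -N ""
# 2. Install the public key on each server (asks for the password once):
#      ssh-copy-id user@node01
# 3. Verify -- this should log you in without any prompt:
#      ssh user@node01 true
```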
>>
>> Then, the next step is to set up NFS. NFS provides a
>> way to share a directory on one computer with many
>> other computers, avoiding the hassle of having to copy
>> all your MPI programs to all of the nodes. This is
>> generally as easy as configuring /etc/exports and then
>> just mounting the directory on the other computers. Be
>> sure you mount the directories in the same place on
>> every node, though.
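
In outline, assuming the shared directory is /home/shared and the exporting machine is called headnode (both names illustrative):

```shell
# On headnode, export the directory in /etc/exports, e.g.:
#   /home/shared  192.168.0.0/24(rw,sync,no_subtree_check)
# then reload the export table (as root):
#   exportfs -ra
# On every other node, mount it at the SAME path (as root):
#   mount headnode:/home/shared /home/shared
```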
>>
>> Lastly, give your MPI programs a shot. You don't
>> strictly need a hostfile, because you can specify the
>> hostnames (or IPs) on the mpirun command line, but in
>> your case it's likely a good idea.
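
For example (host names, IPs, and the process count are illustrative):

```shell
# Hosts given directly on the command line:
#   mpiexec -n 4 --host 192.168.0.10,192.168.0.11 ./a.out
# Or kept in a file that is passed once:
#   mpiexec -n 4 --hostfile myhosts ./a.out
```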
>>
>> Hope that gets you started...
>>
>> -Joshua Bernstein
>> Software Engineer
>> Penguin Computing
>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
