Setting up an NFS share should be handy.
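As a minimal sketch (the hostnames and the exported path are only placeholders), exporting the working directory from node 0 and mounting it at the same path on the other nodes would look something like this:

    # on node 0 (the NFS server): add the export to /etc/exports
    /home/andrei    node1(rw,sync,no_subtree_check)
    # re-export after editing the file
    exportfs -ra

    # on node 1 (and every other node): mount it at the same path
    mount -t nfs node0:/home/andrei /home/andrei

That way every mdrun process sees the same files under the same path, so the working directory on node 1 is effectively your current working directory on node 0.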
On 6/13/2007 5:13 PM, Andrei Neamtu wrote:
Dear Mark,
Thank you for your reply!
I thought that accessibility might be the problem. I can't figure out
how to set the working directory of, say, node 1 to my current
working directory on node 0.
I set up node 1 to be able to access node 0 through ssh without a
password, but the problem remains, so I'm stuck here.
Any idea where I should start digging to solve what is most likely an
accessibility problem?
In short, here is how I did the GROMACS installation:
1. Installed GROMACS (serial and parallel versions) on every node.
2. Set up node 0 to be able to access all the other nodes through ssh
without a password.
3. Generated the nodes file for LAM booting on node 0 (roughly as sketched below).
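For reference, the nodes file and the boot step look roughly like this (the hostnames and CPU counts are just placeholders for my machines):

    # nodesfile: one line per machine, node 0 first
    node0 cpu=2
    node1 cpu=2
    node2 cpu=2
    node3 cpu=2

    # boot the LAM runtime across all listed nodes
    lamboot -v nodesfile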
I must say that parallel simulations run fine on the entire cluster (I
mean a single system split across all the nodes). The problem only
appears when I try to use -multi to simulate several copies of the
system across the nodes.
Do I have to use a file-sharing system or something like that?
This is the first time I have tried to use the -multi option.
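To be concrete, the kind of invocation I am attempting is roughly the following (just a sketch; mdrun_mpi is the name of my MPI-enabled binary, and the numbered topol*.tpr files are how I understood -multi picks up one input per simulation, so please correct me if I got that convention wrong):

    # one .tpr per copy of the system, prepared beforehand with grompp:
    #   topol0.tpr  topol1.tpr  topol2.tpr  topol3.tpr
    mpirun -np 4 mdrun_mpi -multi -s topol.tpr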
Thanks a lot,
Andrei