Andrei Neamtu wrote:
Dear Mark,
Thank you for your reply!
I thought that maybe accessibility was the problem. I can't figure out
how to set the working directory of, say, node 1 to my current working
directory on node 0.
I set up node 1 to be able to access node 0 through ssh without a
password, but the problem remains. So I'm stuck here.
Any idea where to start digging to solve what is most probably an
accessibility problem?
Like I said, you need to be able to see the run input file on a file
system accessible to the other nodes.
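For example, a minimal sketch assuming NFS is available and that the run
directory on node 0 is /home/andrei (both the path and the hostnames are
placeholders for your setup):

    # on node 0, export the directory in /etc/exports:
    /home/andrei  node1(rw,sync)

    # on node 0, re-export:
    exportfs -ra

    # on node 1, mount it at the same path:
    mount node0:/home/andrei /home/andrei

With the same path mounted everywhere, each process's working directory
resolves to the same files.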
In short, here is how I made the GROMACS installation:
1. I installed GROMACS (serial and parallel version) on every node
2. I set up node 0 to be able to access all other nodes through ssh
without a password
3. generated on node 0 the nodesfile for LAM booting (sketched below)
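For reference, a sketch of step 3 (the hostnames and cpu counts are
placeholders):

    # nodesfile on node 0: one host per line, cpu=N for multi-CPU nodes
    node0 cpu=2
    node1 cpu=2
    node2 cpu=2

    # boot the LAM run-time environment across those hosts
    lamboot -v nodesfile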
I must say that parallel simulations run fine on the entire cluster (I
mean a single system split across all nodes). The only problem appears
when I try using -multi to simulate several copies of the system across
the nodes.
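Concretely, something along these lines (a sketch only; the exact flags
depend on the GROMACS 3.x version, and the file names are placeholders):

    # works: one system split across 4 processes
    mpirun -np 4 mdrun_mpi -np 4 -s topol.tpr

    # fails: -multi is supposed to run one independent copy per process,
    # each reading its own per-process input file
    mpirun -np 4 mdrun_mpi -np 4 -multi -s topol.tpr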
This is consistent with a non-multi parallel program in which the file
I/O is done only on the head node and the necessary information is
propagated over MPI. I'm not sure whether this is the case in GROMACS,
however.
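The pattern I mean is the usual one where rank 0 alone touches the file
system and everything else arrives by broadcast. A minimal illustrative
sketch in C (not GROMACS's actual code; the file name and data layout
are invented):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, n = 0, i;
        double *data;
        FILE *fp = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* only the head node reads the input file */
            fp = fopen("input.dat", "r");
            if (fp == NULL || fscanf(fp, "%d", &n) != 1)
                MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* propagate the size, then the payload, to all ranks */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        data = malloc(n * sizeof(double));
        if (rank == 0) {
            for (i = 0; i < n; i++)
                fscanf(fp, "%lf", &data[i]);
            fclose(fp);
        }
        MPI_Bcast(data, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* ... every rank now computes with the same data ... */

        free(data);
        MPI_Finalize();
        return 0;
    }

If the -multi code path instead assumes each process can open its own
input file from its own working directory, that would explain why it
fails when only node 0 can see the files.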
Mark