Hello,

we have built a cluster whose nodes each contain a dual-core Intel(R) Xeon(R) 
CPU E3110 @ 3.00GHz and 16 GB of memory. The nodes are connected through a 
Dell PowerConnect switch, and each node has a Gigabit Ethernet card.

I benchmarked a system of 7200 atoms on 4 cores of one node, on 8 cores of one 
node, and on 16 cores across two nodes. Within a single node, performance 
improves as I add cores.
The problem is that when moving from one node to two, performance decreases 
dramatically: a run that finishes in less than 3 hours on one node takes 
almost two days on two!
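For reference, the runs were launched roughly as follows (a sketch only; the `.tpr` file name, the hostfiles, the `mdrun_mpi` binary name, and the core counts are placeholders for our local setup):

```shell
# Hypothetical benchmark invocations -- file names and hostfiles are
# placeholders, not our actual configuration.

# 8 cores on a single node:
mpirun -np 8 -hostfile single_node.txt mdrun_mpi -s topol.tpr -deffnm bench_08

# 16 cores across two nodes:
mpirun -np 16 -hostfile two_nodes.txt mdrun_mpi -s topol.tpr -deffnm bench_16
```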

I compiled GROMACS with the --enable-mpi option. I have also read previous 
archive posts from Mr Kurtzner, but from what I saw they focus on errors in 
GROMACS 4 or on problems that earlier versions of GROMACS had. I get no 
errors, just low performance.

Is there any option I must enable to achieve better performance on more than 
one node? Or, in your experience, could the switch we use be the problem? Or 
is there something we need to configure on the nodes themselves?

Thank you in advance,
Nikos

_______________________________________________
gmx-users mailing list    gmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php
