[gmx-users] scaling of replica exchange

2011-02-22 Thread Valeria Losasso


Dear all,
I am running some tests to start using replica exchange molecular dynamics 
(REMD) on my system in water. The setup is fine (i.e. a single replica runs 
correctly), but I cannot get the parallel REMD run to scale. Details follow:


- the test is on 8 temperatures, so 8 replicas
- Gromacs version 4.5.3
- One replica alone, in 30 minutes with 256 processors, completes 52500 steps. 
8 replicas with 256x8 = 2048 processors complete only 300 (!!) steps each, i.e. 
2400 in total (I arrived at these numbers just by checking for updates of the 
log files: since I am running on a big cluster, I cannot use more than half an 
hour for tests with fewer than 512 processors)

- I am using mpirun with options -np 256 -s md_.tpr -multi 8 -replex 1000 
(the full command line is sketched below)
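
For completeness, the full launch line is something like the one below (the 
mdrun_mpi binary name is only an assumption and depends on the installation; 
with -multi 8 the run input files are presumably md_0.tpr ... md_7.tpr, since 
mdrun appends the replica index to the -s name):

  mpirun -np 256 mdrun_mpi -s md_.tpr -multi 8 -replex 1000   # 256 MPI ranks in total, shared by the 8 replicas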

Do you have any idea?
Thanks in advance

Valeria






Re: [gmx-users] scaling of replica exchange

2011-02-23 Thread Valeria Losasso


Sorry Luca, my mistake in writing. I actually used 2048.
Valeria



On Wed, 23 Feb 2011, Luca wrote:


Hi Valeria,

Dear all,
I am running some tests to start using replica exchange molecular dynamics
(REMD) on my system in water. The setup is fine (i.e. a single replica runs
correctly), but I cannot get the parallel REMD run to scale. Details follow:

- the test is on 8 temperatures, so 8 replicas
- Gromacs version 4.5.3
- One replica alone, in 30 minutes with 256 processors, completes 52500
steps. 8 replicas with 256x8 = 2048 processors complete only 300 (!!) steps
each, i.e. 2400 in total (I arrived at these numbers just by checking for
updates of the log files: since I am running on a big cluster, I cannot use
more than half an hour for tests with fewer than 512 processors)
- I am using mpirun with options -np 256 -s md_.tpr -multi 8 -replex 1000

I think that with these options you are using 256/8 = 32 CPUs for each replica.
If you want to use 256 for each replica, you can try setting the -np option
to 256x8 = 2048, as in the sketch below.
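
For example (the mdrun_mpi name is only illustrative; the -np count given to
mpirun is the total number of MPI ranks, which mdrun divides equally over the
-multi simulations):

  mpirun -np 256  mdrun_mpi -s md_.tpr -multi 8 -replex 1000   # 256/8  = 32 ranks per replica
  mpirun -np 2048 mdrun_mpi -s md_.tpr -multi 8 -replex 1000   # 2048/8 = 256 ranks per replica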

Luca


Do you have any idea?
Thanks in advance

Valeria





Re: [gmx-users] scaling of replica exchange

2011-02-23 Thread Valeria Losasso


Thank you Mark. I found one message from this month on this topic, 
and it contains some small suggestions. I don't think such changes 
can recover a factor of 26, but it could be worth trying to see what 
happens. I will let you know.


Valeria



On Wed, 23 Feb 2011, Mark Abraham wrote:




On 02/23/11, Valeria Losasso  wrote:

  Dear all,
  I am running some tests to start using replica exchange molecular dynamics
  (REMD) on my system in water. The setup is fine (i.e. a single replica runs
  correctly), but I cannot get the parallel REMD run to scale. Details follow:

  - the test is on 8 temperatures, so 8 replicas
  - Gromacs version 4.5.3
  - One replica alone, in 30 minutes with 256 processors, completes 52500
  steps. 8 replicas with 256x8 = 2048 processors complete only 300 (!!) steps
  each, i.e. 2400 in total (I arrived at these numbers just by checking for
  updates of the log files: since I am running on a big cluster, I cannot use
  more than half an hour for tests with fewer than 512 processors)
  - I am using mpirun with options -np 256 -s md_.tpr -multi 8 -replex 1000


There have been two threads on this topic in the last month or so, please check 
the archives. The implementation of
multi-simulations scales poorly. The scaling of replica-exchange itself is not 
great either. I have a working version under
final development that scales much better. Watch this space.

Mark

[gmx-users] g_cluster: optimal cutoff

2010-10-31 Thread Valeria Losasso
Dear all,
for my cluster analysis I am using the g_cluster tool with the gromos method.
The problem is that I have to compare the results for systems of different 
lengths, and of course the outcome of the cluster analysis changes with the 
cutoff chosen. So what would be a good choice in this case?
I was thinking about a few possibilities, namely:
i) choosing, as is quite common in the literature, an arbitrary cutoff (like 
the default 0.1 nm) - but using the same value for different systems would 
probably not be suitable;
ii) looking at the RMSD distribution for each case and choosing the minimum 
between the two peaks - here the cutoff would vary from system to system;
iii) choosing for each system the cutoff that puts 50% of the structures in 
the largest cluster - in this case too the cutoff would differ between systems.
A sketch of how these quantities can be obtained is below.
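
For reference, both the RMSD distribution (option ii) and the cluster sizes 
(option iii) can be written out by g_cluster itself. A minimal sketch, in 
which the file names and the two group selections are only placeholders:

  echo "Backbone Protein" | g_cluster -f traj.xtc -s topol.tpr \
       -method gromos -cutoff 0.1 -dist rmsd-dist.xvg -sz clust-size.xvg -g cluster.log

The -dist output gives the pairwise RMSD distribution (to locate the minimum 
between the two peaks), and -sz gives the cluster sizes (to check whether the 
largest cluster holds 50% of the structures); rerunning with a few -cutoff 
values then lets each criterion be applied per system.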

Any hint?
Thanks a lot,
Valeria



Valeria Losasso
v.losa...@grs-sim.de

German Research School for
Simulation Sciences GmbH
52425 Jülich | Germany

Tel +49 2461 61 8934
Web www.grs-sim.de

Members: Forschungszentrum Jülich GmbH | RWTH Aachen University
Registered in the commercial register of the local court of
Düren (Amtsgericht Düren) under registration number HRB 5268
Registered office: Jülich
Executive board: Prof. Marek Behr Ph.D. | Dr. Norbert Drewes
