[gmx-users] rdf range

2011-01-29 Thread Thomas Koller
Hello!

One question:

I want to plot the rdf functions. How can I control how far out in distance r 
the rdf is computed and plotted? I want to see the rdfs at long distance r. 
Where can I adjust that?
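For the archive: the r-range shown in a plot can also be adjusted after the fact by post-processing g_rdf's output. A minimal sketch in Python, assuming the standard two-column .xvg layout (metadata lines start with # or @); the function names here are illustrative, not part of GROMACS:

```python
def parse_xvg(text):
    """Parse two-column .xvg data (as written by g_rdf),
    skipping the # and @ comment/metadata lines."""
    r, g = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "#@":
            continue
        cols = line.split()
        r.append(float(cols[0]))
        g.append(float(cols[1]))
    return r, g

def truncate(r, g, rmax):
    """Keep only points with r <= rmax, e.g. before plotting."""
    kept = [(ri, gi) for ri, gi in zip(r, g) if ri <= rmax]
    return [p[0] for p in kept], [p[1] for p in kept]
```

Note that under periodic boundary conditions the RDF is only meaningful out to about half the shortest box vector, so seeing g(r) at larger r requires a larger simulation box, not just a wider plot range.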

Thanks!
Thomas
-- 
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Simulation time losses with REMD

2011-01-29 Thread Mark Abraham

On 28/01/2011 4:46 PM, Mark Abraham wrote:

Hi,

I compared the .log file time accounting for the same .tpr file run alone 
in serial or as part of an REMD simulation (with each replica on a 
single processor). It ran about 5-10% slower in the latter. The effect 
was a bit larger when comparing the same .tpr on 8 processors with 
REMD with 8 processors per replica. The effect seems fairly 
independent of whether I compare the lowest or highest replica.


OK, I found the issue by binary-searching the code for the offending 
line. It's in compute_globals() in src/kernel/md.c. The call to 
gmx_sum_sim consumes all the extra time. This code handles the 
synchronization needed for possible checkpointing.


    if (MULTISIM(cr) && bInterSimGS)
    {
        if (MASTER(cr))
        {
            /* Communicate the signals between the simulations */
            gmx_sum_sim(eglsNR, gs_buf, cr->ms);
        }
        /* Communicate the signals from the master to the others */
        gmx_bcast(eglsNR*sizeof(gs_buf[0]), gs_buf, cr);
    }

This eventually calls

void gmx_sumf_comm(int nr, float r[], MPI_Comm mpi_comm)
{
#if defined(MPI_IN_PLACE_EXISTS) || defined(GMX_THREADS)
    MPI_Allreduce(MPI_IN_PLACE, r, nr, MPI_FLOAT, MPI_SUM, mpi_comm);
#else
    /* this function is only used in code that is not performance critical
       (during setup, when comm_rec is not the appropriate communication
       structure), so this isn't as bad as it looks. */
    float *buf;
    int i;

    snew(buf, nr);
    MPI_Allreduce(r, buf, nr, MPI_FLOAT, MPI_SUM, mpi_comm);
    for (i = 0; i < nr; i++)
    {
        r[i] = buf[i];
    }
    sfree(buf);
#endif
}

Clearly the comment is out of date. My nstlist=5, repl_ex_nst=2500 and 
nstcalcenergy=-1, so that triggers gs.nstms=5 and so bInterSimGS is TRUE 
every 5 steps. I'm not sure whether the problem is with nstlist, or the 
multi-simulation checkpointing engineering, or what.
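The numbers above can be put in perspective with a rough back-of-the-envelope sketch (plain Python; the function and variable names are illustrative, not GROMACS's): with the signal interval at 5 steps and exchanges every 2500 steps, the inter-simulation collective fires 500 times for every actual exchange attempt.

```python
def sync_counts(nsteps, nstsignal, repl_ex_nst):
    """Count inter-simulation signal reductions (gmx_sum_sim + gmx_bcast)
    versus actual replica-exchange attempts over a run of nsteps."""
    signals = nsteps // nstsignal
    exchanges = nsteps // repl_ex_nst
    return signals, exchanges

# Settings from the post: signal every 5 steps, exchange every 2500 steps.
signals, exchanges = sync_counts(nsteps=1_000_000, nstsignal=5, repl_ex_nst=2500)
# signals == 200000, exchanges == 400: each exchange attempt is accompanied
# by 500 collective synchronisations across all replicas.
```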


Mark


[gmx-users] Missing Amino Acids

2011-01-29 Thread simon sham
Hi,
1. I have a protein that is missing both the first and last amino acids in the 
sequence. Do you know of any free Linux software that can insert these missing 
amino acids?
2. Since only two amino acids are missing, one at each end, can I simply ignore 
them in simulations?

Thanks for your insight in advance.

Best,

Simon




Re: [gmx-users] Missing Amino Acids

2011-01-29 Thread Justin A. Lemkul



simon sham wrote:

Hi,
1. I have a protein that is missing both the first and last amino acids 
in the sequence. Do you know of any free Linux software that can insert 
these missing amino acids?


A number of possibilities are listed here:

http://www.gromacs.org/Documentation/File_Formats/Coordinate_File#Sources

2. Since only two amino acids are missing, one at each end, can I simply 
ignore them in simulations?




If they're not functionally relevant or important to your aims, then probably. 
But no one on this list can answer that for you.


-Justin


Thanks for your insight in advance.

Best,

Simon




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun with append option

2011-01-29 Thread Sai Pooja
I would be happy to supply more information, if someone could please look
into this. Otherwise I will have to switch to storing every file and then
concatenating them, which seems like a rather roundabout way of doing it.

On Fri, Jan 28, 2011 at 4:37 PM, Sai Pooja  wrote:

> This is the command:
>
> nbs submit -command "(/usr/local/gromacs/4.5.1/bin/mdrun_mpi -s rex_3.tpr
> -e rex_3 -c after_rex_3 -cpi restart3 -cpo restart3 -append -g rexlog3
> -x rextraj3);" -nproc 1 -name "GENHAM-DIHEDRAL-3" -mail start end
>
> Pooja
>
>   On Fri, Jan 28, 2011 at 4:20 PM, Mark Abraham 
> wrote:
>
>>  On 29/01/2011 3:56 AM, Sai Pooja wrote:
>>
>>> Hi,
>>> I am using tpbconv and mdrun to extend a simulation. I use it with the
>>> append option but the files still get overwritten or erased. Can someone
>>> help me in this regard?
>>> Pooja
>>> Commands (in python)
>>> cmd = '(%s/tpbconv -extend %f -s rex_%d.tpr -o rex_%d.tpr)'
>>> %(GROMPATH,dtstep,i,i)
>>>  os.system(cmd)
>>>  time.sleep(1)
>>>  cmd  = 'nbs submit -command "'
>>>  cmd += '(%s/mdrun_mpi -noh -noversion -s rex_%d.tpr -e rex_%d -c
>>> after_rex_%d -cpi restart%d -cpo restart%d -append -g rexlog%d -x rextraj%d
>>> >/dev/null); ' %(GROMPATH,i,i,i,i,i,i,i)
>>>  cmd += '" '
>>>  cmd += '-nproc 1 '
>>>  cmd += '-name "GENHAM-DIHEDRAL-%d" '%(i)
>>>  cmd += '-mail start end '
>>>  cmd += '-elapsed_limit 16h >> rexid'
>>>  os.system(cmd)
>>>
>>
>> More useful for diagnostic and record-preservation purposes is to
>> construct the cmd string and print it to stdout (or something).
>>
>> At the moment it is far from clear that your -cpi file exists for the new
>> run.
>>
>> Mark
>>
>
>
>
>  --
> Quaerendo Invenietis-Seek and you shall discover.
>



-- 
Quaerendo Invenietis-Seek and you shall discover.

Re: [gmx-users] mdrun with append option

2011-01-29 Thread Justin A. Lemkul



Sai Pooja wrote:
I would be happy to supply more information, if someone could please 
look into this. Otherwise I will have to switch to storing every file 
and then concatenating them, which seems like a rather roundabout 
way of doing it.




In theory, there's nothing wrong with what you've done.  What version of Gromacs 
is this?  Are there any error messages written to the .log file or stdout?  In 
general, when something goes wrong, Gromacs is fairly vocal about it.


-Justin

On Fri, Jan 28, 2011 at 4:37 PM, Sai Pooja wrote:


This is the command:
 
nbs submit -command "(/usr/local/gromacs/4.5.1/bin/mdrun_mpi -s
rex_3.tpr -e rex_3 -c after_rex_3 -cpi restart3 -cpo restart3 -append
-g rexlog3 -x rextraj3);" -nproc 1 -name "GENHAM-DIHEDRAL-3"
-mail start end
 
Pooja


On Fri, Jan 28, 2011 at 4:20 PM, Mark Abraham wrote:

On 29/01/2011 3:56 AM, Sai Pooja wrote:

Hi,
I am using tpbconv and mdrun to extend a simulation. I use
it with the append option but the files still get
overwritten or erased. Can someone help me in this regard?
Pooja
Commands (in python)
cmd = '(%s/tpbconv -extend %f -s rex_%d.tpr -o rex_%d.tpr)'
%(GROMPATH,dtstep,i,i)
 os.system(cmd)
 time.sleep(1)
 cmd  = 'nbs submit -command "'
 cmd += '(%s/mdrun_mpi -noh -noversion -s rex_%d.tpr -e
rex_%d -c after_rex_%d -cpi restart%d -cpo restart%d -append
-g rexlog%d -x rextraj%d >/dev/null); '
%(GROMPATH,i,i,i,i,i,i,i)
 cmd += '" '
 cmd += '-nproc 1 '
 cmd += '-name "GENHAM-DIHEDRAL-%d" '%(i)
 cmd += '-mail start end '
 cmd += '-elapsed_limit 16h >> rexid'
 os.system(cmd)


More useful for diagnostic and record-preservation purposes is
to construct the cmd string and print it to stdout (or something).

At the moment it is far from clear that your -cpi file exists
for the new run.

Mark




-- 
Quaerendo Invenietis-Seek and you shall discover.





--
Quaerendo Invenietis-Seek and you shall discover.



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun with append option

2011-01-29 Thread Justin A. Lemkul


Disregard the version question; I see it now in the command string.  You should 
still check for errors, and consider using version 4.5.3 to see if the issue 
persists.


-Justin

Justin A. Lemkul wrote:



Sai Pooja wrote:
I would be happy to supply more information, if someone could please 
look into this. Otherwise I will have to switch to storing every file 
and then concatenating them, which seems like a rather roundabout 
way of doing it.




In theory, there's nothing wrong with what you've done.  What version of 
Gromacs is this?  Are there any error messages written to the .log file 
or stdout?  In general, when something goes wrong, Gromacs is fairly 
vocal about it.


-Justin

On Fri, Jan 28, 2011 at 4:37 PM, Sai Pooja wrote:


This is the command:
 nbs submit -command "(/usr/local/gromacs/4.5.1/bin/mdrun_mpi -s
rex_3.tpr -e rex_3 -c after_rex_3 -cpi restart3 -cpo restart3 -append
-g rexlog3 -x rextraj3);" -nproc 1 -name "GENHAM-DIHEDRAL-3"
-mail start end
 Pooja

On Fri, Jan 28, 2011 at 4:20 PM, Mark Abraham wrote:

On 29/01/2011 3:56 AM, Sai Pooja wrote:

Hi,
I am using tpbconv and mdrun to extend a simulation. I use
it with the append option but the files still get
overwritten or erased. Can someone help me in this regard?
Pooja
Commands (in python)
cmd = '(%s/tpbconv -extend %f -s rex_%d.tpr -o rex_%d.tpr)'
%(GROMPATH,dtstep,i,i)
 os.system(cmd)
 time.sleep(1)
 cmd  = 'nbs submit -command "'
 cmd += '(%s/mdrun_mpi -noh -noversion -s rex_%d.tpr -e
rex_%d -c after_rex_%d -cpi restart%d -cpo restart%d -append
-g rexlog%d -x rextraj%d >/dev/null); '
%(GROMPATH,i,i,i,i,i,i,i)
 cmd += '" '
 cmd += '-nproc 1 '
 cmd += '-name "GENHAM-DIHEDRAL-%d" '%(i)
 cmd += '-mail start end '
 cmd += '-elapsed_limit 16h >> rexid'
 os.system(cmd)


More useful for diagnostic and record-preservation purposes is
to construct the cmd string and print it to stdout (or 
something).


At the moment it is far from clear that your -cpi file exists
for the new run.

Mark




-- Quaerendo Invenietis-Seek and you shall discover.




--
Quaerendo Invenietis-Seek and you shall discover.





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mdrun with append option

2011-01-29 Thread Mark Abraham

On 30/01/2011 10:39 AM, Sai Pooja wrote:
I would be happy to supply more information, if someone could please 
look into this. Otherwise I will have to switch to storing every file 
and then concatenating them, which seems like a rather roundabout 
way of doing it.


As I suggested a few emails ago, are you sure that the -cpi file exists? If 
your numerical suffixes are indexing restarts, then unless you've done 
some manual copying that you haven't told us about, it won't. Your 
filename scheme seems a bit contorted, as if you're trying to do by hand 
the work that GROMACS 4.5.x will just do for you if you let it.


Otherwise, you'll have to do some detective work with gmxcheck on the 
-cpi to see what might be the issue.


In your case, an initial

mdrun -deffnm rex_3

(perhaps save some copies while you're experimenting) and subsequently

tpbconv -extend  -f rex_3 -o rex_3
mdrun -deffnm rex_3 -append

will work and be much simpler than whatever you're trying to do with 
filenames :-)
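Following the earlier suggestion to print each command line before running it, the whole extend-and-append cycle can be dry-run first. A sketch in shell (it only echoes the commands; the base name and extension time are placeholders, and the flags follow the outline above rather than a verified invocation):

```shell
#!/bin/sh
# Dry-run of the extend/append cycle: log each command line before
# (optionally) executing it, so the record shows exactly what ran.
DEFFNM=rex_3       # placeholder base name
EXTEND_PS=1000     # placeholder: how far to extend, in ps

run() {
    echo "+ $*"    # print the command; swap in "$@" here to execute it
}

run tpbconv -extend "$EXTEND_PS" -f "$DEFFNM" -o "$DEFFNM"
run mdrun -deffnm "$DEFFNM" -append
```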


Mark



On Fri, Jan 28, 2011 at 4:37 PM, Sai Pooja wrote:


This is the command:
nbs submit -command "(/usr/local/gromacs/4.5.1/bin/mdrun_mpi -s
rex_3.tpr -e rex_3 -c after_rex_3 -cpi restart3 -cpo restart3 -append
-g rexlog3 -x rextraj3);" -nproc 1 -name "GENHAM-DIHEDRAL-3"
-mail start end
Pooja

On Fri, Jan 28, 2011 at 4:20 PM, Mark Abraham wrote:

On 29/01/2011 3:56 AM, Sai Pooja wrote:

Hi,
I am using tpbconv and mdrun to extend a simulation. I use
it with the append option but the files still get
overwritten or erased. Can someone help me in this regard?
Pooja
Commands (in python)
cmd = '(%s/tpbconv -extend %f -s rex_%d.tpr -o
rex_%d.tpr)' %(GROMPATH,dtstep,i,i)
 os.system(cmd)
 time.sleep(1)
 cmd  = 'nbs submit -command "'
 cmd += '(%s/mdrun_mpi -noh -noversion -s rex_%d.tpr
-e rex_%d -c after_rex_%d -cpi restart%d -cpo restart%d
-append -g rexlog%d -x rextraj%d >/dev/null); '
%(GROMPATH,i,i,i,i,i,i,i)
 cmd += '" '
 cmd += '-nproc 1 '
 cmd += '-name "GENHAM-DIHEDRAL-%d" '%(i)
 cmd += '-mail start end '
 cmd += '-elapsed_limit 16h >> rexid'
 os.system(cmd)


More useful for diagnostic and record-preservation purposes is
to construct the cmd string and print it to stdout (or something).

At the moment it is far from clear that your -cpi file exists
for the new run.

Mark




-- 
Quaerendo Invenietis-Seek and you shall discover.





--
Quaerendo Invenietis-Seek and you shall discover.



RE: [gmx-users] Simulation time losses with REMD

2011-01-29 Thread martyn.winn


> -Original Message-
> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
> boun...@gromacs.org] On Behalf Of Mark Abraham
> Sent: 29 January 2011 08:24
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] Simulation time losses with REMD
> 
> On 28/01/2011 4:46 PM, Mark Abraham wrote:
> > Hi,
> >
> > I compared the .log file time accounting for the same .tpr file run alone
> > in serial or as part of an REMD simulation (with each replica on a
> > single processor). It ran about 5-10% slower in the latter. The effect
> > was a bit larger when comparing the same .tpr on 8 processors with
> > REMD with 8 processors per replica. The effect seems fairly
> > independent of whether I compare the lowest or highest replica.
> 
> OK I found the issue by binary-searching the code looking for the
> offending line. It's in compute_globals() in src/kernel/md.c. The call
> to gmx_sum_sim consumes all the extra time. This code is taking care of
> synchronization for possibly doing checkpointing.
> 
>  if (MULTISIM(cr) && bInterSimGS)
>  {
>  if (MASTER(cr))
>  {
>  /* Communicate the signals between the
> simulations */
>  gmx_sum_sim(eglsNR,gs_buf,cr->ms);
>  }
>  /* Communicate the signals from the master to the
> others */
>  gmx_bcast(eglsNR*sizeof(gs_buf[0]),gs_buf,cr);
>  }
> 
> This eventually calls
> 
> void gmx_sumf_comm(int nr,float r[],MPI_Comm mpi_comm)
> {
> #if defined(MPI_IN_PLACE_EXISTS) || defined(GMX_THREADS)
>  MPI_Allreduce(MPI_IN_PLACE,r,nr,MPI_FLOAT,MPI_SUM,mpi_comm);
> #else
>  /* this function is only used in code that is not performance
> critical,
> (during setup, when comm_rec is not the appropriate
> communication
> structure), so this isn't as bad as it looks. */
>  float *buf;
>  int i;
> 
>  snew(buf, nr);
>  MPI_Allreduce(r,buf,nr,MPI_FLOAT,MPI_SUM,mpi_comm);
>  for(i=0; i<nr; i++)
>  r[i] = buf[i];
>  sfree(buf);
> #endif
> }
> 
> Clearly the comment is out of date. My nstlist=5, repl_ex_nst=2500 and
> nstcalcenergy=-1, so that triggers gs.nstms=5 and so bInterSimGS is
> TRUE
> every 5 steps. I'm not sure whether the problem is with nstlist, or the
> multi-simulation checkpointing engineering, or what.
> 
> Mark

So are you saying that this code itself is slow (and called frequently), or 
that this is showing the latency in synchronising replicas? If the latter, then 
presumably if you comment this out (or adjust nstlist or whatever), it 
will just defer the latency to the REMD exchange call itself?
(I'll check my own example in due course, but our systems happen to be down 
this weekend.)

Martyn



Re: [gmx-users] Simulation time losses with REMD

2011-01-29 Thread Mark Abraham

On 30/01/2011 10:26 AM, martyn.w...@stfc.ac.uk wrote:



-Original Message-
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
boun...@gromacs.org] On Behalf Of Mark Abraham
Sent: 29 January 2011 08:24
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Simulation time losses with REMD

On 28/01/2011 4:46 PM, Mark Abraham wrote:

Hi,

I compared the .log file time accounting for the same .tpr file run alone
in serial or as part of an REMD simulation (with each replica on a
single processor). It ran about 5-10% slower in the latter. The effect
was a bit larger when comparing the same .tpr on 8 processors with
REMD with 8 processors per replica. The effect seems fairly
independent of whether I compare the lowest or highest replica.

OK I found the issue by binary-searching the code looking for the
offending line. It's in compute_globals() in src/kernel/md.c. The call
to gmx_sum_sim consumes all the extra time. This code is taking care of
synchronization for possibly doing checkpointing.

  if (MULTISIM(cr) && bInterSimGS)
  {
  if (MASTER(cr))
  {
  /* Communicate the signals between the
simulations */
  gmx_sum_sim(eglsNR,gs_buf,cr->ms);
  }
  /* Communicate the signals from the master to the
others */
  gmx_bcast(eglsNR*sizeof(gs_buf[0]),gs_buf,cr);
  }

This eventually calls

void gmx_sumf_comm(int nr,float r[],MPI_Comm mpi_comm)
{
#if defined(MPI_IN_PLACE_EXISTS) || defined(GMX_THREADS)
  MPI_Allreduce(MPI_IN_PLACE,r,nr,MPI_FLOAT,MPI_SUM,mpi_comm);
#else
  /* this function is only used in code that is not performance
critical,
 (during setup, when comm_rec is not the appropriate
communication
 structure), so this isn't as bad as it looks. */
  float *buf;
  int i;

  snew(buf, nr);
  MPI_Allreduce(r,buf,nr,MPI_FLOAT,MPI_SUM,mpi_comm);
  for(i=0; i<nr; i++)
  r[i] = buf[i];
  sfree(buf);
#endif
}

So are you saying that this code itself is slow (and called frequently), or 
that this is showing the latency in synchronising replicas? If the latter, then 
presumably if you comment this out (or adjust nstlist or whatever), it 
will just defer the latency to the REMD exchange call itself?
(I'll check my own example in due course, but our systems happen to be down 
this weekend.)


I've already controlled for the REMD cost and latency. The issue is what 
is causing the extra delay.


I've worked out what the issue is, and I'll move this thread to a 
Redmine issue - http://redmine.gromacs.org/issues/691


Mark