On Thu, Apr 25, 2013 at 11:05 PM, XAvier Periole <x.peri...@rug.nl> wrote:
> Thanks for the answer. I'll check gmx4.5.7 and report back.
>
> I am not sure what you mean by GROMACS swapping the coordinates and not
> the ensemble data. The couplings to P and T are not exchanged with it?

The code in src/kernel/repl_ex.c:

static void exchange_state(const gmx_multisim_t *ms, int b, t_state *state)
{
    /* When t_state changes, this code should be updated. */
    int ngtc, nnhpres;
    ngtc    = state->ngtc * state->nhchainlength;
    nnhpres = state->nnhpres * state->nhchainlength;
    exchange_rvecs(ms, b, state->box, DIM);
    exchange_rvecs(ms, b, state->box_rel, DIM);
    exchange_rvecs(ms, b, state->boxv, DIM);
    exchange_reals(ms, b, &(state->veta), 1);
    exchange_reals(ms, b, &(state->vol0), 1);
    exchange_rvecs(ms, b, state->svir_prev, DIM);
    exchange_rvecs(ms, b, state->fvir_prev, DIM);
    exchange_rvecs(ms, b, state->pres_prev, DIM);
    exchange_doubles(ms, b, state->nosehoover_xi, ngtc);
    exchange_doubles(ms, b, state->nosehoover_vxi, ngtc);
    exchange_doubles(ms, b, state->nhpres_xi, nnhpres);
    exchange_doubles(ms, b, state->nhpres_vxi, nnhpres);
    exchange_doubles(ms, b, state->therm_integral, state->ngtc);
    exchange_rvecs(ms, b, state->x, state->natoms);
    exchange_rvecs(ms, b, state->v, state->natoms);
    exchange_rvecs(ms, b, state->sd_X, state->natoms);
}

I mis-stated last night - there *is* exchange of ensemble data, but it is
incomplete. In particular, state->ekinstate is not exchanged. Probably it is
incomplete because the 9-year-old comment about t_state changing is in a
location that nobody changing t_state will see. And serializing a complex C
data structure over MPI is tedious at best. But that is not really an excuse
for the non-modularity GROMACS has in many of its key data structures. We are
working on various workflow and code-structure improvements to fix and
prevent issues like this, but the proliferation of algorithms that ought to
be inter-operable makes the job pretty hard. Other codes seem to exchange the
ensemble label data (e.g. reference temperatures for T-coupling) because they
write trajectories that are continuous with respect to atomic coordinates. I
plan to move REMD in GROMACS to this approach, because it scales better, but
it will not happen any time soon.

> That would explain what I see, but let's see what 4.5.7 has to say first.

Great. It may be that there were other issues in 4.5.3 that exacerbated any
REMD problem.

Mark

> Tks.
>
> On Apr 25, 2013, at 22:40, Mark Abraham <mark.j.abra...@gmail.com> wrote:
>
> > Thanks for the good report. There have been some known issues with the
> > timing of coupling stages relative to the various intervals between
> > GROMACS events for some algorithms. There are a lot of problems fixed in
> > 4.5.7 that are not specific to REMD, but I have a few lingering doubts
> > about whether we should be exchanging (scaled) coupling values along
> > with the coordinates. (Unlike most REMD implementations, GROMACS swaps
> > the coordinates, not the ensemble data.) If you can reproduce those
> > kinds of symptoms in 4.5.7 (whether or not they then crash), then it
> > looks like there may be a problem with the REMD implementation that is
> > perhaps evident only with the kind of large time step Martini takes.
> >
> > Mark
> >
> > On Thu, Apr 25, 2013 at 1:28 PM, XAvier Periole <x.peri...@rug.nl> wrote:
> >
> >> Hi,
> >>
> >> I have recently been using the REMD code in gmx-407 and gmx-453 and got
> >> a few systems crashing for unclear reasons so far. The main tests I
> >> made used gmx407, but it is all reproducible with gmx453. The crashing
> >> was also reproduced (not necessarily at the same time point) on several
> >> architectures.
> >>
> >> The system is made of a pair of proteins in a membrane patch for which
> >> the relative orientation is controlled by non-native
> >> bonds/angles/dihedrals to perform an umbrella sampling. I use the
> >> MARTINI force field, but that might not be relevant here.
> >>
> >> The crashes occur following exchanges that do not seem to happen in
> >> the correct way, and are preceded by pressure-scaling warnings …
> >> indicative of a strong destabilisation of the system and eventual
> >> explosion. Some information seems to be exchanged inaccurately.
> >>
> >> Trying to nail down the problem I got stuck, and maybe someone can
> >> help. I placed a pdf file showing plots of bonded/nonbonded energies,
> >> temperatures, box size etc. … around an exchange that does not lead to
> >> a crash (here: md.chem.rug.nl/~periole/remd-issue.pdf). I plotted
> >> everything at every step, with the temperature colour-coded as
> >> indicated in the first figure.
> >>
> >> From the figure it appears that at the step right after the exchange
> >> there is a huge jump in the potential energy, coming from its LJ(SR)
> >> part. Although there are some small discontinuities in the progression
> >> of the bond and angle energies around the exchange, they seem to be
> >> fine. The temperature and box size seem to respond a few steps later,
> >> while the pressure seems to be affected right away, though potentially
> >> only because the jump in Epot affects the virial and thus the pressure.
> >>
> >> The other potential clue is that the jumps shrink as the pressure
> >> coupling is weakened. A tau_p of 1/2 ps (Berendsen) will lead to a
> >> crash, while 5/10/20 ps won't. Inspection of the time evolution of the
> >> Epot, box, etc. indicates that the magnitude of the jumps is reduced
> >> and the system can handle the problem.
> >>
> >> One additional piece of information since I first posted the problem
> >> (delayed by the file first attached, but now given as a link): the
> >> problem is accentuated when the replicas differ in conformation. I am
> >> looking at the actual differences as you read this email.
> >>
> >> That is as far as I could go. Any suggestion is welcome.
> >>
> >> XAvier.
> >> MD-Group / Univ. of Groningen
> >> The Netherlands
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists