We are running SUSE Linux.

It's best to keep all of this on the mailing list in case it becomes useful to somebody else.


jayant james wrote:
Hi!
Thanks for your mail. I have never used the pull code before, so I am a bit apprehensive, but I do accept your suggestion and am working on that. By the way, what OS are you using? Is it SUSE or Fedora?
Thanks
Jayant James

On Thu, Jun 25, 2009 at 4:31 PM, Chris Neale <chris.ne...@utoronto.ca> wrote:

    Let me re-emphasize that the pull code may be a good solution for you.
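
    A minimal sketch of what that might look like in a gromacs 4.0.x
    .mdp file (the group names and numbers here are placeholders, not a
    tested setup for your system):

        pull            = umbrella   ; harmonic biasing potential
        pull_geometry   = distance
        pull_dim        = Y Y Y
        pull_start      = yes        ; add the initial distance to pull_init1
        pull_ngroups    = 1
        pull_group0     = Protein    ; reference group
        pull_group1     = Ligand     ; pulled group
        pull_init1      = 0.0
        pull_rate1      = 0.0        ; nm/ps; 0 keeps the restraint centre fixed
        pull_k1         = 1000       ; kJ mol^-1 nm^-2
        pull_nstxout    = 1000
        pull_nstfout    = 1000

    (In 3.3.x the pull code is configured through a separate .ppa file
    instead, so check the manual for whichever version you end up using.)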

    As per your request, I currently use the following without any
    problems:

    fftw 3.1.2
    gromacs 3.3.1 or 4.0.4
    openmpi 1.2.6
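
    If it helps, a typical autotools build of that stack looks roughly
    like this; the install prefixes are placeholders, so adjust them for
    your own machine:

        # fftw 3.1.2, single precision (the gromacs default)
        ./configure --prefix=$HOME/opt/fftw-3.1.2 --enable-float
        make && make install

        # openmpi 1.2.6
        ./configure --prefix=$HOME/opt/openmpi-1.2.6
        make && make install

        # gromacs 4.0.4 with an MPI-enabled mdrun
        export CPPFLAGS=-I$HOME/opt/fftw-3.1.2/include
        export LDFLAGS=-L$HOME/opt/fftw-3.1.2/lib
        ./configure --prefix=$HOME/opt/gromacs-4.0.4 --enable-mpi --program-suffix=_mpi
        make mdrun && make install-mdrun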

    Be especially aware that openmpi 1.3.0 and 1.3.1 are broken, as I
    posted here:

    http://lists.gromacs.org/pipermail/gmx-users/2009-March/040844.html


    To be clear, I have never experienced any openmpi-based problems
    with any version of gromacs 4 and openmpi 1.2.6.

    I posted the original notice of our problems with openmpi (1.2.1)
    that were solved by using LAM here:
    http://www.mail-archive.com/gmx-users@gromacs.org/msg08257.html

    Chris

    jayant james wrote:

        Hi!
        Thanks for your mail. Could you please share what OS and
        versions of fftw, openmpi and gmx you are currently using?
        Thank you
        JJ

        On Thu, Jun 25, 2009 at 12:28 PM, <chris.ne...@utoronto.ca> wrote:

           Why not use the pull code? If you have to use distance
           restraints, then try LAM MPI with your pd run. We had similar
           error messages with vanilla .mdp files using openmpi with
           large and complex systems that went away when we switched to
           LAM MPI. Our problems disappeared in gmx 4, so we went back to
           openmpi for all systems, as that mdrun_mpi version is faster
           in our hands.

           I admit, there is no good reason why LAM would work and
           openmpi would not, but I have seen it happen before, so it's
           worth a shot.
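
           If you do stay with distance restraints, the relevant pieces
           are a [ distance_restraints ] section in the topology plus the
           disre options in the .mdp; the atom numbers and distances
           below are placeholders only:

              ; in the topology (.top or .itp), within the moleculetype
              [ distance_restraints ]
              ;  ai   aj  type  index  type'   low    up1    up2   fac
                 10   25     1      0      1   0.0   0.30   0.40   1.0

              ; in the .mdp
              disre        = simple    ; simple distance restraints
              disre_fc     = 1000      ; kJ mol^-1 nm^-2
              nstdisreout  = 100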

           -- original message--

           The energy minimization went on without any problem on 4
           processors, but the problem occurs when I perform the MD run.
           Also, I did not get any error message related to LINCS, etc.
           JJ





        --
        Jayasundar Jayant James
        www.chick.com/reading/tracts/0096/0096_01.asp





--
Jayasundar Jayant James

www.chick.com/reading/tracts/0096/0096_01.asp


_______________________________________________
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php
