This works well until you use a system that permits job suspension. Then
-maxh gets double-crossed... :-)
Mark
On Apr 25, 2013 3:41 PM, "Richard Broadbent" <richard.broadben...@imperial.ac.uk> wrote:
> I generally build a tpr for the whole simulation then submit one job using
> a command such as
The salvation is to use:
mdrun -cpi state.cpt
Dr. Vitaly Chaban
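In practice, and assuming GROMACS 4.5 or later (where -append is available) with the file names used elsewhere in this thread as placeholders, restarting the interrupted run would look something like:

# pick up the run from the last checkpoint and append to the existing output files
mpirun -n 8 mdrun -s md_test.tpr -deffnm md_test -cpi md_test.cpt -append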
On Thu, Apr 25, 2013 at 2:37 PM, Justin Lemkul wrote:
>
>
>> Can anybody tell me how to split the script such that I will get the full
>> 20 ns simulation?
>>
>>
> You specified a given time limit for the job, and the run exceeded it.
I generally build a tpr for the whole simulation then submit one job
using a command such as:
mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh
${WALL_TIME_IN_HOURS}
copy all the files back at the end of the script if necessary, then
resubmit it (sending out all the files again if needed).
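A minimal sketch of such a job script, assuming a PBS-style queue and made-up walltime/variable values (none of this is taken verbatim from the thread), might be:

#!/bin/bash
#PBS -l walltime=24:00:00
# Stop mdrun a little before the queue limit so a final checkpoint gets written;
# -cpi is harmless on the first pass (no checkpoint exists yet) and resumes
# from the checkpoint on every later pass.
mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -cpi ${NAME}.cpt -maxh 23.5
# then resubmit this same script (e.g. with qsub) until the tpr's step count is reached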
You can split the simulation into several parts (for example 5 ns each);
every time one part finishes, you extend the run by adding more time.
http://www.gromacs.org/Documentation/How-tos/Extending_Simulations?highlight=extend
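For a 4.x installation, where tpbconv does the extending described in that how-to, one 5 ns round might look roughly like this (the file names are placeholders):

# extend the run length stored in the tpr by 5000 ps (5 ns)
tpbconv -s md_part1.tpr -extend 5000 -o md_part2.tpr
# continue from the last checkpoint using the extended tpr
mpirun -n 8 mdrun -s md_part2.tpr -cpi md_part1.cpt -deffnm md_part2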
My cluster uses a different "script system" than yours, so I can't help
with the specifics of the submission script.
On 4/25/13 8:28 AM, Sainitin Donakonda wrote:
Hey all,
I recently ran a 20 ns simulation of a protein-ligand complex on a cluster. I used
the following script to run the simulation:
grompp -f MD.mdp -c npt.gro -t npt.cpt -p topol.top -n index.ndx -o
md_test.tpr
mpirun -n 8 mdrun -s md_test.tpr -deffnm md_test -np 8
I saved this as MD.sh and then submitted it.
Subject: Re: [gmx-users] Problem with gromacs-4.6 compilation on Debian
Justin, Szilárd, thanks for the suggestions!
It will be easy for me to find a better card :)
By the way, in other topics some developers told me that the Plumed
plugin for metadynamics would be released in GROMACS 4.6. I've checked
for it in the manual but could not find any mention of it. Has it been included?
Let me add two more things.
Note that we *always* compare performance and acceleration to our
super-tuned state-of-the-art CPU code, which I can confidently say is
among the fastest if not the fastest to date, and never to some slow (CPU)
implementation. Therefore, while other codes might be
On 1/24/13 9:23 AM, James Starlight wrote:
oh, that was simply solved by upgrading g++ :)
The only problem which remains is the missing support for my GPU :(
This time I've tried to start a simulation on just a 2-core CPU + GeForce 9500.
From mdrun I obtained:
NOTE: Error occurred dur
>> Upgrading g++ is a perfectly fine solution if you get
>> a new enough version of the standard C++ library by doing so.
>>
>> Just wanted to clarify this for users bumping into this issue later.
>>
>> Cheers,
>>
>> --
>> Szilárd
Szilárd,
thanks for the suggestion. Indeed, I've noticed that the versions of all
packages in classic Debian are older compared to Debian Mint (even
though I've done a maximum upgrade of the system via apt). Tomorrow
I'll try to install the newest gcc and glibc and re-install GROMACS.
James
2013/1/22 Szilárd Páll:
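If the newer gcc ends up installed next to the system one, one way to make sure the GROMACS 4.6 build really uses it is to point cmake at it explicitly (the version suffix below is only an assumed example):

# assumed compiler names; adjust to whatever the new packages install
cmake .. -DCMAKE_C_COMPILER=gcc-4.7 -DCMAKE_CXX_COMPILER=g++-4.7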
On Tue, Jan 22, 2013 at 12:45 PM, James Starlight wrote:
> Szilárd,
>
> Today I've tried to re-install CUDA + GROMACS and do apt-get
> dist-upgrade, but the same error was obtained during the GROMACS
>
I don't see how the distribution upgrade relates to the issues you
had (except if it also updated gcc or glibc along the way).
Szilárd,
Today I've tried to re-install CUDA + GROMACS and do apt-get
dist-upgrade, but the same error was obtained during the GROMACS
compilation. By the way, where could I provide the --add-needed option?
James
2013/1/21 Szilárd Páll :
Hi,
Not sure why, but it looks like libcudart.so is linked against a glibc that is
not compatible with what you have (perhaps much newer)?
Alternatively you could try adding "--add-needed" to the linker flags, but
I doubt it will help.
Cheers,
--
Szilárd
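For a cmake-based 4.6 build, the usual place to supply such a flag would be the linker-flag cache variables; this is only a sketch of where the option goes, not a verified fix:

# pass --add-needed through to the GNU linker for executables and shared libraries
cmake .. -DCMAKE_EXE_LINKER_FLAGS="-Wl,--add-needed" -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--add-needed"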
On Mon, Jan 21, 2013 at 5:09 PM, James Starlight wrote:
Dear All,
I am trying to install Gromacs 4.6 with -DGMX_OPENMM=ON
I am getting the following errors in make install-mdrun
../mdlib/libmd_openmm.so.6: undefined reference to `omp_get_thread_num'
../mdlib/libmd_openmm.so.6: undefined reference to `omp_get_num_threads'
../mdlib/libmd_openmm.so.6: u
Dear Szilárd
I have downloaded Gromacs 4.6 from git, but I saw that the implicit
solvent feature is still not supported:
Features currently not supported by the new GPU and SSE kernels:
Implicit solvent (but this will still be supported on the GPU through OpenMM)
But I need the implicit solvent feature.
Dear Jesmin,
On Tue, Aug 21, 2012 at 7:21 PM, jesmin jahan wrote:
> Dear All,
>
> I have installed gromacs 4.5.3 on a cluster. I downloaded the
> gromacs-4.5-GPU-beta2-X86_64 binaries and followed the following
> instructions:
Those binaries are extremely outdated. Please compile Gromacs from the source.
Dear All,
I have installed gromacs 4.5.3 on a cluster. I downloaded the
gromacs-4.5-GPU-beta2-X86_64 binaries and followed the following
instructions:
"
* INSTALLING FROM BINARY DISTRIBUTION:
0. Prerequisites:
- OpenMM (included in the binary release)
- NVIDIA CUDA libraries (version >=3
Hello,
Thanks for your advice. I used "make distclean" before configure, but an error
appeared in the same place.
cd gromacs-4.5.5
make distclean
./configure --disable-threads
make
.libs/xlate.o:xlate.c:(.text+0xa9b): undefined reference to `_put_symtab'
.libs/xlate.o:xlate.c:(.text+0xb3a): u
Error 2
make[1]: Leaving directory `/home/Sylwia/gromacs-4.5.5/src'
Makefile:347: recipe for target `all-recursive' failed
make: *** [all-recursive] Error 1
What have I done now?
Sylwia Chmielewska
----- Original message -----
From: "Mark Abraham"
To: "Discussion list for GROMACS users"
Hello,
I saved the GROMACS program folder in the folder cygwin/home/Sylwia,
then:
$ cd gromacs-4.5.5
./configure --enable-sse --enable-float
No errors occurred; only at the end:
configure: WARNING: unrecognized options: --enable-sse
make
No errors occurred; only at the end:
numa_malloc.c:117:7
Dear All,
I installed gromacs-4.5.3 using Cygwin on Windows 7, following the instructions
on http://www.gromacs.org/Downloads/Installation_Instructions/Cygwin_HOWTO.
However, after installation, when I tried to run gromacs, I couldn't find the
"share" folder in D:/cygwin/usr/local/gromacs. This
Hi Jorge,
I'd appreciate it if you can send me (biswas...@gmail.com) the following files
if the problem still persists:
1. output.mdrun_em
2. qm_cpmd.log
Also please let me know which version of CPMD you are using.
best,
pb.
On Thu, Nov 19, 2009 at 1:59 PM, wrote:
> Dear all,
>
> I'm running som
Dear all,
I'm running some simulations using Gromacs/CPMD, but it does not continue
while reading the QMCONTINUE file. See below:
EXTERNAL ENERGY= 5.867019924829098E-002 AU
REAL TOTAL ENERGY = -97.3517503273190 AU
ATOM   COORDINATES   GRADIENTS (-FORCES)
1 C
Hi Anna,
apart from what Mark already mentioned, you might want to investigate
if the network is the bottleneck. What kind of network do you use? If
it is Gbit-Ethernet, you could directly back-to-back connect two
nodes and see if the scaling on 4 CPUs thereby gets significantly
better. F
Dear all,
we installed GROMACS v. 3.3.2 on a cluster of 20 dual-processor nodes with
CentOS 4 x86_64, following instructions on the
GROMACS web site. We compiled it in single precision, parallel version using
the --enable-mpi option (LAM MPI was already present on
the cluster). After the fina