On 10/16/12 9:38 AM, venkatesh s wrote:
Respected GROMACS people,
While running mdrun-gpu -v -deffnm nvt I got the following error. I searched the web
forums and the gmx-users mailing list archive but found nothing, so kindly provide a
solution (grompp nvt step i com
Hi Claus,
> be supported in future versions. Yet, in the openmm website,
> the new version of openmm (3.0 that is) is supposed to support both cuda and
> opencl framework alongside gromacs:
> (https://simtk.org/project/xml/downloads.xml?group_id=161)
What do you mean by "alongside gromacs"?
> 1)
Valka ; Discussion list for GROMACS users
Sent: Monday, 25 April 2011, 13:19
Subject: Re: [gmx-users] openmm 3.0, opencl support
Have you installed the
CUDA Toolkit 4.0 ?
I have never tried, just guessed.
lina
On Mon, Apr 25, 2011 at 9:17 AM, Claus Valka wrote:
> Hello,
>
> I'm interested in knowing the current level of development of gromacs support
> for the opencl framework.
>
> I have read the gromacs 4.5.4 manual
Hello,
I'm interested in knowing the current level of development of gromacs support for the
opencl framework.
I have read the gromacs 4.5.4 manual, which says that opencl is going to be supported
in future versions. Yet, on the openmm website, the new version of openmm (3.0, that
is) is supposed to support both cuda and the opencl framework alongside gromacs
(https://simtk.org/project/xml/downloads.xml?group_id=161).
The two tables show the averages generated by the serial
code and by the parallel code (I used, as you suggested,
the -rerun option of mdrun on the CPU to analyze the
energies obtained from the parallel simulation on the
GPU). Could you tell me whether these results could be
correct, and why?
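For reference, a minimal sketch of that rerun step, assuming the run input file is
nvt.tpr and the GPU trajectory is nvt.trr; these file names are placeholders, not the
ones from the original posts:

  # recompute energies on the CPU for the frames produced by the GPU run
  mdrun -s nvt.tpr -rerun nvt.trr -e rerun.edr -g rerun.log
  # print average energy terms from the resulting energy file
  g_energy -f rerun.edr -o rerun_energy.xvg

g_energy then prompts for the terms of interest (Angle, LJ, Coulomb, ...) and reports
their averages, which is where per-term tables like the ones below come from.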
On 14/04/2010 7:10 PM, PACIELLO GIULIA wrote:
Well,
I have used the option -rerun to 'rerun' on CPU the
simulation from the parallel code. As you can see in the
following tables, results are very similar for the non-bonded
terms, but not for the other energy terms (these are
averages obtained from g_energy):
PARALLEL CODE
Angle: 4256
On 14/04/2010 5:18 PM, PACIELLO GIULIA wrote:
I don't know if this is the right way to test that things
converge to the same value, but I ran an energy
minimization on the CPU of the two outputs (parallel and
serial). I obtained these results:
FROM PARALLEL OUTPUT
Potential Energy = -1.6960029e+04
Maximum force = 1.7282695e+03
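A minimal sketch of such a CPU minimization of the GPU output, assuming a
steepest-descent em.mdp and that the GPU run wrote confout.gro (both file names are
assumptions for illustration):

  # build a run input for the minimization on the CPU
  grompp -f em.mdp -c confout.gro -p topol.top -o em.tpr
  # run the energy minimization
  mdrun -deffnm em -v

The final potential energy and maximum force are reported at the end of em.log, which
is the kind of output quoted above.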
On 14/04/2010 4:52 PM, PACIELLO GIULIA wrote:
> Ok...so how could I know the energies among atoms?
Since it's not reported, you can't get breakdowns of energies. That's a
limitation of the GPU implementation. If you want this information, perhaps do
your simulation on the GPU and re-run selected frames on the CPU.
Ok...so how could I know the energies among atoms? And how
can I test whether my parallel code is running correctly?
Thanks,
Giulia
On Wed, 14 Apr 2010 16:43:13 +1000
Mark Abraham wrote:
On 14/04/2010 4:31 PM, PACIELLO GIULIA wrote:
Hi,
thanks a lot for your answer, but I still have some
doubts...
My .gro files are very different, and I'm not sure that the
architecture (CPU / GPU) could influence the result as
much as in my example (the first is the output of the
serial code and the second that of the parallel one).
Hi,
On 04/13/2010 04:23 PM, PACIELLO GIULIA wrote:
Even though I have followed all the instructions reported for the correct
installation of the GPU, CUDA and OpenMM, and even though my .mdp file for the
simulation is written following the features supported in the Gromacs-OpenMM
release, the output of the s
Hi all,
I'm using the OpenMM Application Programming Interface to
run molecular dynamics simulations with Gromacs on an
NVIDIA GTX 295 GPU (the CUDA SDK is installed there for
issuing and managing computations on the GPU).
Alan wrote:
Hi list, does anyone have an example (input pdb, gmx commands and
md.mdp for example) to test gromacs with and without openmm?
The case I use here (with explicit water) didn't show any speed-up
(compared with mpirun -c 2 mdrun_mpi...).
I am using Mac Leopard 10.5.7 with Cuda drivers and groma
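For a like-for-like with/without-OpenMM timing test, a minimal sketch, assuming an
already prepared md.tpr and that the GPU build accepts the usual mdrun file options
(the md_cpu and md_gpu output prefixes are placeholders):

  # CPU reference run, 2 MPI processes
  mpirun -c 2 mdrun_mpi -s md.tpr -deffnm md_cpu
  # the same input through the OpenMM/GPU build
  mdrun-gpu -v -s md.tpr -deffnm md_gpu

Comparing the timing summaries at the end of md_cpu.log and md_gpu.log then shows
whether the GPU path gives any speed-up on that system.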