Hi Matt,
Yes, you should use the "force-device=yes" option; the patch that was
meant to update the list of compatible GPUs didn't make it into 4.5.5.
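For illustration, a forced-device run could look something like this (a
hedged sketch - the exact -device option string and device id depend on
your setup):
# hypothetical mdrun-gpu invocation forcing the use of GPU 0
mdrun-gpu -device "OpenMM:platform=Cuda,deviceid=0,force-device=yes" -deffnm topol -v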
Cheers,
--
Szilárd
On Sun, Oct 23, 2011 at 10:24 PM, Matt Larson wrote:
> I am having an error trying to use a compiled mdrun-gpu on my GPU set
Please keep all discussions on the mailing list! Also, I'm CC-ing the
gmx-devel list; maybe somebody over there has a better idea of what
causes your CMake issue.
>>> //Flags used by the compiler during all build types
>>> CMAKE_C_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99 '
>>>
>>>
I've just realized that both you and the similar report you linked to
were using CMake 2.8.3. If you don't succeed could you try another
CMake version?
--
Szilárd
On Mon, Oct 24, 2011 at 11:14 PM, Szilárd Páll wrote:
> Please keep all discussions on the mailing list! Also, I'
Hi,
Firstly, you're not using the latest version, and there might have been
a fix for your issue in the 4.5.5 patch release.
Secondly, you should check the http://redmine.gromacs.org bug tracker
to see which bugs have been fixed in 4.5.5 (ideally the target version
field should tell you). You can also just do
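For example, if you have a git checkout, one (hypothetical) way to list
what went into a patch release is:
# assumes a GROMACS git clone with the usual vX.Y.Z release tags
git log --oneline v4.5.4..v4.5.5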
Hi,
> Thank you very much. I removed -xHOST from CFLAGS and FFLAGS, and now it
> runs correctly.
Good that it worked. Still, it's bizarre that icc failed to compile
the code it generated...
FYI: removing the flag might result in slightly slower binaries, but
the difference should be quite small
oint have no atoms in
> VMD. So, that's probably not a good thing.
Wow, that sounds crazy. What driver version are you using? Try to
update your device driver + nvidia-settings - I've been using
285.05.05/09 without problems.
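If you are unsure what you have installed, either of these should show it:
# query the installed NVIDIA driver version
nvidia-smi -q | grep -i "driver version"
cat /proc/driver/nvidia/version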
--
Szilárd
> Thanks,
> Matt
>
> On Mon,
ind entering it as a bug in redmine.gromacs.org. I'll look
into the issue in the coming days.
Cheers,
--
Szilárd
On Sat, Oct 29, 2011 at 11:56 PM, Mirco Wahab
wrote:
> On 24.10.2011 23:23, Szilárd Páll wrote:
>>
>> I've just realized that both you and the similar repor
On Mon, Oct 31, 2011 at 1:06 PM, Mark Abraham wrote:
> On 30/10/2011 8:56 AM, Mirco Wahab wrote:
>>
>> On 24.10.2011 23:23, Szilárd Páll wrote:
>>>
>>> I've just realized that both you and the similar report you linked to
>>> were using CMake 2.
Hi,
There has been quite a bit of discussion on the topic of GROMACS on
Cygwin, so please search the mailing list for information.
Some of that information might not have made it into the wiki
(http://goo.gl/ALQuC) - especially as the page appears to have been
untouched for the last 7 months. [Which is a pity a
On Tue, Nov 8, 2011 at 11:59 PM, Mark Abraham wrote:
> On 8/11/2011 11:35 PM, Szilárd Páll wrote:
>
>> Hi,
>>
>> There have been quite some discussion on the topic of GROMACS on
>> Cygwin so please search the mailing list for information.
>>
>
>
Hi,
I don't remember any incident related to tools crashing, but I do
recall a problem that was initially attributed to a known gcc 4.1 bug
(http://redmine.gromacs.org/issues/431) but turned out to be a GB
(Generalized Born) bug.
However, knowing that there is such a nasty bug in gcc 4.1, we thought
it's bette
Hi Andrzej,
GROMACS 4.6 is a work in progress; it will have native CUDA acceleration
with multi-GPU support, along with a few other improvements. You can
expect a speedup in the ballpark of 3x. We will soon have the code
available for testing.
I'm a little skeptical about the 5x claimed for ACEMD. What setting di
n 2011-11-27 12:10:47PM -0600, Szilárd Páll wrote:
>> Hi Andrzej,
>>
>> GROMACS 4.6 is work in progress, it will have native CUDA acceleration
>> with multi-GPU support along a few other improvements. You can expect
>> a speedup in the ballpark of 3x. We will soon hav
> Will it use CUDA or OpenCL? The second one would be more common since it will
> work with a wider range of platforms (CPU, GPU, FPGA)
>
> Szilárd Páll писал 27.11.2011 23:50:
>>
>> Native acceleration = not relying on external libraries. ;)
>>
>> --
>> Szilárd
allation and usage?
>
> Thanks a lot!
>
> Sincerely yours,
>
> Jones
>
>
>>
>> --
>> Szilárd
>>
>>
>>
>> On Sun, Nov 27, 2011 at 11:25 PM, Alexey Shvetsov
>> wrote:
>> > Hi!
>> >
>&g
Thanks for the info.
--
Szilárd
On Tue, Nov 29, 2011 at 11:50 AM, Andrzej Rzepiela
wrote:
> Hey,
>
> Thank you for the info. The data that I obtained for comparison was
> performed with GTX580, 4 fs timestep and heavy hydrogen atoms instead of
> constraints, as you suspected. For dhfr with PME
Hi Andrzej,
> One more question: will a ratio of gpu/cpu units and cores be of importance
> in next gromacs releases ? at the moment the code uses one core per gpu
> unit, wright ? When the code is gpu parallel how can this change ?
Yes, it will. We use both CPU & GPU and load balance between the
Hi,
I've personally never heard of anybody using gromacs compiled with PGI.
> I am using a new cluster of Xeons and, to get the most efficient
> compilation, I have compiled gromacs-4.5.4 separately with the intel,
> pathscale, and pgi compilers.
I did try Pathscale a few months ago and AFAIR it
On Thu, Dec 1, 2011 at 4:49 PM, Teemu Murtola wrote:
> On Thu, Dec 1, 2011 at 16:46, Szilárd Páll wrote:
>>> With the pgi compiler, I am most concerned about this floating point
>>> overflow warning:
>>>
>>> ...
>>> [ 19%] Building C object s
Hi,
Pathscale seems to be as fast as gcc 4.5 on AMD Barcelona and the
-march=barcelona option unfortunately doesn't seem to help much.
However, I didn't try any other compiler optimization options.
We do have several Magny-Cours machines around we can benchmark on,
but thanks for the offer!
Chee
Hi,
I tried a Pathscale 4.0.12 nightly and, except for a few warnings,
compilation went fine. I don't have 4.0.11 around, though.
However, mdrun segfaults at the very end of the run while generating
the cycle and time counter table. I don't have time to look into this,
but I'll get back to the issue wh
palardo
> Dept. Quimica Fisica, Univ. de Sevilla (Spain)
>
> On Tue, 29 Nov 2011 22:04:08 +0100, Szilárd Páll wrote:
>>
>> Hi Andrzej,
>>
>>> One more question: will a ratio of gpu/cpu units and cores be of
>>> importance
>>> in next gromacs releases
There's no ibverbs support, so pick your favorite/best MPI
implementation; more than that you can't do.
--
Szilárd
On Mon, Jun 3, 2013 at 2:54 PM, Bert wrote:
> Dear all,
>
> My cluster has a FDR (56 Gb/s) Infiniband network. It is well known that
> there is a big difference between using IPoIB
mdrun is not blind; it's just that the current design does not report
the hardware of all compute nodes used. Whatever CPU/GPU hardware mdrun reports in
the log/std output is *only* what rank 0, i.e. the first MPI process,
detects. If you have a heterogeneous hardware configuration, in most
cases you should be a
"-nt" is mostly a backward compatibility option and sets the total
number of threads (per rank). Instead, you should set both "-ntmpi"
(or -np with MPI) and "-ntomp". However, note that unless a single
mdrun uses *all* cores/hardware threads on a node, it won't pin the
threads to cores. Failing to
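As a hedged example, on a hypothetical 16-core node:
# 2 thread-MPI ranks x 8 OpenMP threads = all 16 cores, so mdrun will pin the threads
mdrun -ntmpi 2 -ntomp 8 -v -deffnm md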
Just a few minor details:
- You can set the affinities yourself through the job scheduler which
should give nearly identical results compared to the mdrun internal
affinity if you simply assign cores to mdrun threads in a sequential
order (or with an #physical cores stride if you want to use
Hyper
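As a hedged sketch (assuming gcc's libgomp and a single mdrun per node;
the exact mechanism depends on your scheduler and OpenMP runtime):
# bind 8 OpenMP threads to cores 0-7 in sequential order, with mdrun's own pinning off
export GOMP_CPU_AFFINITY="0-7"
mdrun -ntmpi 1 -ntomp 8 -pin off -deffnm md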
On Sat, Jun 8, 2013 at 9:21 PM, Albert wrote:
> Hello:
>
> Recently I found a strange question about Gromacs-4.6.2 on GPU workstaion.
> In my GTX690 machine, when I run md production I found that the ECC is on.
> However, in my another GTX590 machine, I found the ECC was off:
>
> 4 GPUs detected:
On Wed, Jun 5, 2013 at 4:35 PM, João Henriques
wrote:
> Just to wrap up this thread, it does work when the mpirun is properly
> configured. I knew it had to be my fault :)
>
> Something like this works like a charm:
> mpirun -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm md -v
That is indeed t
Amil,
It looks like there is a mixup in your software configuration and
mdrun is linked against libguide.so, the OpenMP library part of the
Intel compiler v11 which gets loaded early and is probably causing the
crash. This library was probably pulled in implicitly by MKL which the
build system det
-missing-field-initializers
> -Wno-sign-compare -Wall -Wno-unused -Wunused-value -fomit-frame-pointer
> -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
>
>
> All the regressiontests failed. So it appears that, at least for my system,
> I need to include the direc
Dear Ramon,
Compute capability does not reflect the performance of a card; rather,
it is an indicator of what functionality the GPU provides - more like a
generation number or feature-set version.
Quadro cards are typically quite close in performance/$ to Teslas, with
roughly 5-8x *lower* "GROMA
I strongly suggest that you consider the single-chip GTX cards instead
of a dual-chip one; from the point of view of price/performance you'll
probably get the most from a 680 or 780.
You could ask why, so here are the reasons:
- The current parallelization scheme requires domain-decomposition to
u
On Sat, Jun 22, 2013 at 5:55 PM, Mirco Wahab
wrote:
> On 22.06.2013 17:31, Mare Libero wrote:
>>
>> I am assembling a GPU workstation to run MD simulations, and I was
>> wondering if anyone has any recommendation regarding the GPU/CPU
>> combination.
>> From what I can see, the GTX690 could be th
If you have a solid example that reproduces the problem, feel free to
file an issue on redmine.gromacs.org ASAP. Briefly documenting your
experiments and verification process on the issue report page can help
developers in giving you faster feedback, as well as with
accepting the report as a bu
Thanks Mirco, good info, your numbers look quite consistent. The only
complicating factor is that your CPUs are overclocked by different
amounts, which changes the relative performances somewhat compared to
non-overclocked parts.
However, let me list some prices to show that the top-of-the line AM
On Thu, Jun 27, 2013 at 12:57 PM, Mare Libero wrote:
> Hello everybody,
>
> Does anyone have any recommendation regarding the installation of gromacs 4.6
> on Ubuntu 12.04? I have the nvidia-cuda-toolkit that comes in synaptic
> (4.0.17-3ubuntu0.1 installed in /usr/lib/nvidia-cuda-toolkit) and t
FYI: 4.6.2 contains a bug related to thread affinity setting which
will lead to a considerable performance loss (I've seen 35%) as well
as often inconsistent performance - especially with GPUs (a case in
which one would run many OpenMP threads/rank). My advice is that you
either use the code from git
autocomplete).
>
> I am still trying to fix the issues with the intel compiler. The gcc
> compiled version benchmark at 52ns/day with the lysozyme in water tutorial.
icc 12 and 13 should just work with CUDA 5.0.
Cheers,
--
Szilárd
>
> Thanks again.
>
>
On Mon, Jun 24, 2013 at 4:43 PM, Szilárd Páll wrote:
> On Sat, Jun 22, 2013 at 5:55 PM, Mirco Wahab
> wrote:
>> On 22.06.2013 17:31, Mare Libero wrote:
>>>
>>> I am assembling a GPU workstation to run MD simulations, and I was
>>> wondering if anyone has a
PS: the error message is referring to the *driver* version, not the
CUDA toolkit/runtime version.
--
Szilárd
On Tue, Jul 9, 2013 at 11:15 AM, Szilárd Páll wrote:
> Tesla C1060 is not compatible - which should be shown in the log and
> standard output.
>
> Cheers,
> --
> Sz
Tesla C1060 is not compatible - which should be shown in the log and
standard output.
Cheers,
--
Szilárd
On Tue, Jul 9, 2013 at 10:54 AM, Albert wrote:
> Dear:
>
> I've installed a gromacs-4.6.3 in a GPU cluster, and I obtained the
> following information for testing:
>
> NOTE: Using a GPU wit
On Tue, Jul 9, 2013 at 11:20 AM, Albert wrote:
> On 07/09/2013 11:15 AM, Szilárd Páll wrote:
>>
>> Tesla C1060 is not compatible - which should be shown in the log and
>> standard output.
>>
>> Cheers,
>> --
>> Szilárd
>
>
> THX for kind comme
Hi,
Is affinity setting (pinning) on? What compiler are you using? There
are some known issues with Intel OpenMP getting in the way of the
internal affinity setting. To verify whether this is causing a
problem, try turning off pinning (-pin off).
Cheers,
--
Szilárd
On Tue, Jul 9, 2013 at 5:29 PM
Just a note regarding the performance "issues" mentioned. You are
using reaction-field electrostatics, a case in which by default there is
very little force workload left for the CPU (only the bondeds), and
therefore the CPU idles most of the time. To improve performance, use
-nb gpu_cpu with multiple
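A hedged example of such a hybrid run (thread counts are placeholders
for your node):
# compute local non-bondeds on the GPU and non-local ones on the CPU
mdrun -nb gpu_cpu -ntmpi 2 -ntomp 6 -deffnm md -v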
FYI: The MKL FFT has been shown to be up to 30%+ slower than FFTW 3.3.
--
Szilárd
On Thu, Jul 11, 2013 at 1:17 AM, Éric Germaneau wrote:
> I have the same feeling too but I'm not in charge of it unfortunately.
> Thank you, I appreciate.
>
>
> On 07/11/2013 07:15 AM, Mark Abraham wrote:
>>
>> No
Depending on the level of parallelization (number of nodes and number
of particles/core) you may want to try:
- 2 ranks/node: 8 cores + 1 GPU, no separate PME (default):
mpirun -np 2*Nnodes mdrun_mpi [-gpu_id 01 -npme 0]
- 4 ranks per node: 4 cores + 1 GPU (shared between two ranks), no separat
The message is perfectly normal. When you do not use all available
cores/hardware threads (seen as "CPUs" by the OS), to avoid potential
clashes, mdrun does not pin threads (i.e. it lets the OS migrate
threads). On NUMA systems (most multi-CPU machines), this will cause
performance degradation as w
On Thu, Jul 25, 2013 at 5:55 PM, Mark Abraham wrote:
> That combo is supposed to generate a CMake warning.
>
> I also get a warning during linking that some shared library will have
> to provide some function (getpwuid?) at run time, but the binary is
> static.
That warning has always popped up f
On Fri, Jul 19, 2013 at 6:59 PM, gigo wrote:
> Hi!
>
>
> On 2013-07-17 21:08, Mark Abraham wrote:
>>
>> You tried ppn3 (with and without --loadbalance)?
>
>
> I was testing on 8-replicas simulation.
>
> 1) Without --loadbalance and -np 8.
> Excerpts from the script:
> #PBS -l nodes=8:ppn=3
> seten
Dear Ramon,
Thanks for the kind words!
On Tue, Jun 18, 2013 at 10:22 AM, Ramon Crehuet Simon
wrote:
> Dear Szilard,
> Thanks for your message. Your help is priceless and helps advance science
> more than many publications. I extend that to many experts who kindly and
> promptly answer question
Hi,
The Intel compilers are only recommended for pre-Bulldozer AMD
processors (K10: Magny-Cours, Istanbul, Barcelona, etc.). On these,
PME non-bonded kernels (not the RF or plain cut-off!) are 10-30%
slower with gcc than with icc. The icc-gcc difference is the smallest
with gcc 4.7, typically arou
erties to the MB should I consider for such system ?
>>
>> James
>>
>>
>> 2013/5/28 lloyd riggs
>>
>>> Dear Dr. Pali,
>>>
>>> Thank you,
>>>
>>> Stephan Watkins
>>>
>>> *Gesendet:* Dienstag, 28. Mai 2013 um
That should never happen. If mdrun is compiled with GPU support and
GPUs are detected, the detection stats should always get printed.
Can you reliably reproduce the issue?
--
Szilárd
On Fri, Aug 2, 2013 at 9:50 AM, Jernej Zidar wrote:
> Hi there.
> Lately I've been running simulations using G
I may have just come across this issue as well. I have no time to
investigate, but my guess is that it's related to some thread-safety
issue with thread-MPI.
Could one of you please file a bug report on redmine.gromacs.org?
Cheers,
--
Szilárd
On Thu, Aug 8, 2013 at 5:52 PM, Brad Van Oosten wro
On Thu, Aug 29, 2013 at 7:18 AM, Gianluca Interlandi
wrote:
> Justin,
>
> I respect your opinion on this. However, in the paper indicated below by BR
> Brooks they used a cutoff of 10 A on LJ when testing IPS in CHARMM:
>
> Title: Pressure-based long-range correction for Lennard-Jones interactions
On Tue, Sep 3, 2013 at 9:50 PM, Guanglei Cui
wrote:
> Hi Mark,
>
> I agree with you and Justin, but let's just say there are things that are
> out of my control ;-) I just tried SSE2 and NONE. Both failed the
> regression check.
That's alarming; with GMX_CPU_ACCELERATION=None only the plain C
ker
Hi,
First of all, icc 11 is not well tested and there have been reports
about it compiling broken code. This could explain the crash, but
you'd need to do a bit more testing to confirm. Regarding the GPU
detection error, if you use a driver which is incompatible with the
CUDA runtime (at least as h
FYI, I've filed a bug report which you can track if interested:
http://redmine.gromacs.org/issues/1334
--
Szilárd
On Sun, Sep 1, 2013 at 9:49 PM, Szilárd Páll wrote:
> I may have just come across this issue as well. I have no time to
> investigate, but my guess is that it's
le to judge what is causing
the problem.
Cheers,
--
Szilárd
> Best regards,
> Guanglei
>
>
> On Mon, Sep 9, 2013 at 4:35 PM, Szilárd Páll wrote:
>
>> HI,
>>
>> First of all, icc 11 is not well tested and there have been reports
>> about it compiling broken
Looks like you are compiling 4.5.1. You should try compiling the
latest version in the 4.5 series, 4.5.7.
--
Szilárd
On Sun, Sep 15, 2013 at 6:39 PM, Muthukumaran R wrote:
> hello,
>
> I am trying to install gromacs in cygwin but after issuing "make",
> installation stops with the following erro
On Mon, Sep 16, 2013 at 7:04 PM, PaulC wrote:
> Hi,
>
>
> I'm attempting to build GROMACS 4.6.3 to run entirely within a single Xeon
> Phi (i.e. native) with either/both Intel MPI/OpenMP for parallelisation
> within the single Xeon Phi.
>
> I followed these instructions from Intel for cross compil
Hi,
Admittedly, both the documentation on these features and the
communication on the known issues with these aspects of GROMACS have
been lacking.
Here's a brief summary/explanation:
- GROMACS 4.5: implicit solvent simulations possible using mdrun-gpu
which is essentially mdrun + OpenMM, hence it
e are
a few analysis tools that support OpenMP and even with those I/O will
be a severe bottleneck if you were considering using the Phi-s for
analysis.
So for now, I would stick to using only the CPUs in the system.
Cheers,
--
Szilárd Páll
On Thu, Oct 10, 2013 at 12:58 PM, Arun Sharma
Hi Carsten,
On Thu, Oct 24, 2013 at 4:52 PM, Carsten Kutzner wrote:
> On Oct 24, 2013, at 4:25 PM, Mark Abraham wrote:
>
>> Hi,
>>
>> No. mdrun reports the stride with which it moves over the logical cores
>> reported by the OS, setting the affinity of GROMACS threads to logical
>> cores, and wa
That should be enough. You may want to use the -march (or equivalent)
compiler flag for CPU optimization.
Cheers,
--
Szilárd Páll
On Sun, Nov 3, 2013 at 10:01 AM, James Starlight wrote:
> Dear Gromacs Users!
>
> I'd like to compile lattest 4.6 Gromacs with native GPU supporting
Brad,
These numbers seem rather low for a standard simulation setup! Did
you use a particularly long cut-off or short time-step?
Cheers,
--
Szilárd Páll
On Fri, Nov 1, 2013 at 6:30 PM, Brad Van Oosten wrote:
> Im not sure on the prices of these systems any more, they are getting dated
&
hine configurations before buying. (Note
that I have never tried it myself, so I can't provide more details or
vouch for it in any way.)
Cheers,
--
Szilárd Páll
On Fri, Nov 1, 2013 at 3:08 AM, David Chalmers
wrote:
> Hi All,
>
> I am considering setting up a small cluster to run Gr
You can use the "-march=native" flag with gcc to optimize for the CPU
you are building on, or e.g. -march=core-avx-i for Intel Ivy Bridge
CPUs.
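As a hedged sketch of passing such a flag through CMake (not the only
way to do it):
# configure a natively optimized GPU build with gcc
cmake .. -DGMX_GPU=ON -DCMAKE_C_FLAGS="-march=native"
make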
--
Szilárd Páll
On Mon, Nov 4, 2013 at 12:37 PM, James Starlight wrote:
> Szilárd, thanks for suggestion!
>
> What kind of CPU op
Timo,
Have you used the default settings, that is, one rank per GPU? If that
is the case, you may want to try using multiple ranks per GPU; this can
often help when you have >4-6 cores/GPU. Separate PME ranks are not
switched on by default with GPUs; have you tried using any?
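As a hedged illustration, on a node with 2 GPUs and 16 cores one could
try, e.g.:
# 4 PP ranks sharing 2 GPUs (two ranks per GPU), 4 OpenMP threads each
mpirun -np 4 mdrun_mpi -ntomp 4 -gpu_id 0011 -deffnm md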
Cheers,
--
Szilárd P
> threads, hence a total of 24 threads however even with hyper threading
>>> > enabled there are only 12 threads on your machine. Therefore, only
>>> allocate
>>> > 12. Try
>>> >
>>> > mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test
&g
On Tue, Nov 5, 2013 at 9:55 PM, Dwey Kauffman wrote:
> Hi Timo,
>
> Can you provide a benchmark with "1" Xeon E5-2680 with "1" Nvidia
> k20x GPGPU on the same test of 29420 atoms ?
>
> Are these two GPU cards (within the same node) connected by a SLI (Scalable
> Link Interface) ?
Note that
Let's not hijack James' thread as your hardware is different from his.
On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman wrote:
> Hi Szilard,
>
>Thanks for your suggestions. I am indeed aware of this page. In a 8-core
> AMD with 1GPU, I am very happy about its performance. See below. My
Actual
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight wrote:
> I've gone to conclusion that simulation with 1 or 2 GPU simultaneously gave
> me the same performance
> mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test,
>
> mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm md_CaM_test,
>
> Doest it b
As Mark said, please share the *entire* log file. Among other
important things, the result of PP-PME tuning is not included above.
However, I suspect that in this case scaling is strongly affected
by the small size of the system you are simulating.
--
Szilárd
On Sun, Nov 10, 2013 at 5:28 AM,
> If using Tcoupl and Pcoupl = no and then I can compare mdrun x mdrun-gpu,
> being my gpu ~2 times slower than only one core. Well, I definitely don't
> intended to use mdrun-gpu but I am surprised that it performed that bad (OK,
> I am using a low-end GPU, but sander_openmm seems to work fine and
I think this mail belongs to the users' list; CC-d, we will continue the
discussion there.
--
Szilárd
2010/10/5 Igor Leontyev :
> Dear gmx-developers,
> My first attempt to start GPU-version of gromacs has no success. The reason
> is that grompp turns off setting of electrostatics overriding them b
Hi,
The beta versions are all outdated, could you please use the latest
source distribution (4.5.1) instead (or git from the
release-4-5-patches branch)?
The instructions are here:
http://www.gromacs.org/gpu#Compiling_and_custom_installation_of_GROMACS-GPU
>> The requested platform "CUDA" could n
Dear Igor,
Your output looks _very_ weird; it seems as if CMake internal
variable(s) were not initialized, and I have no clue how that could have
happened - the build generator works just fine for me. The only thing
I can think of is that maybe your CMakeCache is corrupted.
Could you please rerun cma
Hi,
> Does anyone have an idea about what time the Gmx 4.5.2 will be released?
Soon - if everything goes well, in a matter of days.
> And in 4.5.2, would the modified tip5p.itp in charmm27 force field be the
> same as that in current git version?
The git branch release-4-5-patches is the branch
Hi Renato,
First of all, what you're seeing is pretty normal, especially as you
have a CPU that is bordering on insane :) Why is it normal?
The PME algorithms are simply not very well suited for
current GPU architectures. With an ill-suited algorithm you won't be
able to
Hi,
If you have installed fftw3 in the standard location it should work
out of the box. Otherwise, you have to set LDFLAGS and CPPFLAGS to
the library and include locations, respectively.
However, there's one more thing I can think of: did you make sure that
you compiled fftw3 in single precisi
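A hedged sketch of building single-precision fftw3 and pointing the
GROMACS configure at it (the install prefix is a placeholder):
# build and install single-precision FFTW
./configure --enable-float --prefix=$HOME/fftw3-single
make && make install
# then, before configuring GROMACS:
export CPPFLAGS=-I$HOME/fftw3-single/include
export LDFLAGS=-L$HOME/fftw3-single/lib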
You can try the systems we provided on the GROMACS-GPU page:
http://www.gromacs.org/gpu#GPU_Benchmarks
--
Szilárd
On Sat, Nov 6, 2010 at 12:59 AM, lin hen wrote:
> Yeah, I think my problem is the input, but I don't have the .mpd file, I am
> using the existing input which has no problem with
Hi Solomon,
> [100%] Building C object src/kernel/CMakeFiles/mdrun.dir/md_openmm.c.o
> Linking CXX executable mdrun-gpu
> ld: warning: in /usr/local/openmm/lib/libOpenMM.dylib, file was built for
> i386 which is not the architecture being linked (x86_64)
The above linker message clearly states wh
Hi,
If you take a look at the mdp file, it becomes obvious that the
simulation length is infinite:
nsteps = -1
This is useful for a benchmarking setup where you want to run e.g. a
~10 min case, in which you'd use the "-maxh 0.167" mdrun option.
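For example (file names are placeholders):
# run an open-ended (nsteps = -1) benchmark for at most ~10 minutes
mdrun -deffnm bench -maxh 0.167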
Cheers,
--
Szilárd
On Tue, Nov 23, 201
Hi,
Tesla C1060 and S1070 are definitely supported, so it's strange
that you get that warning. The only thing I can think of is that for
some reason the CUDA runtime reports the name of the GPUs as something
other than C1060/S1070. Could you please run deviceQuery from the SDK
and provide the output h
Hi Solomon,
Just stumbled upon your mail and I thought you could still use an
answer to your question.
First of all, as you've probably read on the Gromacs-GPU page, a) you
need a high-performance GPU to achieve good performance (in comparison
to the CPU) -- that's the reason for the strict compat
Hi,
I've never seen/had my hands on the Tesla T10 so I didn't know that's
the name it reports. I'll fix this for the next release. Rest assured
that on this hardware Gromacs-GPU should run just fine.
On the other hand, your driver version is very strange: CUDA Driver
Version = 4243455, while it s
Hi,
Currently there is no concrete plan to implement FEP on GPUs. AFAIK
there is an OpenMM plugin which could be integrated, but I surely
don't have time to work on that and I don't know of anyone else
working on it. Contribution would be welcome, though!
Regards,
--
Szilárd
On Thu, Nov 11, 20
Hi,
Although the question is a bit fuzzy, I might be able to give you a
useful answer.
From what I see in the whitepaper of the PowerEdge M710 blades, among
other (not so interesting :) OSes, Dell provides the option of Red
Hat or SUSE Linux as factory-installed OSes. If you have any of
these
hip, your max clockrate tends to be lower.
> >As such, its really important to know how your jobs are bound so that
> >you can order a cluster configuration that'll be best for that job.
>
>
> Cheers, Maryam
>
> --- On *Tue, 18/1/11, Szilárd Páll
>
> * wrote
Hi,
There are two things you should test:
a) Does your NVIDIA driver + CUDA setup work? Try to run a different
CUDA-based program, e.g. you can get the CUDA SDK and compile one of
the simple programs like deviceQuery or bandwidthTest.
b) If the above works, try to compile OpenMM from source with
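Regarding (a), a hedged sketch assuming the usual GPU Computing SDK
layout (paths may differ on your install):
# build and run the deviceQuery sample from the NVIDIA GPU Computing SDK
cd ~/NVIDIA_GPU_Computing_SDK/C
make
./bin/linux/release/deviceQuery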
Hi,
You also need the OpenMM libraries and plugins.
For more detail see:
http://www.gromacs.org/gpu#Installing_and_running_Gromacs-GPU
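As a hedged sketch of the kind of environment setup that page describes
(install locations are assumptions):
# make the loader find the CUDA and OpenMM libraries, and tell OpenMM where its plugins are
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/openmm/lib:$LD_LIBRARY_PATH
export OPENMM_PLUGIN_DIR=/usr/local/openmm/lib/plugins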
--
Szilárd
On Sat, Jun 26, 2010 at 6:18 PM, Tarsis wrote:
> I'm trying to install cuda but when I export
> LD_LIBRARY_PATH=/usr/local/cuda/lib:$libcudart.so
Hi Chris,
First of all, as Rossen said, the <=2.6.4 is a typo; it was meant to
be >=2.6.4, and it _should_ work with 2.8.0 (I took the FindCUDA.cmake
script from the 2.8 cmake sources :), but...
icc is _not_ supported by CUDA; AFAIR some people reported getting it
to work, but only in some very limited
Hi Chris,
Though I'm repeating myself, for the sake of not leaving this post
unanswered (btw reposts should be avoided as much as possible!):
First of all, as Rossen said, the <=2.6.4 is a typo, it was meant to
be >=2.6.4, it _should_ work with 2.8.0 (I took the FindCUDA.cmake
script from the 2.
Hi,
Could you provide the compiler versions you used? I really hope it's
not gcc 4.1.x again...
Cheers,
--
Szilárd
On Thu, Aug 5, 2010 at 8:26 PM, Elio Cino wrote:
>
> Since the charmm force field has some instances with large charge groups
> (grompp warns you for it) it is advisable to use a
Hi,
The message is quite obvious about what happened: mdrun received a
TERM signal and therefore it stopped (see: man 7 signal or
http://linux.die.net/man/7/signal).
Figuring out who sent the TERM signal to your mdrun and why will be
your task, but I can think of 2 basic scenarios: (I) yo
Hi Mark,
I've just tried the link you mentioned and it seems to work. Could you
try again?
Cheers,
--
Szilárd
On Tue, Aug 31, 2010 at 9:53 AM, Mark Cheeseman
wrote:
> Hello,
>
> I am trying to download Version 4.0.5 but the FTP server keeps timing out.
> Is there a problem?
>
> Thanks,
> Mark
Hi,
FYI, building the GPU-accelerated version in a non-clean build tree
(one used to build the CPU version) should now work, as should the
other way around.
_However_, be warned that in the latter case the CPU-build-related
parameters do _not_ get reset to their default values (e.g.
GMX_ACCELERATION
Hi,
Indeed, the custom cmake target "install-mdrun" was designed to only
install the mdrun binary, and it does not install the libraries it is
linked against when BUILD_SHARED_LIBS=ON.
I'm not completely sure that this is actually a bug, but to me it
smells like one. I'll file a bug report and w
Hi,
> But when use "mdrun -h", the -nt does not exist.
> So can the -nt option be used in mdrun?
Just checked, and if you have threads turned on (!) when building
GROMACS, then -nt does show up on the help page (mdrun -h)! Otherwise,
it's easy to check whether you have a thread-enabled build or not: just