Re: [gmx-users] Gromacs with Intel Xeon Phi coprocessors ?

2013-03-12 Thread Szilárd Páll
Hi Chris, You should be able to run on MIC/Xeon Phi as these accelerators, when used in symmetric mode, behave just like a compute node. However, for two main reasons the performance will be quite bad: - no SIMD accelerated kernels for MIC; - no accelerator-specific parallelization implemented (as

Re: [gmx-users] Installing GROMACS4.6.1 on Intel MIC

2013-03-21 Thread Szilárd Páll
FYI: As much as Intel likes to say that you can "just run" MPI/MPI+OpenMP code on MIC, you will probably not be impressed with the performance (it will be *much* slower than a Xeon CPU). If you want to know why and what/when we are doing something about it, please read my earlier comments on MIC p

Re: [gmx-users] cuda gpu status on mdrun

2013-03-21 Thread Szilárd Páll
Hi Quentin, That's just a way of saying that something is wrong with one of the following (in order of likelihood): - your GPU driver is too old, hence incompatible with your CUDA version; - your GPU driver installation is broken; - your GPU is behaving in an unexpected/strange ma

Re: [gmx-users] Mismatching number of PP MPI processes and GPUs per node

2013-03-21 Thread Szilárd Páll
FYI: On your machine running OpenMP across two sockets will probably not be very efficient. Depending on the input and on how high a parallelization you are running at, you could be better off running multiple MPI ranks per GPU. This is a bit of an unexplained feature due to it being complicated to

Re: [gmx-users] Mismatching number of PP MPI processes and GPUs per node

2013-03-22 Thread Szilárd Páll
Hi, Actually, if you don't want to run across the network, with those Westmere processors you should be fine with running OpenMP across the two sockets, i.e. mdrun -ntomp 24 or to run without HyperThreading (which can sometimes be faster) just use mdrun -ntomp 12 -pin on Now, when it comes to GPU
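
The two launches suggested above, written out (thread counts match the dual-socket Westmere node under discussion; adjust to your core count):

    $ mdrun -ntomp 24          # one rank, OpenMP across both sockets, HyperThreading on
    $ mdrun -ntomp 12 -pin on  # one thread per physical core, can be faster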

Re: [gmx-users] no CUDA-capable device is detected

2013-03-28 Thread Szilárd Páll
Hi, If mdrun says that it could not detect GPUs it simply means that the GPU enumeration found no GPUs, otherwise it would have printed what was found. This is rather strange because mdrun uses the same mechanism as the deviceQuery SDK example. I really don't have a good idea what could be the issue,

Re: [gmx-users] no CUDA-capable device is detected

2013-03-28 Thread Szilárd Páll
> -- > Chandan Kumar Choudhury > NCL, Pune > INDIA > > On Thu, Mar 28, 2013 at 4:26 PM, Chandan Choudhury wrote: > > On Thu, Mar 28, 2013 at 4:09 PM, Szilárd Páll wrote: >> Hi, >> If mdrun s

Re: [gmx-users] About the configuration of Gromacs on multiple nodes with GPU

2013-03-30 Thread Szilárd Páll
Hi, You can certainly use your hardware setup. I assume you've been looking at the log/console output based on which it might seem that mdrun is only using the GPUs in the first (=master) node. However, that is not the case, it's just that the current hardware and launch configuration reporting is

Re: [gmx-users] gmx 4.6 mpi installation through openmpi?

2013-04-05 Thread Szilárd Páll
Hi, As the error message states, the reason for the failed configuration is that CMake can't auto-detect MPI which is needed when you are not providing the MPI compiler wrapper as compiler. If you want to build with MPI you can either let CMake auto-detect MPI and just compile with the C compiler
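
A sketch of the two configuration routes described (compiler names illustrative):

    $ cmake .. -DGMX_MPI=ON -DCMAKE_C_COMPILER=gcc     # let CMake auto-detect MPI
    $ cmake .. -DGMX_MPI=ON -DCMAKE_C_COMPILER=mpicc   # or hand CMake the MPI wrapper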

Re: [gmx-users] GROMACS 4.6v - Myrinet2000

2013-04-08 Thread Szilárd Páll
On Mon, Apr 8, 2013 at 1:37 PM, Justin Lemkul wrote: > On Mon, Apr 8, 2013 at 2:28 AM, Hrachya Astsatryan wrote: > > > Dear all, > > > > We have installed the latest version of Gromacs (version 4.6) on our > > cluster by the following step: > > > > * cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX

Re: [gmx-users] GPU performance

2013-04-09 Thread Szilárd Páll
Hi Ben, That performance is not reasonable at all - neither for CPU only run on your quad-core Sandy Bridge, nor for the CPU+GPU run. For the latter you should be getting more like 50 ns/day or so. What's strange about your run is that the CPU-GPU load balancing is picking a *very* long cut-off w

Re: [gmx-users] GPU performance

2013-04-10 Thread Szilárd Páll
On Wed, Apr 10, 2013 at 3:34 AM, Benjamin Bobay wrote: > Szilárd - > > First, many thanks for the reply. > > Second, I am glad that I am not crazy. > > Ok so based on your suggestions, I think I know what the problem is/was. > There was a sander process running on 1 of the CPUs. Clearly GROMACS

Re: [gmx-users] General conceptual question about advantage of GPUs

2013-04-10 Thread Szilárd Páll
Hi Andrew, As others have said, 40x speedup with GPUs is certainly possible, but more often than not comparisons leading to such numbers are not entirely fair - at least from a computational perspective. The most common case is when people compare legacy, poorly (SIMD)-optimized codes with some ne

Re: [gmx-users] About 4.6.1

2013-04-10 Thread Szilárd Páll
On Wed, Apr 10, 2013 at 4:48 PM, 陈照云 wrote: > I have tested gromacs-4.6.1 with k20. > But when I run the mdrun, I met some problems. > 1.GPU only support float accelerating? > Yes. > 2.Configure options are -DGMX_MPI ,-DGMX_DOUBLE . > But if I run parallely with mpirun, it would get wrong with

Re: [gmx-users] help: load imbalance

2013-04-10 Thread Szilárd Páll
On Wed, Apr 10, 2013 at 4:50 PM, 申昊 wrote: > Hello, >I wanna ask some questions about load imbalance. > 1> Here are the messages resulted from grompp -f md.mdp -p topol.top -c > npt.gro -o md.tpr > >NOTE 1 [file md.mdp]: > The optimal PME mesh load for parallel simulations is below 0.5

Re: [gmx-users] General conceptual question about advantage of GPUs

2013-04-10 Thread Szilárd Páll
On Wed, Apr 10, 2013 at 4:24 PM, Szilárd Páll wrote: > Hi Andrew, > > As others have said, 40x speedup with GPUs is certainly possible, but more > often than not comparisons leading to such numbers are not entirely fair - > at least from a computational perspective. The most comm

Re: [gmx-users] K20 test

2013-04-11 Thread Szilárd Páll
Hi, No, it just means that *your simulation* does not scale. The question is very vague, hence impossible to answer without more details. However, assuming that you are not running a, say, 5000 atom system over 6 nodes, the most probable reason is that you have 6 Sandy Bridge nodes with 12-16 core

Re: [gmx-users] cygwin_mpi_gmx installation

2013-04-12 Thread Szilárd Páll
Indeed it's strange. In fact, it seems that CUDA detection did not even run, there should be a message whether it found the toolkit or not just before the "Enabling native GPU acceleration" - and the enabling should not even happen without CUDA detected. Unrelated, but do you really need MPI with

Re: [gmx-users] Re: cygwin_mpi_gmx installation

2013-04-12 Thread Szilárd Páll
On Fri, Apr 12, 2013 at 3:45 PM, 라지브간디 wrote: > Thanks for your answers. I have uninstalled the mpi, have also reinstalled > the CUDA and got the same issue. As you mentioned before, I noticed that > it struggles to detect the CUDA. Do you mean that you reconfigured without MPI and with CUDA

Re: [gmx-users] Re: cygwin_mpi_gmx installation

2013-04-13 Thread Szilárd Páll
On Sat, Apr 13, 2013 at 3:30 PM, Mirco Wahab wrote: > On 12.04.2013 20:20, Szilárd Páll wrote: >> On Fri, Apr 12, 2013 at 3:45 PM, 라지브간디 wrote: >>> Can cygwin recognize the CUDA installed in win 7? If so, how do I link >>> them? >>

Re: [gmx-users] Re: cygwin_mpi_gmx installation

2013-04-13 Thread Szilárd Páll
On Sat, Apr 13, 2013 at 5:27 PM, Szilárd Páll wrote: > On Sat, Apr 13, 2013 at 3:30 PM, Mirco Wahab > wrote: >> On 12.04.2013 20:20, Szilárd Páll wrote: >>> >>> On Fri, Apr 12, 2013 at 3:45 PM, 라지브간디 wrote: >>>> >>>> Can cygwin recog

Re: [gmx-users] Building Single and Double Precision in 4.6.1?

2013-04-18 Thread Szilárd Páll
On Thu, Apr 18, 2013 at 6:17 PM, Mike Hanby wrote: > Thanks for the reply, so the next question, after I finish building single > precision non parallel, is there an efficient way to kick off the double > precision build, then the single precision mpi and so on? > > Or do I need to delete everyt
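
The usual pattern for multiple configurations -- an assumption here, since the reply is truncated -- is one out-of-source build directory per flavor, so nothing needs deleting in between:

    $ mkdir build-single build-double build-mpi
    $ cd build-double && cmake .. -DGMX_DOUBLE=ON && make && cd ..
    $ cd build-mpi && cmake .. -DGMX_MPI=ON && make && cd ..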

Re: [gmx-users] Error in make install "no valid ELF RPATH". Cray XE6m

2013-04-20 Thread Szilárd Páll
Hi, Your problem will likely be solved by not writing the rpath to the binaries, which can be accomplished by setting -DCMAKE_SKIP_RPATH=ON. This will mean that you will have to make sure that the library path is set for mdrun to work. If that does not fully solve the problem, you might have to b
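
A sketch of that workaround (install path illustrative):

    $ cmake .. -DCMAKE_SKIP_RPATH=ON
    $ export LD_LIBRARY_PATH=$HOME/gromacs/lib:$LD_LIBRARY_PATH   # set at run time instead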

Re: [gmx-users] GROMACS 4.6 with GPU acceleration (double precision)

2013-04-22 Thread Szilárd Páll
On Tue, Apr 9, 2013 at 6:52 PM, David van der Spoel wrote: > On 2013-04-09 18:06, Mikhail Stukan wrote: > >> Dear experts, >> >> I have the following question. I am trying to compile GROMACS 4.6.1 with >> GPU acceleration and have the following diagnostics: >> >> # cmake .. -DGMX_DOUBLE=ON -DGMX_B

Re: [gmx-users] GROMACS 4.6 with GPU acceleration (double

2013-04-22 Thread Szilárd Páll
On Mon, Apr 22, 2013 at 8:49 AM, Albert wrote: > On 04/22/2013 08:40 AM, Mikhail Stukan wrote: >> >> Could you explain which hardware do you mean? As far as I know, K20X >> supports double precision, so I would assume that double precision GROMACS >> should be realizable on it. > > > Really? But m

Re: [gmx-users] How to use multiple nodes, each with 2 CPUs and 3 GPUs

2013-04-25 Thread Szilárd Páll
Hi, You should really check out the documentation on how to use mdrun 4.6: http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Running_simulations Brief summary: when running on GPUs every domain is assigned to a set of CPU cores and a GPU, hence you need to start as many PP MPI
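
For example, two nodes with three GPUs and twelve cores each might be launched as follows (a sketch; exact mpirun syntax depends on your MPI stack):

    $ mpirun -np 6 mdrun_mpi -gpu_id 012 -ntomp 4   # 3 PP ranks per node, one per GPU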

Re: [gmx-users] compile error

2013-04-26 Thread Szilárd Páll
You got a warning at configure-time that the nvcc host compiler can't be set because the MPI compiler wrappers are used. Because of this, nvcc is using gcc to compile CPU code which chokes on the icc flags. You can: - set CUDA_HOST_COMPILER to the mpicc backend, i.e. icc or - let cmake detect MPI an
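
A sketch of the first option (assuming icc is the backend of your mpicc):

    $ cmake .. -DGMX_MPI=ON -DCUDA_HOST_COMPILER=$(which icc)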

Re: [gmx-users] Re: Illegal instruction (core dumped) - trjconv

2013-04-29 Thread Szilárd Páll
This error means that your binaries contain machine instructions that the processor you run them on does not support. The most probable cause is that you compiled the binaries on a machine with different architecture than the one you are running on. Cheers, -- Szilárd On Mon, Apr 29, 2013 at 11

Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Szilárd Páll
Have you tried running on CPUs only just to see if the issue persists? If the issue also occurs with the same binary on the same hardware when running on CPUs only, I doubt it's a problem in the code. Do you have ECC on? -- Szilárd On Sun, Apr 28, 2013 at 5:27 PM, Albert wrote: > Dear: > >
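
One way to check the ECC state, assuming a driver that ships nvidia-smi (not from the thread itself):

    $ nvidia-smi -q | grep -i ecc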

Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Szilárd Páll
On Mon, Apr 29, 2013 at 2:41 PM, Albert wrote: > On 04/28/2013 05:45 PM, Justin Lemkul wrote: >> >> >> Frequent failures suggest instability in the simulated system. Check your >> .log file or stderr for informative Gromacs diagnostic information. >> >> -Justin > > > > my log file didn't have any

Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Szilárd Páll
e GPU while mdrun was running? Cheers, -- Szilárd On Mon, Apr 29, 2013 at 3:32 PM, Albert wrote: > On 04/29/2013 03:31 PM, Szilárd Páll wrote: >> >> The segv indicates that mdrun crashed and not that the machine was >> restarted. The GPU detection output (both on stderr and l

Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Szilárd Páll
On Mon, Apr 29, 2013 at 3:51 PM, Albert wrote: > On 04/29/2013 03:47 PM, Szilárd Páll wrote: >> >> In that case, while it isn't very likely, the issue could be caused by >> some implementation detail which aims to avoid performance loss caused >> by an issue

Re: [gmx-users] cudaStreamSynchronize failed

2013-05-10 Thread Szilárd Páll
Hi, Such an issue typically indicates a GPU kernel crash. This can be caused by a large variety of factors, from a program bug to a GPU hardware problem. To do a simple check for the former please run with the CUDA memory checker, e.g.: /usr/local/cuda/bin/cuda-memcheck mdrun [...] Additionally, as you
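
The check spelled out as a full command line (file names illustrative):

    $ /usr/local/cuda/bin/cuda-memcheck mdrun -s topol.tpr -deffnm memtest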

Re: [gmx-users] Performance (GMX4.6.1): MPI vs Threads

2013-05-16 Thread Szilárd Páll
I'm not sure what you mean by "threads". In GROMACS this can refer to either thread-MPI or OpenMP multi-threading. To run within a single compute node a default GROMACS installation using either of the two aforementioned parallelization methods (or a combination of the two) can be used. -- Szilárd
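
A combined launch might look like this (rank/thread counts illustrative):

    $ mdrun -ntmpi 2 -ntomp 6   # 2 thread-MPI ranks x 6 OpenMP threads = 12 cores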

Re: [gmx-users] Performance (GMX4.6.1): MPI vs Threads

2013-05-16 Thread Szilárd Páll
PS: if your compute-nodes are Intel of some recent architecture OpenMP-only parallelization can be considerably more efficient. For more details see http://www.gromacs.org/Documentation/Acceleration_and_parallelization -- Szilárd On Thu, May 16, 2013 at 7:26 PM, Szilárd Páll wrote: > I

Re: [gmx-users] Comparing Gromacs versions

2013-05-17 Thread Szilárd Páll
The answer is in the log files, in particular the performance summary should indicate where the performance difference is. If you post your log files somewhere we can probably give further tips on optimizing your run configurations. Note that with such a small system the scaling with the group sch

Re: [gmx-users] Comparing Gromacs versions

2013-05-17 Thread Szilárd Páll
On Fri, May 17, 2013 at 2:48 PM, Djurre de Jong-Bruinink wrote: > > >>The answer is in the log files, in particular the performance summary >>should indicate where is the performance difference. If you post your >>log files somewhere we can probably give further tips on optimizing >>your run confi

Re: [gmx-users] compile Gromacs using Cray compilers

2013-05-20 Thread Szilárd Páll
The thread-MPI library provides the thread affinity setting functionality to mdrun, hence certain parts of it will always be compiled in, even with GMX_MPI=ON. Apparently, the Cray compiler does not like some of the thread-MPI headers. Feel free to file a bug report on redmine.gromacs.org, but *don

Re: [gmx-users] Re: Have your ever got a real NVE simulation (good energy conservation) in gromacs?

2013-05-25 Thread Szilárd Páll
With the verlet cutoff scheme (new in 4.6) you get much better control over the drift caused by (missed) short range interactions; you just set a maximum allowed target drift and the buffer will be calculated accordingly. Additionally, with the verlet scheme you are free to tweak the neighbor searc
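
A sketch of the relevant .mdp settings (option names as in 4.6; values illustrative):

    cutoff-scheme       = Verlet
    verlet-buffer-drift = 0.005   ; max allowed drift, kJ/mol/ps per atom; buffer sized from this
    nstlist             = 20      ; search frequency becomes a free performance parameter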

Re: [gmx-users] About Compilation error in gromacs 4.6

2013-05-28 Thread Szilárd Páll
10.04 comes with gcc 4.3 and 4.4 which should both work (we even test them with Jenkins). Still, you should really get a newer gcc, especially if you have an 8-core AMD CPU (=> either Bulldozer or Piledriver) both of which are fully supported only by gcc 4.7 and later. Additionally, AFAIK the 2.6.

Re: Re: [gmx-users] GPU-based workstation

2013-05-28 Thread Szilárd Páll
Dear all, As far as I understand, the OP is interested in hardware for *running* GROMACS 4.6 rather than developing code or running LINPACK. To get best performance it is important to use a machine with hardware balanced for GROMACS' workloads. Too few GPU resources will result in CPU idling

Re: Aw: Re: [gmx-users] GPU-based workstation

2013-05-28 Thread Szilárd Páll
On Sat, May 25, 2013 at 2:16 PM, Broadbent, Richard wrote: > I've been running on my Universities GPU nodes these are one E5-xeon (6-cores > 12 threads) and have 4 Nvidia 690gtx's. My system is 93 000 atoms of DMF > under NVE. The performance has been a little disappointing That sounds like a

Re: [gmx-users] Re: GPU-based workstation

2013-05-28 Thread Szilárd Páll
On Tue, May 28, 2013 at 10:14 AM, James Starlight wrote: > I've found GTX Titan with 6gb of RAM and 384 bit. The price of such card is > equal to the price of the latest TESLA cards. Nope! Titan: $1000 Tesla K10: $2750 Tesla K20(c): $3000 TITAN is cheaper than any Tesla and the fastest of all N

Re: [gmx-users] gmx 4.6.2 segmentation fault (core dump)

2013-06-03 Thread Szilárd Páll
Thanks for reporting this. The best would be a redmine bug with a tpr, the command line invocation for reproduction, as well as log output to see what software and hardware configuration you are using. Cheers, -- Szilárd On Mon, Jun 3, 2013 at 2:46 PM, Johannes Wagner wrote: > Hi there, > trying to set

Re: [gmx-users] gmx 4.6.2 segmentation fault (core dump)

2013-06-03 Thread Szilárd Páll
gner > PhD Student, MBM Group > > Klaus Tschira Lab (KTL) > Max Planck Partner Institut for Computational Biology (PICB) > 320 YueYang Road > 200031 Shanghai, China > > phone: +86-21-54920475 > email: johan...@picb.ac.cn > > and > > Heidelberg Institut for Theore

Re: [gmx-users] GPU

2012-06-13 Thread Szilárd Páll
On Wed, Jun 13, 2012 at 3:59 AM, Mark Abraham wrote: > On 12/06/2012 10:49 PM, Ehud Schreiber wrote: > >> Message: 4 >>> Date: Mon, 11 Jun 2012 15:54:39 +1000 >>> From: Mark Abraham >>> Subject: Re: [gmx-users] GPU >>> To: Discussion list for GROMACS users >>> Message-ID:<4FD5881F.3040509@anu.e

Re: [gmx-users] Looking for GPU benchmarks

2012-08-21 Thread Szilárd Páll
Hi, The short answer is that you need to turn on the new verlet cut-off scheme. You should read the following wiki pages: http://www.gromacs.org/Documentation/Acceleration_and_parallelization?highlight=verlet#GPU_acceleration http://www.gromacs.org/Documentation/Cut-off_schemes?highlight=verlet#How_to_u
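
In practice that means either setting cutoff-scheme = Verlet in the .mdp before running grompp, or, for a quick test with an existing tpr (the -testverlet flag also appears in a later message in this archive):

    $ mdrun -testverlet -s topol.tpr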

Re: [gmx-users] Compilation of Gromacs 4.5.5 with GPU support: libxml and CUDA toolkit problems

2012-08-21 Thread Szilárd Páll
On Mon, Aug 6, 2012 at 4:04 PM, ms wrote: > Hi, > > I am trying to compile Gromacs 4.5.5 with GPU support on Linux. I have > performed the following steps: > > export OPENMM_ROOT_DIR=/home//gromacs/OpenMM2.0-Linux64/ > mkdir build-gpu > mkdir exec-gpu > cd build-gpu > cmake ../ -DGMX_OPENMM=O

Re: [gmx-users] Problem with Gromacs installation with GPU.

2012-08-21 Thread Szilárd Páll
Dear Jesmin, On Tue, Aug 21, 2012 at 7:21 PM, jesmin jahan wrote: > Dear All, > > I have installed gromacs 4.5.3 on a cluster. I downloaded the > gromacs-4.5-GPU-beta2-X86_64 binaries and followed the following > instructions: Those binaries are extremely outdated. Please compile Gromacs from t

Re: [gmx-users] Re: Looking for GPU benchmarks

2012-08-27 Thread Szilárd Páll
Which system did you run? What settings? A few tips: - Use CUDA 4.2 (5.0 on Kepler); - Have at least 10-20k atoms/GPU (and more to get peak GPU performance); - Use the shortest cut-off possible to allow CPU-GPU load balancing; - Due to initial domain-decomposition/parallelization overhead, scaling

Re: [gmx-users] Re: Looking for GPU benchmarks

2012-08-28 Thread Szilárd Páll
On Tue, Aug 28, 2012 at 8:59 AM, Mathieu38 wrote: > OK. Which version / branch / revision should I use then ? The latest git version from the nbnxn_hybrid_acc branch. -- Szilárd > Thanks. > > > > -- > View this message in context: > http://gromacs.5086.n6.nabble.com/Looking-for-GPU-benchmarks-tp5000377p5

Re: [gmx-users] RE: Looking for GPU benchmarks

2012-08-28 Thread Szilárd Páll
f An Open World > > http://www.bull.com > > Before printing, think about the environment. > > From: Szilárd Páll [via GROMACS] > [mailto:Szilárd Páll [via GROMACS]] > Sent: Monday, 27

Re: [gmx-users] mdrun_gpu on NVidia Quadro 2000

2012-08-28 Thread Szilárd Páll
On Tue, Aug 28, 2012 at 1:30 AM, Mauricio Carrillo Tripp wrote: > Hi, > > I was wondering if the numbers I get are reasonable while running the DHFR > benchmark at: > http://www.gromacs.org/Documentation/Installation_Instructions/GPUs > > As you can see, CPU performance is quite good, but GPU perf

Re: [gmx-users] 4 question

2012-08-28 Thread Szilárd Páll
On Fri, Aug 17, 2012 at 7:19 PM, Hossein Lanjanian wrote: > Hi > > we are new academic users of GROMACS. we installed gromacs 4.5.5 and > tried to learn the job by using tutorials found in the "gromacs.org" > web site. There is one question: > we successfully ran the "1PGB.pdb". > we know that *

Re: [gmx-users] Problem with OMP_NUM_THREADS=12 mpirun -np 16 mdrun_mpi

2012-08-29 Thread Szilárd Páll
On Wed, Aug 29, 2012 at 5:32 AM, jesmin jahan wrote: > Dear All, > > I have installed gromacs VERSION 4.6-dev-20120820-87e5bcf with > -DGMX_MPI=ON . I am assuming as OPENMP is default, it will be > automatically installed. > > My Compiler is > /opt/apps/intel11_1/mvapich2/1.6/bin/mpicc Intel icc (
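
For reference, a hybrid launch of this shape would look like the following (rank/thread counts mirror the subject line and are illustrative; their product has to match the cores actually available):

    $ export OMP_NUM_THREADS=12
    $ mpirun -np 16 mdrun_mpi -ntomp 12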

Re: [gmx-users] Problem with OMP_NUM_THREADS=12 mpirun -np 16 mdrun_mpi

2012-08-30 Thread Szilárd Páll
>> >> Is it the case that later version of 4.6 has this feature? >> >> Please let me know. >> >> Thanks, >> Jesmin >> >> On Wed, Aug 29, 2012 at 4:27 AM, Szilárd Páll >> wrote: >> > On Wed, Aug 29, 2012 at 5:32 AM, jesmin ja

Re: [gmx-users] A favor question: experience running Gromacs in the cloud

2012-10-19 Thread Szilárd Páll
Hi, When it comes to hardware, pretty much the only thing that matters is processors (CPUs and maybe GPUs); memory will always be more than enough -- unless you plan to run massive analysis in the cloud. In general, if your simulation system runs fast enough on a single compute node (1-2 CPUs), t

Re: [gmx-users] GPU warnings

2012-11-05 Thread Szilárd Páll
The first warning indicates that you are starting more threads than the hardware supports, which would explain the poor performance. Could you share a log file of the suspiciously slow run as well as the command line you used to start mdrun? Cheers, -- Szilárd On Sun, Nov 4, 2012 at 5:32 PM, Albert

Re: [gmx-users] why not double precision?

2012-11-05 Thread Szilárd Páll
On Mon, Oct 22, 2012 at 9:32 AM, Albert wrote: > hello: > > I found that the GPU version doesn't support double precison. > > double precision build > > $ cmake -DGMX_DOUBLE=ON ../gromacs-src > > Note that GPU acceleration is not compatible with double precision builds. > > > I am just wondering

Re: [gmx-users] Running Gromacs in Clusters

2012-11-08 Thread Szilárd Páll
Hi, With a fast network like Cray's you can easily get to 400-500 atoms/core with 4.5 (that's 400+ cores for your system), perhaps even further. With 4.6 this improves quite a bit (up to 2-3x). -- Szilárd On Wed, Nov 7, 2012 at 5:19 PM, Erik Marklund wrote: > Hi, > > Sure you can go beyo

Re: [gmx-users] GPU warnings

2012-11-09 Thread Szilárd Páll
ecause Hyper-Threading with OpenMP doesn't always help. > > I have also attached my log file (from "mdrun_intel_cuda5 -v -s topol.tpr > -testverlet") in case you find it helpful. > I don't see it attached. -- Szilárd > > Thanks, > Thomas > > >

Re: [gmx-users] GPU warnings

2012-11-09 Thread Szilárd Páll
Hi, You must have an odd sysconf version! Could you please check what the sysconf system variable's name is in the sysconf man page (man sysconf) where it says something like: _SC_NPROCESSORS_ONLN The number of processors currently online. The first line should be one of the fol
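
A quick way to query the same value from the shell, assuming GNU getconf (not from the thread):

    $ getconf _NPROCESSORS_ONLN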

Re: [gmx-users] problem with MPI

2012-11-11 Thread Szilárd Páll
FYI: If you are planning to run single-node, you don't need MPI. Just compile with default settings and you'll get a thread-parallel version that supports MPI parallelization within mdrun based on multi-threading (using the library thread-MPI). In practice this means that: $ mdrun -nt 2 will use two

Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-12 Thread Szilárd Páll
Hi Sebastian, That is very likely a bug so I'd appreciate if you could provide a bit more information, like: - OS, compiler - results of runs with the following configurations: - "mdrun -nb cpu" (to run CPU-only with Verlet scheme) - "GMX_EMULATE_GPU=1 mdrun -nb gpu" (to run GPU emulation usi
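
The two suggested runs written out (tpr name illustrative):

    $ mdrun -nb cpu -s topol.tpr                    # Verlet scheme, CPU only
    $ GMX_EMULATE_GPU=1 mdrun -nb gpu -s topol.tpr  # GPU-emulation kernels on the CPU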

Re: [gmx-users] GPU warnings

2012-11-14 Thread Szilárd Páll
headers. Thanks, -- Szilárd On Sat, Nov 10, 2012 at 5:24 PM, Thomas Evangelidis wrote: > > > On 10 November 2012 03:21, Szilárd Páll wrote: > >> Hi, >> >> You must have an odd sysconf version! Could you please check what is the >> sysconf system variable

Re: [gmx-users] GPU warnings

2012-11-16 Thread Szilárd Páll
vailable -66 logical CPU cores with 1 > thread-MPI threads. > > This will cause considerable performance loss! > > I have also attached the md.log file. > > thanks, > Thomas > > > > On 14 November 2012 19:48, Szilárd Páll wrote: > >> H

Re: [gmx-users] GPU warnings

2012-11-16 Thread Szilárd Páll
Hi Albert, Apologies for hijacking your thread. Do you happen to have Fedora 17 as well? -- Szilárd On Sun, Nov 4, 2012 at 10:55 AM, Albert wrote: > hello: > > I am running Gromacs 4.6 GPU on a workstation with two GTX 660 Ti (2 x > 1344 CUDA cores), and I got the following warnings: > > tha

Re: [gmx-users] GPU warnings

2012-11-19 Thread Szilárd Páll
v 16, 2012 at 4:31 PM, Szilárd Páll wrote: > Hi Albert, > > Apologies for hijacking your thread. Do you happen to have Fedora 17 as > well? > > -- > Szilárd > > > > On Sun, Nov 4, 2012 at 10:55 AM, Albert wrote: > >> hello: >> >> I am running

Re: [gmx-users] GPU warnings

2012-11-19 Thread Szilárd Páll
ortran.x86_64 4.7.2-2.fc17 > @updates > libgcc.i686 4.7.2-2.fc17 > @updates > libgcc.x86_64 4.7.2-2.fc17 @updates > > > Thomas > > > > On 19 November 2012 16:57, Szilárd Páll wrote: > > &

Re: [gmx-users] GPU warnings

2012-11-21 Thread Szilárd Páll
On Mon, Nov 19, 2012 at 6:25 PM, Szilárd Páll wrote: > On Mon, Nov 19, 2012 at 4:09 PM, Thomas Evangelidis wrote: > >> Hi Szilárd, >> >> I compiled with the Intel compilers, not gcc. In case I am missing >> something, these are the versions I have: >> > >

Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-21 Thread Szilárd Páll
.: 2.0, ECC: no, stat: > > compatible > > #1: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC: no, stat: > > compatible > > > > > > Back Off! I just backed up ctab14.xvg to ./#ctab14.xvg.2# > > > > Back Off! I just backed up dtab14.xvg to ./#dtab14

Re: [gmx-users] strange lincs warning with version 4.6

2012-11-23 Thread Szilárd Páll
Hi, On Fri, Nov 23, 2012 at 9:40 AM, sebastian < sebastian.wa...@physik.uni-freiburg.de> wrote: > Dear GROMACS user, > > I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on my > local desktop Watch out, the dirty version suffix means you have changed something in the source.

Re: [gmx-users] GPU warnings

2012-11-26 Thread Szilárd Páll
On Sun, Nov 25, 2012 at 8:47 PM, Thomas Evangelidis wrote: > Hi Szilárd, > > I was able to run code compiled with icc 13 on Fedora 17, but as I don't > > have Intel Compiler v13 on this machine I can't check it now. > > > > Please check if it works for you with gcc 4.7.2 (which is the default) > a

Re: [gmx-users] Is any command to distinguish gromacs 4.5.4 and 4.5.5

2012-11-26 Thread Szilárd Páll
Or run any binary with the -version option, e.g.: $ mdrun -version -- Szilárd On Sun, Nov 25, 2012 at 10:30 AM, Mark Abraham wrote: > Look at the top of the output of any GROMACS program. > > Mark > > On Sun, Nov 25, 2012 at 10:06 AM, Acoot Brett > wrote: > > > Dear All, > > > > Is any gromacs c

Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-28 Thread Szilárd Páll
Dear Makoto Yoneya, Thank you for the feedback, it is of great help! I will try to reproduce the issue because mdrun should not segfault with any gcc version 4.3 and above. Could you please provide two more things: - a log file of the failed run using the latest code from git; - run mdrun with th

Re: [gmx-users] Does GPU support ATI card?

2012-11-29 Thread Szilárd Páll
No. -- Szilárd On Thu, Nov 29, 2012 at 5:15 PM, Albert wrote: > hello: > > I am just wondering does Gromacs GPU accerlation suppport ATI card? > > THX > > Albert > -- > gmx-users mailing list gmx-users@gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-users

Re: [gmx-users] Does GPU support ATI card?

2012-11-29 Thread Szilárd Páll
GPU cards. > > > I've got ATI-Radeon HD5770, I tried both openMM and withou openMM 4.6, it > always claimed "couldn't find CUDA device....." > > What's happening? > > > > > On 11/29/2012 05:44 PM, Szilárd Páll wrote: > >> No. >>

Re: [gmx-users] Does GPU support ATI card?

2012-11-29 Thread Szilárd Páll
> On 11/29/12 12:51 PM, Albert wrote: > >> On 11/29/2012 06:44 PM, Szilárd Páll wrote: >> >>> Hi Albert, >>> >>> That claim is false. The current mdrun-openmm version only supports the >>> CUDA OpenMM plugin and even that is not fully supported a

Re: [gmx-users] Build problem with 4.6beta1

2012-11-29 Thread Szilárd Páll
On Fri, Nov 30, 2012 at 3:20 AM, Justin Lemkul wrote: > > Hooray for being the first to report a problem with the beta :) > > We have a cluster at our university that provides us with access to some > CPU-only nodes and some CPU-GPU nodes. I'm having problems with getting > 4.6beta1 to build, an

Re: [gmx-users] Build on OSX with 4.6beta1

2012-12-03 Thread Szilárd Páll
Hi, I think this happens either because you have cmake 2.8.10 and the host-compiler gets double-set or because something gets messed up when you use clang/clang++ with gcc as the CUDA host-compiler. Could you provide the exact error output you are getting as well as cmake invocation? As I don't ha

Re: [gmx-users] Build on OSX with 4.6beta1

2012-12-03 Thread Szilárd Páll
Hi, Preferably we should avoid starting an OS flame-war here. However, before this thread turns into an ode to Apple, I have to say: Mac OS X is/can be a pain both for development and computational use. As long as you use it to code for iPhone or write some Mac app in Xcode, it's probably excellen

Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Szilárd Páll
djusted). -- Szilárd > > Mark > > On Tue, Dec 4, 2012 at 3:41 PM, sebastian < > sebastian.wa...@physik.uni-freiburg.de> wrote: > > > On 11/23/2012 08:29 PM, Szilárd Páll wrote: > > > >> Hi, > >> > >> On Fri, Nov 23, 2012

Re: [gmx-users] Re: Build on OSX with 4.6beta1

2012-12-04 Thread Szilárd Páll
Hi Carlo, I think I know now what the problem is. The CUDA compiler, nvcc, uses the host compiler to compile CPU code which is generated as C++ and therefore, a C++ compiler is needed. However, up until CUDA 5.0 nvcc did not recognize the Intel C++ compiler (icpc) and it would only accept icc as

Re: [gmx-users] GPU warnings

2012-12-11 Thread Szilárd Páll
Hi Thomas, It looks like some gcc 4.7-s don't work with CUDA, although I've been using various Ubuntu/Linaro versions, most recently 4.7.2 and had no issues whatsoever. Some people seem to have bumped into the same problem (see http://goo.gl/1onBz or http://goo.gl/JEnuk) and the suggested fix is t

Re: [gmx-users] GPU warnings

2012-12-11 Thread Szilárd Páll
On Tue, Dec 11, 2012 at 6:49 PM, Mirco Wahab < mirco.wa...@chemie.tu-freiberg.de> wrote: > Am 11.12.2012 16:04, schrieb Szilárd Páll: > > It looks like some gcc 4.7-s don't work with CUDA, although I've been >> using >> various Ubuntu/Linaro versions, mos

Re: [gmx-users] Gromacs compilation on AMD multicore

2011-07-05 Thread Szilárd Páll
Additionally, if you care about a few percent extra performance, you should use gcc 4.5 or 4.6 for compiling Gromacs as well as FFTW (unless you have a bleeding-edge OS which was built with any of these latest gcc versions). While you might not see a lot of improvement in mdrun performance (wrt gcc

Re: [gmx-users] Installation of gromacs-gpu on windows

2011-07-07 Thread Szilárd Páll
how-to about step-by-step compiling of gpu-accelerated gromacs under > windows, because now i'm totally confused... Thanks in advance! > > 2011/6/30 Szilárd Páll : >> Dear Andrew, >> >> Compiling on Windows was tested only using MSVC and I have no idea if >> it works

Re: [gmx-users] Installation of gromacs-gpu on windows

2011-07-08 Thread Szilárd Páll
I think you made the right decision! :) -- Szilárd On Fri, Jul 8, 2011 at 12:50 PM, Андрей Гончар wrote: > Thanks a lot! > Now we decided to use gromacs under linux and the installation of > gromacs and gromacs-gpu has passed without errors > Problem is solved :) > > 201

[gmx-users] Re: [gmx-developers] GPU

2011-07-12 Thread Szilárd Páll
Hi, As this is not a development-related question I'm moving the discussion to the user's list. Future replies should be sent *only* to gmx-users@gromacs.org. As Axel pointed out, the list of CUDA-compatible devices is much broader than the list of cards we label compatible. The compatibility che

Re: [gmx-users] RE: Gromacs on GPU: GTX or Tesla?

2011-08-04 Thread Szilárd Páll
Hi, Tesla cards won't give you much benefit when it comes to running the current Gromacs. Additionally, I can tell you this much: it won't change in the future either. The only advantage of the C20x0-s is ECC and double precision - which is ATM anyway not supported in Gromacs on GPUs. Gromacs

Re: [gmx-users] GROMACS with GPU

2011-08-18 Thread Szilárd Páll
Hi, Gromacs 4.5.x works only with OpenMM 2.x. You might be able to use CUDA 4.0, but you will probably have to recompile OpenMM from source. Cheers, -- Szilárd On Fri, Aug 19, 2011 at 2:23 AM, Park, Jae Hyun nmn wrote: > Dear GMX users, > > > > I am installing GMX 4.5.3 with GPU. > > But, the

Re: [gmx-users] more than 100% CPU

2011-08-18 Thread Szilárd Páll
It is true that on Intel CPUs with HT supported and enabled you get up to a 10-15% speedup if you also use all virtual cores compared to running only as many threads as real cores. Additionally, as the OS reports all virtual processors, Gromacs will use all of them by default, i.e. will run with 8 thread

Re: [gmx-users] building GROMACS 4.5.4 on Power6 with CMAKE

2011-09-14 Thread Szilárd Páll
Hi, I have not followed the entire discussion so I might be completely wrong, but I may be able to fill in some gaps. > Firstly, including config.h inside the fortran .F kernel files for power6 is > causing problems with > their parsing using xlf. adding -WF,-qfpp didn't help. Had to provide a > modified x

Re: [gmx-users] RE:RE: gromacs installation

2011-10-11 Thread Szilárd Páll
Hi, Based on the message it seems that autom4te (part of the autoconf tools) can't write some temporary file to the standard temp location /tmp. That would be quite strange, as if the temp directory is not there, $TMPDIR should be defined; but I suspect it's not, otherwise autom4te would have picked it

Re: [gmx-users] Gromacs: Cloud Vs. Boinc Server?

2011-10-11 Thread Szilárd Páll
Hi Gregory, I am not very familiar with the cloud computing offerings, but as far as I know, in general they are not a very cheap solution when it comes to relatively low usage (non-massive enterprise use). If you need it only for your own research, you might be better off applying for

Re: [gmx-users] Gromacs: Cloud Vs. Boinc Server?

2011-10-12 Thread Szilárd Páll
Dear Stephan, > Radeons work as well.  You can put a 3-4 GPU board together with the highest > end AMD or Intel chip for 3K, plus 16G RAM if you look around for a day or > two, but the cooling is the main problem (with 1/4 the price radeons Vs. GTX > cards), so one has to take cooling into acco

Re: [gmx-users] Gromacs: Cloud Vs. Boinc Server?

2011-10-12 Thread Szilárd Páll
erformance point of view the 570 is way better and depending on the use case even a 560 can be a decent and cheap option. -- Szilárd > On Wed, Oct 12, 2011 at 9:54 AM, Szilárd Páll wrote: >> Dear Stephan, >> >>> Radeons work as well.  You can put a 3-4 GPU board together w

Re: [gmx-users] using gromacs with an specific GCC

2011-10-12 Thread Szilárd Páll
Hi Nathalia, Right, gcc 4.1 is quite controversial as there is a bug in it which is thought to be causing mdrun crashes. So you'd better stay away from 4.1 as well as from other old gcc versions. I'd recommend 4.5 or 4.6 as these have gotten really good, even compared to icc - at least when it comes to

Re: [gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-17 Thread Szilárd Páll
> --- [CMakeCache.txt] - > > ... > > //Flags used by the compiler during all build types > CMAKE_CXX_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99  ' > > //Flags used by the compiler during release builds. > CMAKE_CXX_FLAGS_RELEASE:STRING=-mtune
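
For reference, MKL is typically selected at configure time roughly like this (a sketch; the variable names follow the build documentation of that period and should be treated as assumptions, paths illustrative):

    $ cmake .. -DGMX_FFT_LIBRARY=mkl \
        -DMKL_INCLUDE_DIR=$MKLROOT/include \
        -DMKL_LIBRARIES="-L$MKLROOT/lib/intel64 -lmkl_rt"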

Re: [gmx-users] failure during compiling gromacs 4.5.5

2011-10-24 Thread Szilárd Páll
Hi, The error messages are all referring to SSE 4.1 packed integer min/max operations not being recognized. I assume that these were enabled by the "-xHOST" compiler option, and icc automatically generated these instructions - the files it's complaining about are even temporary files. Could it be
