Gus,
You hit the nail on the head. CPMD and VASP are both fine-grained parallel
quantum mechanics molecular dynamics codes. I believe CPMD has implemented
the domain decomposition methodology found in GROMACS (a classical
fine-grained molecular dynamics code), which significantly diminishes the
scaling problem.
Gbit Ethernet is well known to perform poorly for fine-grained codes like
VASP; its latencies are much too high.
If you want good scaling in a cluster for VASP, you'll need to run
InfiniBand or some other high-speed, low-latency network.
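For illustration, a minimal sketch of steering an Open MPI job onto the
InfiniBand transport instead of TCP (assuming a 1.3-era install with the
openib BTL compiled in; the binary name and process count are placeholders):

  # Explicitly select the InfiniBand (openib), shared-memory (sm),
  # and loopback (self) transports rather than falling back to TCP
  mpirun --mca btl openib,sm,self -np 64 ./vasp

  # Confirm the openib BTL is actually available in this install
  ompi_info | grep btl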
Jim
-----Original Message-----
You can avoid the "library confusion problem" by building 64-bit and 32-bit
versions of Open MPI in two different directories and then using
mpi-selector (on your head and compute nodes) to switch between the two.
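A minimal sketch of that workflow (assuming mpi-selector from the OFED
distribution is installed, and that the two builds were registered under
hypothetical names such as openmpi-64 and openmpi-32):

  # List the MPI installations mpi-selector knows about
  mpi-selector --list

  # Make the 64-bit build the system-wide default
  # (openmpi-64 is a placeholder; use your registered name)
  mpi-selector --set openmpi-64 --system

  # Re-login (or re-source your shell profile) so PATH and
  # LD_LIBRARY_PATH update, then verify which mpirun is active
  which mpirun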
Just my $0.02
Jim
-----Original Message-----
From: users-boun...@open-mpi.org
Does using 1.3.3 require recompiling applications that were compiled
using 1.3.2?
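As a generic check (not specific to any Open MPI release), you can see
which libmpi a binary is linked against before deciding whether to
rebuild; my_app is a placeholder name:

  # Show which libmpi the application resolves at load time
  ldd ./my_app | grep libmpi

  # Show the Open MPI version currently on the PATH
  ompi_info | head -5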
Jim
-----Original Message-----
From: announce-boun...@open-mpi.org [mailto:announce-boun...@open-mpi.org]
On Behalf Of Ralph Castain
Sent: Tuesday, July 14, 2009 2:11 PM
To: OpenMPI Announce
Subject: [Open MPI