Re: [OMPI users] very bad parallel scaling of vasp using openmpi

2009-08-24 Thread jimkress_58
Gus, you hit the nail on the head. CPMD and VASP are both fine-grained parallel quantum-mechanics molecular dynamics codes. I believe CPMD has implemented the domain decomposition methodology found in GROMACS (a classical fine-grained molecular dynamics code), which significantly diminishes the s…
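For context, a minimal sketch of what nearest-neighbour domain decomposition looks like in MPI (this is not CPMD or GROMACS code; the 1-D layout, slab size, and variable names are illustrative only): each rank owns a slab of the global array and exchanges only thin halo regions with its two neighbours, so the communication volume per rank stays roughly constant as more ranks are added.

/* Hypothetical 1-D domain decomposition sketch (not CPMD/GROMACS code):
 * each rank owns a slab of the global array and exchanges only the
 * boundary ("halo") cells with its two neighbours, so communication
 * stays local instead of all-to-all. */
#include <mpi.h>
#include <stdlib.h>

#define LOCAL_N 1024            /* interior cells per rank (arbitrary size) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local slab plus one ghost cell on each side */
    double *u = calloc(LOCAL_N + 2, sizeof(double));

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* exchange halo cells with nearest neighbours only */
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* ... local stencil/force update on u[1..LOCAL_N] would go here ... */

    free(u);
    MPI_Finalize();
    return 0;
}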

Re: [OMPI users] very bad parallel scaling of vasp using openmpi

2009-08-18 Thread jimkress_58
Gigabit Ethernet is well known to perform poorly for fine-grained codes like VASP; its latencies are much too high. If you want good scaling in a cluster for VASP, you'll need to run InfiniBand or some other high-speed, low-latency network. Jim …
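A simple ping-pong test makes the latency gap visible (a generic sketch, not taken from this thread; message size and repetition count are arbitrary, and it assumes exactly two ranks, e.g. mpirun -np 2 across two nodes). On Gigabit Ethernet the half round-trip is typically tens of microseconds, while InfiniBand is in the low single digits, which is exactly the difference a fine-grained code feels on every small message.

/* Minimal ping-pong latency sketch: rank 0 and rank 1 bounce a single
 * byte back and forth and report the average one-way time. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, i;
    const int reps = 1000;
    char byte = 0;
    MPI_Status st;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average one-way latency: %g us\n",
               (t1 - t0) / (2.0 * reps) * 1e6);

    MPI_Finalize();
    return 0;
}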

Re: [OMPI users] Open MPI: Problem with 64-bit Open MPI and Intel compiler

2009-07-24 Thread jimkress_58
You can avoid the "library confusion problem" by building 64-bit and 32-bit versions of Open MPI in two different directories and then using mpi-selector (on your head and compute nodes) to switch between the two. Just my $0.02. Jim …
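As a quick sanity check after switching installations, a small probe program can report at run time which build an executable is actually linked against (my own sketch, not from the thread; the OMPI_*_VERSION macros are Open MPI-specific and may not be present in other MPIs, and inferring 32-bit versus 64-bit from sizeof(void *) is an assumption about the build, not an Open MPI feature).

/* Report the Open MPI release (if the OMPI_* macros are available), the
 * MPI standard level, and the pointer width, so you can confirm which of
 * the two installations (32-bit or 64-bit) mpi-selector handed you. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, major, minor;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_version(&major, &minor);       /* MPI standard level, e.g. 2.0 */

    if (rank == 0) {
#ifdef OMPI_MAJOR_VERSION                   /* Open MPI-specific macros from mpi.h */
        printf("Open MPI %d.%d.%d, ", OMPI_MAJOR_VERSION,
               OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#endif
        printf("MPI %d.%d, %d-bit build\n",
               major, minor, (int)(sizeof(void *) * 8));
    }

    MPI_Finalize();
    return 0;
}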

Re: [OMPI users] [Open MPI Announce] Open MPI v1.3.3 released

2009-07-14 Thread jimkress_58
Does use of 1.3.3 require recompilation of applications that were compiled using 1.3.2? Jim …