On Oct 15, 2008, at 9:35 AM, Francesco Iannone wrote:
I have a cluster of 16 nodes, each a dual-CPU dual-core AMD machine with 16 GB of RAM,
connected with Cisco InfiniBand HCAs and an InfiniBand switch.
It runs 64-bit Red Hat Enterprise Linux 4, Open MPI 1.2.7, PGI 7.1-4, and
openib-1.2-7.
Hence it means that the option --disable-ptmall
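(The option being referred to controls Open MPI's internal ptmalloc2 memory
manager. A minimal rebuild sketch, assuming the --with-memory-manager=none
configure option of the Open MPI 1.2 series; the install prefix and PGI
compiler settings below are illustrative only, not taken from the original
mail:

    ./configure --prefix=$HOME/openmpi-1.2.7-nomm \
        CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 \
        --with-memory-manager=none
    make all install

The application then has to be recompiled against this build before
re-running.)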
For MPICH2 1.0.7, configure with --with-device=ch3:nemesis. That will use
shared memory within a node, unlike ch3:sock, which uses TCP. Nemesis is the
default in 1.1a1.
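A minimal sketch of such a build, assuming a standard source install (the
prefix below is only an example):

    ./configure --prefix=/opt/mpich2-1.0.7-nemesis --with-device=ch3:nemesis
    make
    make install

Rebuilding and relaunching the application with this installation's
mpicc/mpiexec lets intra-node traffic go over shared memory instead of TCP
sockets.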
Rajeev
> Date: Wed, 15 Oct 2008 18:21:17 +0530
> From: "Sangamesh B"
> Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
Hello Jeff
First of all, about your reply:
"I'm surprised that we have not yet put this info in our FAQ -- I'll
make a note to do so..."
I sent a support request to the mailing list on 2008-08-21 12:43:40
Subject: [OMPI users] Memory allocation with PGI compiler
About ptmalloc:
I have a cluster of 1
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote:
>
> Hi guys,
>
> On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
>
>> Actually I had much different results,
>>
>> gromacs-3.3.1, one node, dual-core dual-socket opt2218:
>>   openmpi-1.2.7 + pgi/7.2
>>   mpich2 + gcc
>>
>
>For some reason