But exactly the same program gives a different result on another cluster:
there, the result shows no spike at all.
The second cluster has almost the same specs as the first one, except
slightly less memory and a slightly lower clock frequency.
First cluster:  3.0 GHz Intel Xeon, 4 GB memory, CentOS 4.6
Second cluster: 2.8 GHz Intel Xeon, 3 GB memory, Fedora Core 5
Open MPI 1.3 is used on both clusters.
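The benchmark is roughly a timed MPI_Bcast loop per message size, along
these lines (the buffer size, repetition count, and root rank below are
placeholders, not the exact values used):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Placeholders, not the exact benchmark values:
           one 131072 KB (128 MB) message, 10 repetitions, root rank 0. */
        const size_t count = (131072UL * 1024UL) / sizeof(double);
        const int reps = 10;
        double *buf, t0, t1;
        int i, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(count * sizeof(double));

        MPI_Barrier(MPI_COMM_WORLD);   /* line all ranks up first */
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++)
            MPI_Bcast(buf, (int)count, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg MPI_Bcast time: %g sec\n", (t1 - t0) / reps);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and run with, e.g., "mpirun -np 64 ./bcast_bench"
(the executable name here is made up).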

________________________________
From: Eugene Loh <eugene....@sun.com>
To: Open MPI Users <us...@open-mpi.org>
Sent: Friday, April 24, 2009 1:26:14 AM
Subject: Re: [OMPI users] MPI_Bcast from OpenMPI

Okay.  So, going back to Jeff's second surprise, we have 256 Mbyte/2.5
sec = 100 Mbyte/sec = 1 Gbit/sec (sloppy math).  So, without getting
into details of what we're measuring/reporting here, there doesn't on
the face of it appear to be anything wrong with the baseline
performance.  Jeff was right that 256K doubles should have been faster,
but 256 Mbyte... seems reasonable.
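Spelled out, with the same rounding:

    256 Mbyte / 2.5 sec = 102.4 Mbyte/sec
    102.4 Mbyte/sec * 8 bit/byte = 819.2 Mbit/sec, i.e. roughly 1 Gbit/sec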

So, the remaining mystery is the 6x or so spike at 128 Mbyte.  Dunno. 
How important is it to resolve that mystery?

shan axida wrote: 
Sorry, I made a mistake in the calculation:
not 131072 doubles, but 131072 KB,
which is around 128 MB.
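That is: 131072 KB * 1024 bytes/KB = 134,217,728 bytes = 2^27 bytes = 128 Mbyte.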
 
From: Jeff Squyres <jsquy...@cisco.com>
To: Open MPI Users <us...@open-mpi.org>
Sent: Thursday, April 23, 2009 8:23:52 PM
Subject: Re: [OMPI users] MPI_Bcast from OpenMPI


Very strange; 6 seconds for a 1 MB broadcast over 64 processes is *way*
too long.  Even 2.5 sec at 2 MB seems too long.
