Lenny Verkhovsky wrote:
2. If you are using Open MPI 1.3 you don't have to
use mpi_leave_pinned 1, since it's the default value
And if you're using "-mca btl self,sm" on a single node, I think
mpi_leave_pinned is immaterial (since it's for openib).
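For reference, a single-node launch along these lines keeps all traffic on
the shared-memory and self BTLs (the process count and the ./xhpl binary
name here are placeholders, not taken from the posts above):

    mpirun -np 8 -mca btl self,sm ./xhpl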
On Thu, Jul 2, 2009 at 4:47 PM, Swamy Kandadai wrote:
Jeff:
I'm not Jeff, but...
Linpack has different characteristics at different problem sizes. At
small problem sizes, any number of different overheads could be the
problem. At large problem sizes, one should approach the peak
floating-point performance of the machine.
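As a rough back-of-the-envelope check (assuming a dual-socket, 8-core node
and 4 double-precision flops per cycle per Nehalem core; neither figure is
stated in this thread):

    2.66 GHz x 8 cores x 4 flops/cycle/core = 85.12 GFlops peak

so the 82.68 GFlops quoted below would already be about 97% of nominal
peak, and turbo mode raises the achievable clock (and hence that ceiling)
somewhat.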
Hi,
I am not an HPL expert, but this might help.
1. The rankfile mapper is available only from Open MPI 1.3 on; if you are
using Open MPI 1.2.8, try -mca mpi_paffinity_alone 1 instead (see the
sketch below this list)
2. If you are using Open MPI 1.3 you don't have to set mpi_leave_pinned 1,
since it's the default value
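To make item 1 concrete, here is a minimal sketch; the host name node01,
the 4-process count, and the ./xhpl binary name are placeholders, not taken
from the posts. With Open MPI 1.3, a rankfile pins each rank to a core:

    $ cat rankfile
    rank 0=node01 slot=0
    rank 1=node01 slot=1
    rank 2=node01 slot=2
    rank 3=node01 slot=3
    $ mpirun -np 4 -rf rankfile ./xhpl

With Open MPI 1.2.8, which has no rankfile mapper, the closest equivalent
is to let the runtime bind processes itself:

    $ mpirun -np 4 -mca mpi_paffinity_alone 1 ./xhpl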
Lenny.
On Thu, Jul 2, 2009 at 4:47 PM, Swamy Kandadai wrote:
Jeff:
I am running on a 2.66 GHz Nehalem node. On this node, turbo mode and
hyperthreading are enabled.
When I run LINPACK with Intel MPI, I get 82.68 GFlops without much
trouble.
When I ran with OpenMPI (I have OpenMPI 1.2.8 but my colleague was using
1.3.2), I was using the same MKL libraries