Yes, it is a Westmere system.

Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E7- 8870  @ 2.40GHz" CPUType=x86_64)
      L3Cache L#0 (size=30720KB linesize=64 ways=24)
        L2Cache L#0 (size=256KB linesize=64 ways=8)
          L1dCache L#0 (size=32KB linesize=64 ways=8)
            L1iCache L#0 (size=32KB linesize=64 ways=4)
              Core L#0 (P#0)
                PU L#0 (P#0)
        L2Cache L#1 (size=256KB linesize=64 ways=8)
          L1dCache L#1 (size=32KB linesize=64 ways=8)
            L1iCache L#1 (size=32KB linesize=64 ways=4)
              Core L#1 (P#1)
                PU L#1 (P#1)

So each core has its own L1 and L2 caches, and the cores on a socket share the 
L3.  Maybe I shouldn't care where, or whether, the MPI processes are bound 
within a socket; if I can test the binding, that will be good enough for me.
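
As a quick check, I gather Open MPI can print each rank's binding at launch 
if --report-bindings is added to the mpirun line, along these lines (./my_app 
standing in for the real executable, and [binding options] for whatever 
mapping/binding flags end up being the answer):

  mpirun -np 4 --report-bindings [binding options] ./my_app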

So my initial question now becomes:

What is the best/easiest way to get the mapping below?  A rankfile?  
--cpus-per-proc 2 --bind-to-socket?  Something else?

RANK  SOCKET  CORE
0     0       unspecified
1     0       unspecified
2     1       unspecified
3     1       unspecified
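
For example, if a rankfile turns out to be the way to go, I assume something 
like this would pin that layout explicitly (node01 is a placeholder hostname, 
and I'm reading the slot syntax as slot=<socket>:<core>):

  rank 0=node01 slot=0:0
  rank 1=node01 slot=0:1
  rank 2=node01 slot=1:0
  rank 3=node01 slot=1:1

launched with something like:

  mpirun -np 4 -rf myrankfile ./my_app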


Thanks

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Brice Goglin
Sent: Wednesday, November 07, 2012 6:17 PM
To: us...@open-mpi.org
Subject: EXTERNAL: Re: [OMPI users] Best way to map MPI processes to sockets?

What processor and kernel is this?  (See /proc/cpuinfo, or run "lstopo -v" and 
look for the attributes on the Socket line.)  Your hwloc output looks like an 
Intel Xeon Westmere-EX (E7-48xx or E7-88xx).
The likwid output is likely wrong (maybe confused by the fact that hardware 
threads are disabled).

Brice