Yes Jeff, I'm fairly sure hyperthreading is indeed enabled: 16 CPUs are visible in the /proc/cpuinfo pseudo-file, while it's an 8-core Nehalem node.

However, I have always carefully checked that only 8 processes are running on each node. Could it be that they are assigned to 8 hyperthreads but only 4 cores, for example?
Is this actually possible with paffinity set to 1?
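In case it helps, here is how I could check this directly on a node (a minimal, untested sketch, Linux-specific): each process prints the logical CPUs it is bound to via sched_getaffinity(2), and I can then compare those against the "core id" fields in /proc/cpuinfo to see whether two ranks share a physical core.

    /* affinity_check.c -- print the logical CPUs this process may run on.
     * Untested sketch; compile with: gcc -std=c99 -o affinity_check affinity_check.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t mask;
        int cpu;

        if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_getaffinity");
            return 1;
        }
        printf("pid %d bound to CPUs:", (int)getpid());
        for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &mask))
                printf(" %d", cpu);
        printf("\n");
        return 0;
    }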

 Thanks,   G.



On 06/01/2011 21:21, Jeff Squyres wrote:
(now that we're back from vacation)

Actually, this could be an issue.  Is hyperthreading enabled on your machine?

Can you send the text output from running hwloc's "lstopo" command on your 
compute nodes?

I ask because if hyperthreading is enabled, OMPI might be assigning one process 
per *hyperthread* (vs. one process per *core*).  And that could be disastrous 
for performance.
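
If lstopo isn't handy, the same information is available from the hwloc C API; something like this (a rough sketch, not compiled here) should report whether there are more hardware threads (PUs) than cores:

    /* ht_check.c -- compare core count vs. hardware-thread (PU) count.
     * Rough sketch; compile with: gcc ht_check.c -lhwloc */
    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;
        int ncores, npus;

        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);
        ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
        npus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
        printf("%d cores, %d hardware threads%s\n", ncores, npus,
               npus > ncores ? " -- hyperthreading is enabled" : "");
        hwloc_topology_destroy(topo);
        return 0;
    }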



On Dec 22, 2010, at 2:25 PM, Gilbert Grosdidier wrote:

Hi David,

Yes, I set mpi_paffinity_alone to 1. Is that right and sufficient, please?
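For reference, I set it as an MCA parameter on the mpirun command line, along the lines of (the application name here is just a placeholder):

    mpirun --mca mpi_paffinity_alone 1 -np 8 ./my_app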

Thanks for your help,   Best,   G.



On 22/12/2010 20:18, David Singleton wrote:
Is the same level of process and memory affinity or binding being used?

On 12/21/2010 07:45 AM, Gilbert Grosdidier wrote:
Yes, there is definitely only 1 process per core with both MPI implementations.

Thanks, G.


On 20/12/2010 20:39, George Bosilca wrote:
Are your processes placed the same way with the two MPI implementations? 
Per-node vs. per-core?

george.
