Here's my advice: Don't trust anyone's advice. Benchmark it yourself and see.

The problems vary so wildly that only you can tell whether your problem will benefit from oversubscription. There are too many factors to predict it accurately: schedulers, memory usage, network/interconnect hardware, disk seek times, and probably a hundred other things.
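
If it helps, here is the sort of trivially compilable micro-benchmark I mean -- a minimal sketch (untested on your setup; BUF_DOUBLES and REPS are arbitrary knobs, not anything blessed):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy compute kernel: repeatedly sweep a per-rank buffer.
       BUF_DOUBLES sets the working set (1<<20 doubles = 8 MB),
       so you can make it fit in cache or spill out of it. */
    #define BUF_DOUBLES (1 << 20)
    #define REPS 100

    int main(int argc, char **argv)
    {
        int rank, nprocs, i, r;
        double sum = 0.0, t0, t1, elapsed, worst;
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        buf = malloc(BUF_DOUBLES * sizeof(double));
        if (buf == NULL)
            MPI_Abort(MPI_COMM_WORLD, 1);
        for (i = 0; i < BUF_DOUBLES; i++)
            buf[i] = (double)i;

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (r = 0; r < REPS; r++)
            for (i = 0; i < BUF_DOUBLES; i++)
                sum += buf[i];
        t1 = MPI_Wtime();
        elapsed = t1 - t0;

        /* The slowest rank is what gates a real job. */
        MPI_Reduce(&elapsed, &worst, 1, MPI_DOUBLE, MPI_MAX,
                   0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("np=%d  slowest rank: %.3f s  (sum=%g)\n",
                   nprocs, worst, sum);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Build it and run it at a few different rank counts. Depending on your Open MPI version, you may need a hostfile with enough slots (or the --oversubscribe flag) before mpirun will start more ranks than cores:

    mpicc -O2 bench.c -o bench
    mpirun -np 4 ./bench    # one rank per core on a quad-core
    mpirun -np 8 ./bench    # oversubscribed 2:1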

I've even seen mixed results from oversubscribing within a single algorithm. (Granted, this was mostly with the older generation of HyperThreading, so I'm not sure how things fare with Nehalem.) The most notable effect I've observed is related to cache use: if the problem fits in cache, it is much faster. With cores sharing cache, it can even be advantageous to *undersubscribe* the problem, e.g. schedule two processes on a quad-core so each gets the full cache.
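
You can probe that cache effect with the same toy benchmark above: shrink or grow BUF_DOUBLES so the per-rank working set fits in cache or spills out of it, and pin the ranks so you know who is sharing a cache. Hypothetical quad-core runs (binding flag spellings vary between Open MPI versions; recent ones take --bind-to core/socket, older ones --bind-to-core/--bind-to-socket):

    mpirun -np 2 --bind-to socket ./bench   # undersubscribed: full cache per rank
    mpirun -np 4 --bind-to core ./bench     # one rank per core, caches shared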

-- Mark Borgerding



Klymak Jody wrote:
Hi Robert,

I ran some very crude tests and found that things slowed down once you got over 8 cores at a time. However, they didn't slow down by 50% when I went to 16 processes. Sadly, the tests were so crude that I didn't keep good notes (it appears).

I'm running a GCM (general circulation model), so my benchmarks may not be very useful to most folks. If there were an easy-to-compile benchmark that I could run on my cluster, I'd be curious what the results are too.

Thanks,  Jody

On 11-Jul-09, at 2:16 PM, Robert Kubrick wrote:

The Open MPI FAQ recommends not oversubscribing the available cores for best performance, but is this still true? The new Nehalem processors are built to run two hardware threads on each core. On an 8-socket system with 8 cores per socket, that adds up to 8 x 8 x 2 = 128 threads, which Intel claims can run without significant performance degradation. I guess the last word belongs to those who have tried benchmarks and applications on the new Intel processors. Any experience to share?
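
(As a side note: on Linux you can check how many of those logical CPUs are SMT siblings rather than physical cores with something like

    lscpu | egrep 'Thread|Core|Socket'

or by looking at /sys/devices/system/cpu/cpu0/topology/thread_siblings_list.)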

http://www.open-mpi.org/faq/?category=running#oversubscribing
http://en.wikipedia.org/wiki/Simultaneous_multithreading
http://communities.intel.com/community/openportit/server/blog/2009/06/11/nehalem-ex-brings-new-economics-to-scalable-systems


--
Mark Borgerding
3dB Labs, Inc
Innovate.  Develop.  Deliver.
