Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-12-02 Thread Jeff Squyres
FWIW, especially on NUMA machines (like AMDs), physical access to network resources (such as NICs / HCAs) can be much faster on specific sockets. For example, we recently ran some microbenchmarks showing that if you run 2 MPI processes across 2 NUMA machines (e.g., a simple ping-pong benchmark) …

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-12-02 Thread Patrick Geoffray
Hi Jeff, Jeff Squyres wrote: I *believe* that this has to do with the physical setup within the machine (i.e., the NIC/HCA bus is physically "closer" to some sockets), but I'm not much of a hardware guy to know that for sure. Someone with more specific knowledge should chime in here... …

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-12-02 Thread Brock Palen
On Dec 2, 2006, at 10:31 AM, Jeff Squyres wrote: FWIW, especially on NUMA machines (like AMDs), physical access to network resources (such as NICs / HCAs) can be much faster on specific sockets. For example, we recently ran some microbenchmarks showing that if you run 2 MPI processes across 2 NUMA machines …

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-12-02 Thread Greg Lindahl
On Sat, Dec 02, 2006 at 10:31:30AM -0500, Jeff Squyres wrote: > FWIW, especially on NUMA machines (like AMDs), physical access to > network resources (such as NICs / HCAs) can be much faster on > specific sockets. Yes, the penalty is actually 50 ns per hop, and you pay it on both sides. …