On Jul 8, 2013, at 11:35 AM, Michael Thomadakis <drmichaelt7...@gmail.com> 
wrote:

> The issue is that when you read or write PCIe gen 3 data to a non-local NUMA 
> memory, Sandy Bridge will use the inter-socket QPI links to get this data 
> across to the other socket. I think there is a considerable limitation on 
> PCIe I/O traffic going over the inter-socket QPI. One way to get around this 
> is for reads to buffer all data into memory space local to the same socket 
> and then transfer it by code across to the other socket's physical memory. 
> For writes the same approach can be used, with an intermediary process 
> copying the data.

Sure, you'll cause congestion across the QPI network when you do non-local PCI 
reads/writes.  That's a given.

But I'm not aware of a hardware limitation on PCI-requested traffic across QPI 
(I could be wrong, of course -- I'm a software guy, not a hardware guy).  A 
simple test would be to bind an MPI process to a far NUMA node and run a simple 
MPI bandwidth test, and see if you get better/same/worse bandwidth compared to 
binding the MPI process to a near NUMA node.
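
To make that concrete, below is roughly the kind of trivial ping-pong you could 
use (just a sketch -- any existing benchmark, e.g. osu_bw, works just as well).  
Run one process on each of two hosts, first bound to the NUMA node the HCA 
hangs off of, then bound to the far node (e.g. via numactl --cpunodebind / 
--membind or hwloc-bind; exact node numbers depend on your machine), and 
compare the numbers.

    /* bw_pingpong.c -- minimal ping-pong bandwidth sketch.  Compile with mpicc. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        const int len = 4 * 1024 * 1024;     /* 4 MB messages */
        const int warmup = 10, iters = 100;
        int rank, size;
        double t0 = 0.0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 processes\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        char *buf = malloc(len);
        memset(buf, rank, len);              /* first touch places the buffer */

        for (int i = 0; i < warmup + iters; ++i) {
            if (i == warmup) {               /* start the clock after warm-up */
                MPI_Barrier(MPI_COMM_WORLD);
                t0 = MPI_Wtime();
            }
            if (rank == 0) {
                MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)                       /* 2 messages of len bytes per iteration */
            printf("%.1f MB/s\n", 2.0 * len * iters / (t1 - t0) / 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }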

But in terms of doing intermediate (pipelined) reads/writes to local NUMA 
memory before reading/writing to PCI, no, Open MPI does not do this.  Unless 
there is a PCI-QPI bandwidth constraint that we're unaware of, I'm not sure why 
you would do this -- it would likely add considerable complexity to the code 
and it would definitely lead to higher overall MPI latency.
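
For the record, a staged transfer along the lines Michael describes would look 
roughly like the sketch below (assuming libnuma for the bounce-buffer 
placement; the function and the hca_node parameter are made up for 
illustration).  Again: Open MPI does not do this, and the extra memcpy is 
exactly where the added latency would come from.

    /* Sketch of the staging idea above -- NOT something Open MPI does.
     * Assumes libnuma (link with -lnuma); 'hca_node' is the NUMA node the
     * PCIe adapter is attached to (hypothetical parameter). */
    #include <numa.h>
    #include <stdlib.h>
    #include <string.h>

    /* Deliver 'len' incoming bytes into 'app_buf', which lives on the far
     * NUMA node, by way of a bounce buffer local to the adapter's socket. */
    int staged_recv(void *app_buf, size_t len, int hca_node)
    {
        void *bounce = numa_alloc_onnode(len, hca_node);
        if (bounce == NULL)
            return -1;

        /* ... the PCIe DMA / network receive would target 'bounce' here ... */

        /* Extra CPU copy across QPI -- the added latency (and complexity)
         * mentioned above. */
        memcpy(app_buf, bounce, len);

        numa_free(bounce, len);
        return 0;
    }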

Don't forget that the MPI paradigm is for the application to provide the 
send/receive buffer.  Meaning: MPI doesn't (always) control where the buffer is 
located (particularly for large messages).
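
For example, in a trivial exchange like the one below, the buffer is allocated 
(and therefore placed, typically by first-touch) entirely by the application 
before MPI ever sees it:

    /* The application allocates -- and thereby places -- the buffer;
     * MPI is only handed a pointer to it. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *data = malloc(n * sizeof(double));
        for (int i = 0; i < n; ++i)
            data[i] = rank;                  /* first touch fixes the NUMA placement */

        if (rank == 0)
            MPI_Send(data, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(data, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        free(data);
        MPI_Finalize();
        return 0;
    }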

> I was wondering if OpenMPI does any special memory mapping to work around 
> this.

Just what I mentioned in the prior email.

> And if with Ivy Bridge (or Haswell) the situation has improved.

Open MPI doesn't treat these chips any differently.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

