As far as PCIe goes, I am looking into:

1. Dolphin's implementation of IPoPCIe
2. The SDP protocol and how it can be utilized, mapping TCP to RDMA

I'm not sure whether the only answer here is a custom stack, API, or kernel module. Do you have any input on the above?
On Tuesday, March 1, 2016 6:42 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

On Feb 29, 2016, at 6:48 PM, Matthew Larkin <lar...@yahoo.com> wrote:
>
> 1. I know OpenMPI supports ethernet, but where does it clearly state that?
> - I see on the FAQ on the web page there is a whole list of network
> interconnects, but how do I relate that to Ethernet network etc.?

Open MPI actually supports multiple Ethernet-based interconnects: Cisco usNIC, iWARP, Mellanox RoCE, and TCP sockets.

I suspect the one you are asking about is TCP sockets (which technically doesn't need to run over Ethernet, but TCP-over-Ethernet is probably its most common use case).

> 2. Does OpenMPI work with PCIe and PCIe switch?
> - Is there any specific configuration to get it to work?

Do you have a specific vendor device / networking stack in mind?

In general, Open MPI will use:

- some kind of local IPC mechanism (e.g., some flavor of shared memory) for intra-node communication
- some kind of networking API for inter-node communication

Extending PCIe between servers blurs this line a bit -- peer MPI processes on a remote server can make it look like they are actually local. So it depends on your network stack: do you have some kind of API that effects messaging transport over PCIe?

-- 
Jeff Squyres
jsquy...@cisco.com

For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
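For readers following along, here is a minimal sketch (not from the original thread) of forcing Open MPI onto the TCP BTL between two nodes. The "btl" and "btl_tcp_if_include" MCA parameters are standard Open MPI parameters; the host names (nodeA, nodeB), interface name (eth0), and file name are placeholders, and the program is just a generic ping-pong used to confirm that inter-node traffic goes over the transport you selected.

/* tcp_pingpong.c -- illustrative only.
 *
 * Build and run (placeholder host and interface names):
 *   mpicc tcp_pingpong.c -o tcp_pingpong
 *   mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 \
 *          -np 2 --host nodeA,nodeB ./tcp_pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    if (size < 2) {
        if (rank == 0) {
            fprintf(stderr, "Run with at least 2 ranks\n");
        }
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        /* Rank 0 sends a small message to rank 1. */
        char msg[64] = "hello over whatever BTL was selected";
        MPI_Send(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0 on %s sent a message\n", name);
    } else if (rank == 1) {
        /* Rank 1 receives it; printing the processor name on each side
         * shows whether the exchange was intra-node or inter-node. */
        char msg[64];
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 on %s received: %s\n", name, msg);
    }

    MPI_Finalize();
    return 0;
}

If the two ranks land on the same node, Open MPI will use its shared-memory path for the exchange regardless of which network BTLs are enabled, which is the intra-node/inter-node distinction described above.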