<quote>
I don't think the Open MPI TCP BTL will pass the SDP socket type when
creating sockets -- SDP is much lower performance than native verbs/RDMA.
You should use a "native" interface to your RDMA network instead (which one
you use depends on which kind of network you have).
</quote>

I have a rather naive follow-up question along these lines: why is there no
native mode for (garden-variety) Ethernet? Is it because Ethernet lacks the
end-to-end guarantees of TCP, InfiniBand, and the like? These days, switched
Ethernet is very reliable, isn't it? (I mean in terms of the rate of packets
dropped due to congestion.) So if the application only needs data chunks of
around 8 KB at most, which would not need to be fragmented when using jumbo
frames, wouldn't a native Ethernet mode be much more efficient?

Or perhaps these constraints are too limiting in practice?
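
To make the question concrete: by "native Ethernet" I mean sending raw
layer-2 frames directly, bypassing the kernel TCP/IP stack. A purely
hypothetical sketch of what that entails (Linux AF_PACKET; the interface
name, EtherType, and MAC address below are placeholders):

/* Hypothetical sketch only (Linux AF_PACKET, needs CAP_NET_RAW): send one
 * application chunk as a single raw Ethernet frame.  The interface name,
 * EtherType, and destination MAC are made-up placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>

#define APP_ETHERTYPE 0x88B5   /* IEEE "local experimental" EtherType */

int main(void)
{
    /* SOCK_DGRAM: the kernel builds the Ethernet header for us. */
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(APP_ETHERTYPE));
    if (fd < 0) { perror("socket"); return 1; }

    /* Resolve the interface index of (hypothetical) eth0. */
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFINDEX, &ifr) < 0) { perror("SIOCGIFINDEX"); return 1; }

    /* The peer's MAC address -- placeholder value. */
    const unsigned char dst_mac[ETH_ALEN] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02};

    struct sockaddr_ll dst;
    memset(&dst, 0, sizeof(dst));
    dst.sll_family   = AF_PACKET;
    dst.sll_ifindex  = ifr.ifr_ifindex;
    dst.sll_protocol = htons(APP_ETHERTYPE);
    dst.sll_halen    = ETH_ALEN;
    memcpy(dst.sll_addr, dst_mac, ETH_ALEN);

    /* An ~8 KB chunk fits in one jumbo frame (MTU 9000), so no fragmentation;
     * but if a congested switch drops it, nobody retransmits it for us. */
    char payload[8192] = "application data chunk";
    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}

Everything TCP normally provides -- acknowledgement, retransmission,
ordering, flow control -- would have to be rebuilt on top of this, which I
assume is what the vendor-specific Ethernet transports (usNIC, iWARP, RoCE)
do in hardware, firmware, or their supporting libraries.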

Thanks
Durga

Life is complex. It has real and imaginary parts.

On Tue, Mar 1, 2016 at 9:54 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:

> On Mar 1, 2016, at 6:55 PM, Matthew Larkin <lar...@yahoo.com> wrote:
> >
> > As far as PCIe goes, I am looking into:
> >
> > 1. Dolphin's implementation of IPoPCIe
>
> If it provides a TCP stack and an IP interface, you should be able to use
> Open MPI's TCP BTL interface over it.
>
> > 2. SDP protocol and how it can be utilized, mapping TCP to RDMA
>
> I don't think the Open MPI TCP BTL will pass the SDP socket type when
> creating sockets -- SDP is much lower performance than native verbs/RDMA.
> You should use a "native" interface to your RDMA network instead (which one
> you use depends on which kind of network you have).
>
> > Not sure if the only answer for this is a custom stack, API/kernel
> > module.
> >
> > Do you have any input on the above mentioned things?
> >
> > On Tuesday, March 1, 2016 6:42 AM, Jeff Squyres (jsquyres) <
> > jsquy...@cisco.com> wrote:
> >
> >
> > On Feb 29, 2016, at 6:48 PM, Matthew Larkin <lar...@yahoo.com> wrote:
> > >
> > > 1. I know Open MPI supports Ethernet, but where does it clearly state
> > > that?
> > > - I see on the FAQ on the web page there is a whole list of network
> > > interconnects, but how do I relate that to an Ethernet network, etc.?
> >
> > Open MPI actually supports multiple Ethernet-based interconnects: Cisco
> > usNIC, iWARP, Mellanox RoCE, and TCP sockets.
> >
> > I suspect the one you are asking about is TCP sockets (which technically
> > doesn't need to run over Ethernet, but TCP-over-Ethernet is probably its
> > most common use case).
> >
> >
> > > 2. Does Open MPI work with PCIe and a PCIe switch?
> > > - Is there any specific configuration to get it to work?
> >
> >
> > Do you have a specific vendor device / networking stack in mind?  In
> > general, Open MPI will use:
> >
> > - some kind of local IPC mechanism (e.g., some flavor of shared memory)
> >   for intra-node communication
> > - some kind of networking API for inter-node communication
> >
> > Extending PCIe between servers blurs this line a bit -- peer MPI
> > processes on a remote server can make it look like they are actually
> > local.  So it depends on your network stack: do you have some kind of API
> > that effects messaging transport over PCIe?
> >
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
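
P.S. To make the TCP BTL suggestion above concrete, here is a minimal sketch
(illustrative only -- the hostfile, the interface name "eth2", and the exact
MCA settings shown are placeholders/assumptions, not anything confirmed in
this thread):

/* ping_pong.c -- illustrative only: a trivial MPI ping-pong that would run
 * over Open MPI's TCP BTL.  A typical run line (names are placeholders):
 *
 *   mpirun -np 2 --hostfile hosts \
 *          --mca btl tcp,sm,self \
 *          --mca btl_tcp_if_include eth2 \
 *          ./ping_pong
 *
 * "tcp,sm,self" selects TCP for inter-node traffic, shared memory for
 * intra-node traffic, and the loopback component; btl_tcp_if_include pins
 * the TCP BTL to one IP interface (e.g. the one an IPoPCIe driver exposes). */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char buf[8192];                  /* one ~8 KB chunk, as discussed above */
    if (rank == 0 && size > 1) {
        memset(buf, 'x', sizeof(buf));
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0: ping-pong completed\n");
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Running "ompi_info" lists the BTL components that were compiled in, which is
a quick way to check whether the tcp BTL (or one of the vendor-specific
Ethernet ones Jeff mentions) is available.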
