Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Christian Balzer
Hello, What I took from the longish thread on the OFED ML was that certain things (and more than you'd think) with IPoIB happen over multicast, but not ALL of them. For the record, my bog-standard QDR IPoIB clusters can do anywhere from 14 to 21 Gb/s with iperf3 and about 20-30% less with NPtcp (NetPIPE).
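For anyone wanting to reproduce numbers in that range, a plain iperf3 run between two nodes over their IPoIB interfaces is enough (the address below is just a placeholder for the server node's IPoIB IP):

    # on the first node
    iperf3 -s
    # on the second node: 30-second run with 4 parallel streams
    iperf3 -c 192.168.100.1 -t 30 -P 4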

Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Corey Kovacs
Infiniband uses multicast internally. It's not something you have a choice about. You won't see it on the local interface any more than you'd see the individual drives of a RAID 5. I believe it's one of the reasons connection setup times are kept under the requisite 1.2 usec limits, etc.

Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Daniel Swarbrick
On 10/06/16 02:33, Christian Balzer wrote: > This thread brings back memories of this one: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-April/008792.html > According to Robert, IPoIB still uses IB multicast under the hood even when, from an IP perspective, the traffic would be unicast.

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Christian Balzer
Hello, On Thu, 9 Jun 2016 20:28:41 +0200 Daniel Swarbrick wrote: > On 09/06/16 17:01, Gandalf Corvotempesta wrote: > > On 09 Jun 2016 15:41, "Adam Tygart" wrote: > >> If you're using pure DDR, you may need to tune the broadcast group in your subnet manager to set the speed to DDR.

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Daniel Swarbrick
On 09/06/16 17:01, Gandalf Corvotempesta wrote: > On 09 Jun 2016 15:41, "Adam Tygart" wrote: >> If you're using pure DDR, you may need to tune the broadcast group in your subnet manager to set the speed to DDR. > Do you know how to set this with opensm?

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Adam Tygart
I believe this is what you want: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Configuring_the_Subnet_Manager.html -- Adam On Thu, Jun 9, 2016 at 10:01 AM, Gandalf Corvotempesta wrote: > On 09 Jun 2016 15:41, "Adam Tygart" wrote:
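The setting that document describes boils down to the rate of the default partition (and hence the IPoIB broadcast group) in opensm's partitions.conf; roughly something like the line below, where rate=6 means 20 Gb/s (4x DDR) and mtu=4 means 2048 bytes per the opensm man page (the file path varies by distro, so treat this as a sketch and check your own opensm docs):

    # /etc/rdma/partitions.conf (path may differ on your distro)
    # rate=6 -> 20 Gb/s (4x DDR), mtu=4 -> 2048 byte MTU
    Default=0x7fff, ipoib, rate=6, mtu=4 : ALL=full;

opensm needs a restart afterwards for the new group rate to take effect.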

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Gandalf Corvotempesta
On 09 Jun 2016 15:41, "Adam Tygart" wrote: > If you're using pure DDR, you may need to tune the broadcast group in your subnet manager to set the speed to DDR. Do you know how to set this with opensm? I would like to bring my test cluster back up in the next few days.

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Adam Tygart
IPoIB is done with broadcast packets on the Infiniband fabric. Most switches and opensm (by default) set up a broadcast group at the lowest IB speed (SDR), to support all possible IB connections. If you're using pure DDR, you may need to tune the broadcast group in your subnet manager to set the speed to DDR.
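An easy way to sanity-check this (exact sysfs paths, device names and output fields depend on your HCA and OFED version, so adjust as needed) is to compare the port's physical link rate with the multicast group records the SM has created:

    # physical link rate of the port, e.g. "20 Gb/sec (4X DDR)"
    cat /sys/class/infiniband/mlx4_0/ports/1/rate
    # multicast group info held by the SM, including Mtu/Rate
    saquery -g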

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Gandalf Corvotempesta
2016-06-09 10:18 GMT+02:00 Christian Balzer : > IPoIB is about half the speed of your IB layer, yes. Ok, so it's normal. I've seen benchmarks on the net stating that IPoIB on DDR should reach about 16-17 Gb/s. I'll plan to move to QDR. > And bandwidth is (usually) not the biggest issue, latency is.
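On the latency point, a quick way to see the gap (the hostname below is a placeholder; run a bare qperf with no arguments on the other node as the server) is to compare TCP-over-IPoIB latency with native RC verbs latency:

    # TCP round-trip latency over IPoIB vs. native RC verbs latency
    qperf ib-node1 tcp_lat rc_lat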

Re: [ceph-users] RDMA/Infiniband status

2016-06-09 Thread Christian Balzer
On Thu, 9 Jun 2016 10:00:33 +0200 Gandalf Corvotempesta wrote: > Last time I used Ceph (around 2014), RDMA/Infiniband support was just > a proof of concept, and I was using IPoIB with low performance (about 8-10 Gb/s on an > Infiniband DDR 20 Gb/s link). IPoIB is about half the speed of your IB layer, yes.

[ceph-users] RDMA/Infiniband status

2016-06-09 Thread Gandalf Corvotempesta
Last time I used Ceph (around 2014), RDMA/Infiniband support was just a proof of concept, and I was using IPoIB with low performance (about 8-10 Gb/s on an Infiniband DDR 20 Gb/s link). That was two years ago. Any news on this? Is RDMA/Infiniband supported natively, like it is with GlusterFS?