Somnath,

Sounds very promising! I can't wait to try it on my cluster, as I am currently using IPoIB instead of the native RDMA transport.
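For reference, this is roughly how I understand the experimental XIO (Accelio/RDMA) transport is selected in ceph.conf on the XIO branch. I'm going from memory here, so please treat the option name (ms_type = xio) as an assumption and correct me if it has changed; depending on the build, the messenger may also need to be enabled as an experimental feature:

    [global]
    # Assumed option name from the experimental XIO branch: switch the messenger
    # from the default TCP implementation to the Accelio/RDMA-based XIO messenger.
    # Requires Accelio and RDMA-capable NICs on every node.
    ms_type = xio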
Cheers,
Andrei

----- Original Message -----
> From: "Somnath Roy" <somnath....@sandisk.com>
> To: "Andrei Mikhailovsky" <and...@arhont.com>, "Andrey Korolyov" <and...@xdel.ru>
> Cc: ceph-users@lists.ceph.com, "ceph-devel" <ceph-de...@vger.kernel.org>
> Sent: Wednesday, 8 April, 2015 5:23:23 PM
> Subject: RE: [ceph-users] Preliminary RDMA vs TCP numbers
>
> Andrei,
> Yes, I see it has a lot of potential, and I believe that once the performance
> bottlenecks inside the XIO messenger are fixed it should go further. We are
> working on it and will keep the community posted.
>
> Thanks & Regards
> Somnath
>
> From: Andrei Mikhailovsky [mailto:and...@arhont.com]
> Sent: Wednesday, April 08, 2015 2:22 AM
> To: Andrey Korolyov
> Cc: ceph-users@lists.ceph.com; ceph-devel; Somnath Roy
> Subject: Re: [ceph-users] Preliminary RDMA vs TCP numbers
>
> Hi,
>
> Am I the only person noticing disappointing results from the preliminary RDMA
> testing, or am I reading the numbers wrong?
>
> Yes, it's true that on a very small cluster you do see a great improvement
> with RDMA, but in real life RDMA is used in large infrastructure projects,
> not on a few servers with a handful of OSDs. In fact, from what I've seen in
> the slides, the RDMA implementation scales poorly, to the point that it
> becomes slower the more OSDs you throw at it.
>
> From my limited knowledge, I had expected much higher performance gains with
> RDMA, given that this transport should have much lower latency, lower
> overhead and lower CPU utilisation than TCP.
>
> Are we likely to see a great deal of improvement with Ceph and RDMA in the
> near future? Is there a roadmap for stable and reliable RDMA protocol
> support?
>
> Thanks
> Andrei
>
> ----- Original Message -----
> > From: "Andrey Korolyov" <and...@xdel.ru>
> > To: "Somnath Roy" <somnath....@sandisk.com>
> > Cc: ceph-users@lists.ceph.com, "ceph-devel" <ceph-de...@vger.kernel.org>
> > Sent: Wednesday, 8 April, 2015 9:28:12 AM
> > Subject: Re: [ceph-users] Preliminary RDMA vs TCP numbers
> >
> > On Wed, Apr 8, 2015 at 11:17 AM, Somnath Roy <somnath....@sandisk.com> wrote:
> > >
> > > Hi,
> > > Please find the preliminary performance numbers of the TCP vs RDMA (XIO)
> > > implementation (on top of SSDs) at the following link.
> > >
> > > http://www.slideshare.net/somnathroy7568/ceph-on-rdma
> > >
> > > The attachment didn't go through, it seems, so I had to use SlideShare.
> > >
> > > Mark,
> > > If we have time, I can present it in tomorrow's performance meeting.
> > >
> > > Thanks & Regards
> > > Somnath
> >
> > Those numbers are really impressive (at small scale, at least)! What TCP
> > settings are you using? For example, the difference could be lowered at
> > scale due to less intensive per-connection acceleration with CUBIC on a
> > larger number of nodes, though I do not believe that was the main reason
> > for the observed TCP catch-up on a relatively flat workload such as fio
> > generates.
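Re Andrey's question above about TCP settings: I obviously don't know how the benchmark nodes in the slides were tuned, but for anyone comparing, these are the usual sysctl knobs people mean. The values below are purely illustrative placeholders, not the settings behind the slides:

    # /etc/sysctl.conf (illustrative values only, not the benchmark's actual tuning)
    # Congestion control algorithm; CUBIC is the Linux default Andrey mentions.
    net.ipv4.tcp_congestion_control = cubic
    # Larger socket buffer limits, a common tweak on 10GbE and IPoIB links.
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min / default / max TCP receive and send buffers.
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216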
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com