I don't have super recent results, but we do have some test data from last year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:

https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing


Generally speaking, going through the tcmu layer was slower than kernel rbd or librbd directly (sometimes by quite a bit!).  There was also more client-side CPU usage per unit of performance, which makes sense since additional work is being done.  You may be able to get some of that performance back with more clients, as I do remember there being some issues with iodepth and tcmu.  The only setup I remember being slower at the time, though, was rbd-fuse, which I don't think is even really maintained anymore.
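
If you want to run a similar comparison yourself, something like the fio sweep below is a rough sketch of the idea (the pool name, image name, and /dev/sdX device are placeholders, not the exact jobs behind that spreadsheet): run the same 4k workload against librbd directly and against the mapped or iSCSI block device, then vary --iodepth to see where the tcmu path falls behind.

  # librbd directly, via fio's rbd engine (placeholder pool/image names)
  fio --name=librbd-4k --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
      --direct=1 --runtime=60 --time_based

  # same workload against a mapped block device: kernel rbd, rbd-nbd,
  # or the iSCSI LUN as seen by the initiator (e.g. /dev/sdX)
  fio --name=blockdev-4k --ioengine=libaio --filename=/dev/sdX \
      --rw=randwrite --bs=4k --iodepth=32 \
      --direct=1 --runtime=60 --time_based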


Mark


On 10/5/20 4:43 PM, dhils...@performair.com wrote:
All;

I've finally gotten around to setting up iSCSI gateways on my primary 
production cluster, and performance is terrible.

We're talking 1/4 to 1/3 of the performance of our current solution.

I see no evidence of network congestion on any involved network link.  I see no 
evidence of CPU or memory being a problem on any involved server (MON / OSD / 
gateway / client).

What can I look at to tune this, preferably on the iSCSI gateways?

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
dhils...@performair.com
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io