On 08/04/2014 10:52 AM, Tregaron Bayly wrote:
Does anyone have any insight into how we can tune librbd to perform
closer to the level of the rbd kernel module?
In our lab we have a four-node cluster with a 1GbE public network and a
10GbE cluster network. A client node connects to the public network over
10GbE.
When running benchmarks on the client through the kernel module we get
decent performance and can drive the OSD nodes to max out their 1GbE
links at peak while servicing the requests:
      tx                rx
max   833.66 Mbit/s | 639.44 Mbit/s
max   938.06 Mbit/s | 707.35 Mbit/s
max   846.78 Mbit/s | 702.04 Mbit/s
max   790.66 Mbit/s | 621.92 Mbit/s
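(For context, a typical way to drive the kernel path for a test like
this; the image name and device path below are placeholders, not our
exact invocation:)

    # map the image through the kernel rbd driver, then benchmark the block device
    rbd map test-image --pool rbd
    fio --name=krbd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=write --bs=4M --iodepth=16 --size=10G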
However, using librbd we get only about 30% of that performance, and I
can see that it doesn't generate requests fast enough to max out the
links on the OSD nodes:
      tx                rx
max   309.74 Mbit/s | 196.77 Mbit/s
max   300.15 Mbit/s | 154.38 Mbit/s
max   263.06 Mbit/s | 154.38 Mbit/s
max   368.91 Mbit/s | 234.38 Mbit/s
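(Recent fio builds also include an rbd ioengine that drives librbd
directly, with no kernel device or VM in the path; the pool and image
names here are placeholders:)

    ; fio job exercising librbd through the rbd ioengine
    [librbd-test]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=test-image
    rw=write
    bs=4M
    iodepth=16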
I know that I can play with cache settings to give the client better
service on cache hits, but I'm wondering how I can soup up librbd so
that it can take advantage of more of the speed available in the
cluster. As it stands, librbd seems to leave a lot of the cluster's
resources idle.
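(For reference, the librbd cache knobs live in the [client] section of
ceph.conf; the sizes below are illustrative examples, not tuned
recommendations:)

    [client]
    rbd cache = true                           # enable client-side caching in librbd
    rbd cache size = 67108864                  # 64 MB total cache (example value)
    rbd cache max dirty = 50331648             # 48 MB dirty limit; 0 forces writethrough
    rbd cache target dirty = 33554432          # begin flushing at 32 MB dirty
    rbd cache writethrough until flush = true  # stay in writethrough until the guest flushes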
Thanks in advance for any help,

Tregaron Bayly
Hi Tregaron,

In the lab we've been able to do way better than ~300 Mbit/s with librbd
(with or without the RBD cache). I suspect something unusual is going on
here. How are you doing the benchmarking?
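(As a sanity check, the built-in benchmarks can help isolate whether the
gap is in librbd or in the test harness; pool and image names are
placeholders:)

    # raw RADOS throughput, bypassing rbd entirely
    rados bench -p rbd 60 write -t 16
    # librbd write path via rbd's built-in benchmark
    rbd bench-write test-image --io-size 4194304 --io-threads 16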
Mark
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com