Hi Mark,

Given the same hardware and an optimal configuration (I'm not sure exactly what
that would mean, but feel free to specify), which is expected to perform
better, kernel rbd or qemu/kvm? Thanks,

Yun


On Fri, May 10, 2013 at 6:56 PM, Mark Nelson <mark.nel...@inktank.com> wrote:

> On 05/10/2013 12:16 PM, Greg wrote:
>
>> Hello folks,
>>
>> I'm in the process of testing Ceph and RBD. I have set up a small
>> cluster of hosts, each running a MON and an OSD with both the journal and
>> the data on the same SSD (OK, this is stupid, but it is a simple way to verify
>> that the disks are not the bottleneck for a single client). All nodes are
>> connected to a 1 Gb network (no dedicated network for the OSDs, shame on me :).
>>
>> Summary: RBD performance is poor compared to the rados benchmark.
>>
>> A 5-second seq read benchmark shows something like this:
>>
>>>    sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>>>      0       0         0         0         0         0         -         0
>>>      1      16        39        23   91.9586        92  0.966117  0.431249
>>>      2      16        64        48   95.9602       100  0.513435   0.53849
>>>      3      16        90        74   98.6317       104   0.25631   0.55494
>>>      4      11        95        84   83.9735        40   1.80038   0.58712
>>>  Total time run:        4.165747
>>> Total reads made:     95
>>> Read size:            4194304
>>> Bandwidth (MB/sec):    91.220
>>>
>>> Average Latency:       0.678901
>>> Max latency:           1.80038
>>> Min latency:           0.104719
>>>
>>
>> 91 MB/s read performance, quite good!
>>
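For reference, a rados bench "seq" pass reads back objects left behind by an earlier
"write" pass, so output like the above would typically come from something along
these lines (the pool name "rbd" and the 4 MB object size are assumptions based on
the quoted output; --no-cleanup is only needed on rados versions whose write pass
deletes its objects afterwards):

    rados bench -p rbd 5 write -b 4194304 --no-cleanup
    rados bench -p rbd 5 seq
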
>> Now the RBD performance:
>>
>>> root@client:~# dd if=/dev/rbd1 of=/dev/null bs=4M count=100
>>> 100+0 records in
>>> 100+0 records out
>>> 419430400 bytes (419 MB) copied, 13.0568 s, 32.1 MB/s
>>>
>>
>> There is a 3x performance gap (the same holds for writes: ~60 MB/s in the
>> benchmark, ~20 MB/s with dd on the block device).
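
When dd on the kernel RBD device lags this far behind rados bench, the block device
readahead and the page cache are worth ruling out; the device name and readahead
value below are only examples, not settings taken from this thread:

    blockdev --getra /dev/rbd1        # current readahead, in 512-byte sectors
    blockdev --setra 4096 /dev/rbd1   # try 2 MB of readahead for large sequential reads
    dd if=/dev/rbd1 of=/dev/null bs=4M count=100 iflag=direct   # O_DIRECT read, bypassing the page cache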
>>
>> The network is OK, and the CPU is also OK on all OSDs.
>> Ceph is Bobtail 0.56.4; Linux is 3.8.1 on ARM (a vanilla release plus some
>> patches for the SoC being used).
>>
>> Can you show me where to start digging into this?
>>
>
> Hi Greg. First things first, are you doing kernel rbd or qemu/kvm?  If you
> are doing qemu/kvm, make sure you are using virtio disks.  This can have a
> pretty big performance impact.  Next, are you using RBD cache? With 0.56.4
> there are some performance issues with large sequential writes if cache is
> on, but it does provide benefit for small sequential writes.  In general
> RBD cache behaviour has improved with Cuttlefish.
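
As an illustration of the virtio and cache settings Mark mentions (the qemu
invocation and the image name rbd/myimage are placeholders, not taken from this
thread), a qemu/kvm guest using a virtio RBD disk with caching enabled can be
started with a drive specification along these lines:

    qemu-system-x86_64 ... \
        -drive format=raw,file=rbd:rbd/myimage:rbd_cache=true,if=virtio,cache=writeback

With libvirt, the equivalent check is that the disk's <target> uses bus='virtio'
and the <driver> element sets cache='writeback'.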
>
> Beyond that, are the pools targeted by RBD and rados bench set up the
> same way?  Same number of PGs?  Same replication?
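
For comparing the two pools, something along these lines shows pg_num and the
replication size per pool (the pool name "rbd" is an assumption):

    ceph osd dump | grep '^pool'
    ceph osd pool get rbd pg_num
    ceph osd pool get rbd size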
>
>
>
>> Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
