>> both of our VMs use
>> <driver name='qemu' type='raw' cache='directsync' io='native'/>

Note that with librbd: directsync|none = rbd_cache=false, and
writeback|writethrough = rbd_cache=true.
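
For example, switching the driver line to something like the following would turn
rbd_cache on (an illustrative sketch, not taken from this thread; libvirt generally
only allows io='native' together with cache=none or cache=directsync, so io='threads'
is used here):

<!-- illustrative only: cache='writeback' makes librbd enable rbd_cache -->
<driver name='qemu' type='raw' cache='writeback' io='threads'/>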



>> and the VM is unable to mount /dev/rbd0 directly to test the speed..
That's really strange...


>> and I think technically librbd should give much better performance than mounting
>> /dev/rbd0, but the actual test doesn't look like that is the case. Did I do anything
>> wrong, or is any performance tuning required?

Mmm, not sure. All past tests have shown slightly better performance with krbd.
The main bottleneck is that with librbd, qemu can be cpu limited (currently
qemu uses one thread per disk, so one core, and with a lot of iops you could get
better performance with krbd).

But for your benchmark I don't think that's the case.
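
One quick way to check would be to watch the per-thread cpu usage of the qemu process
while the benchmark runs, for example (a rough sketch; the qemu binary name and the
way you find the pid will vary by distribution):

# hypothetical check: show qemu threads and their cpu usage during the benchmark
# assumes a single qemu process; the binary may be named qemu-kvm on some distros
top -H -p "$(pidof qemu-system-x86_64)"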

Have you tried benchmarking with "fio", using more parallel threads and a bigger queue
depth?
Maybe krbd has better latency here, and dd is a single stream, so that could impact the
results.
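
For example, something along these lines inside the guest (an illustrative invocation;
the file name, size and job counts are arbitrary):

# hypothetical fio run: 4 parallel jobs, queue depth 32, direct I/O
fio --name=rbdtest --filename=/root/fio.test --size=4G \
    --rw=write --bs=4M --direct=1 --ioengine=libaio \
    --numjobs=4 --iodepth=32 --group_reporting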



thank you!


----- Original Message -----
From: "Bill WONG" <wongahsh...@gmail.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, 28 October 2016 17:58:42
Subject: Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

hi,
both of our VMs use
<driver name='qemu' type='raw' cache='directsync' io='native'/>
and the VM is unable to mount /dev/rbd0 directly to test the speed..
and I think technically librbd should give much better performance than mounting
/dev/rbd0, but the actual test doesn't look like that is the case. Did I do anything
wrong, or is any performance tuning required?
thank you!

Bill 

On Fri, Oct 28, 2016 at 5:47 PM, Alexandre DERUMIER <aderum...@odiso.com> wrote:


Hi, 
Have you tried enabling cache=writeback when you use librbd?

It could be interesting to see the performance when using /dev/rbd0 directly in your
VM, instead of mounting a qcow2 inside.
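
A minimal sketch of such a setup, assuming the image is already mapped as /dev/rbd0 on
the hypervisor (the device path and target name are illustrative):

<disk type='block' device='disk'>
  <!-- illustrative: pass the kernel-mapped rbd device straight through to the guest -->
  <driver name='qemu' type='raw' cache='directsync' io='native'/>
  <source dev='/dev/rbd0'/>
  <target dev='vdb' bus='virtio'/>
</disk>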

----- Original Message -----
From: "Bill WONG" <wongahsh...@gmail.com>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, 28 October 2016 10:24:50
Subject: [ceph-users] RBD Block performance vs rbd mount as filesystem

Hi All, 
we have built a Ceph cluster with 72 OSDs, replica 3, all working fine. We have done
some performance testing and found a very interesting issue.
We have a KVM + Libvirt + Ceph setup:

Case 1. KVM + Libvirt + Ceph with rbd (librbd) backend
The KVM hypervisor node creates a VM that uses an rbd block device as its storage
backend. We run dd with fdatasync and get ~500 MB/s.
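
The exact dd invocation is not given in the thread; a typical fdatasync test would look
something like this (the file name and size are illustrative):

# hypothetical dd test inside the VM: write 4 GB and flush data before dd exits
dd if=/dev/zero of=/root/dd.test bs=1M count=4096 conv=fdatasync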

Case 2. KVM + Libvirt + Ceph with krbd
The KVM hypervisor node maps the rbd image with the kernel client and mounts it as a
local partition, e.g. mount /dev/rbd0 /mnt/VM_pool. We create a VM with a qcow2 disk
placed under that partition, run the same dd with fdatasync, and get ~850 MB/s.
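
As a sketch, that krbd setup on the hypervisor would typically be done like this (the
pool name and mount point come from the thread; the image name and filesystem are
assumptions):

# hypothetical krbd setup on the hypervisor: map, format and mount an image
rbd map VM_pool/vm-store      # image name is assumed; mapping creates /dev/rbd0
mkfs.xfs /dev/rbd0            # filesystem choice is an assumption
mount /dev/rbd0 /mnt/VM_pool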

It is tested on the same hypervisor node with the same VM configuration. Why does
mounting rbd0 on the hypervisor filesystem as a partition give so much better
performance? Any idea on this?

thank you! 

---- KVM Ceph VM disk setting -- 
<disk type='network' device='disk'>
  <source protocol='rbd' name='VM_pool/VM1.img'>
    <host name='mon1' port='6789'/>
    <host name='mon2' port='6789'/>
    <host name='mon3' port='6789'/>
  </source>
  <auth username='libvirt' type='ceph'>
    <secret type='ceph' uuid='856b660e-ce4e-4a91-a7be-f17e469024c5'/>
  </auth>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

-- KVM VM disk created on the Ceph rbd0 partition --
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/VM_Pool/CentOS1.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

================================= 

Bill 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
