How does RBD cache work? I wasn't able to find an adequate explanation in
the docs.

On Sunday, June 22, 2014, Mark Kirkwood <mark.kirkw...@catalyst.net.nz>
wrote:

> Good point; I had neglected to do that.
>
> So, amending my ceph.conf [1]:
>
> [client]
> rbd cache = true
> rbd cache size = 2147483648
> rbd cache max dirty = 1073741824
> rbd cache max dirty age = 100
>
> and also amending the VM's XML definition to set cache='writeback':
>
>     <disk type='network' device='disk'>
>       <driver name='qemu' type='raw' cache='writeback' io='native'/>
>       <auth username='admin'>
>         <secret type='ceph' uuid='cd2d3ab1-2d31-41e0-ab08-3d0c6e2fafa0'/>
>       </auth>
>       <source protocol='rbd' name='rbd/vol1'>
>         <host name='192.168.1.64' port='6789'/>
>       </source>
>       <target dev='vdb' bus='virtio'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
>     </disk>
>
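> As a side note on those [client] values: they work out to a 2 GiB cache
> with up to 1 GiB of dirty data, flushed after at most 100 seconds. If an
> admin socket is configured for the client (the socket path below is just
> an illustration), the settings librbd actually picked up can be checked
> with something like:
>
>     ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache
>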
> Retesting from inside the VM:
>
> $ dd if=/dev/zero of=/mnt/vol1/scratch/file bs=16k count=65535 oflag=direct
> 65535+0 records in
> 65535+0 records out
> 1073725440 bytes (1.1 GB) copied, 8.1686 s, 131 MB/s
>
> Which is much better, so certainly for the librbd case, enabling the
> rbd cache seems to nail this particular issue.
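>
> As a rough cross-check with a different tool, an fio run along these
> lines should show the same effect (the job name, file path, and size
> are arbitrary placeholders):
>
>     fio --name=seqwrite --filename=/mnt/vol1/scratch/fiofile --rw=write --bs=16k --size=1g --direct=1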
>
> Regards
>
> Mark
>
> [1] possibly somewhat aggressively set, but at least a noticeable
> difference :-)
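>
> For a sense of scale, a more conservative variant closer to the library
> defaults of that era would look something like this (values approximate,
> quoted from memory):
>
>     [client]
>     rbd cache = true
>     rbd cache size = 33554432        # 32 MiB
>     rbd cache max dirty = 25165824   # 24 MiB
>     rbd cache max dirty age = 1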
>
> On 22/06/14 19:02, Haomai Wang wrote:
>
>> Hi Mark,
>>
>> Do you have rbd cache enabled? I tested on my SSD cluster (only one
>> SSD); it seemed ok.
>>
>>> dd if=/dev/zero of=test bs=16k count=65536 oflag=direct
>>
>> 82.3 MB/s
>