>> If you need caching on the hypervisor side, it would probably be better to 
>> use something like bcache/dm-cache, etc.
That's not possible in my case, as I use the qemu rbd block driver, not the rbd 
kernel module.


My concern was mainly about the librbd cache:
http://ceph.com/docs/master/rbd/rbd-config-ref/#rbd-cache-config-settings

which is enabled by setting cache=writeback on the qemu drive.
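For context, here is roughly what that looks like on the qemu command line (the 
pool/image name and memory size are just placeholders):

    qemu-system-x86_64 -m 1024 \
        -drive format=raw,file=rbd:rbd/vm-disk-1,cache=writeback

With cache=writeback, qemu tells librbd to turn its cache on (rbd_cache=true) 
for that drive.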


According to the doc:
" it can coalesce contiguous requests for better throughput."

So there are rbd-specific optimisations when the cache is enabled.
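For reference, the cache-related options from that page can also be set 
explicitly on the client side in ceph.conf; a minimal sketch (the values are 
illustrative, check the linked page for the defaults of your release):

    [client]
        rbd cache = true
        rbd cache size = 33554432              # total cache size in bytes (32 MB)
        rbd cache max dirty = 25165824         # dirty bytes before writes block (24 MB)
        rbd cache target dirty = 16777216      # dirty bytes at which flushing starts (16 MB)
        rbd cache max dirty age = 1.0          # seconds dirty data may age before writeback
        rbd cache writethrough until flush = true  # stay in writethrough until the guest flushes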


(If someone has documentation about how exactly the librbd cache works, I'm 
interested.)
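One way to see what a running client actually uses is to query its admin 
socket, assuming one is configured in the [client] section (the socket path 
below is an example):

    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache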


----- Original Message ----- 

From: "Alex Crow" <ac...@integrafin.co.uk> 
To: ceph-users@lists.ceph.com 
Sent: Saturday, 12 April 2014 17:26:40 
Subject: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live 
migration safe? 

Hi. 

I've read in many places that you should never use writeback on any kind 
of shared storage. Caching is better dealt with on the storage side 
anyway, as you have hopefully provided resilience there. In fact, if your 
SAN/NAS is good enough, it's supposed to be best to use "none" as the 
caching algorithm. 

If you need caching on the hypervisor side, it would probably be better to 
use something like bcache/dm-cache, etc. 

Cheers 

Alex 


On 12/04/14 16:01, Alexandre DERUMIER wrote: 
> Hello, 
> 
> I know that qemu live migration of a disk with cache=writeback is not safe 
> with storage like NFS, iSCSI... 
> 
> Is this also true with rbd? 
> 
> 
> If so, is it possible to manually disable writeback online with qmp? 
> 
> Best Regards, 
> 
> Alexandre 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
