This is where it gets confusing for me. According to the ceph document
(http://ceph.com/docs/master/rbd/qemu-rbd/),
"QEMU’s cache settings override Ceph’s default settings (i.e., settings that 
are not explicitly set in the Ceph configuration file). If you explicitly set 
RBD Cache settings in your Ceph configuration file, your Ceph settings override 
the QEMU cache settings. If you set cache settings on the QEMU command line, 
the QEMU command line settings override the Ceph configuration file settings."
If I set the qemu caching parameter in nova.conf with

      disk_cachemodes="network=writeback"

this gives me "cache=writeback" in the qemu command-line arguments for the rbd 
device when a VM is created. According to the ceph doc above, that would be 
equivalent to setting "rbd_cache = true". Since I am not specifying any other 
rbd parameters (e.g., rbd_cache_size) on the qemu command line (and it can't be 
done anyway, according to the blueprint), those should default to what I have 
set in ceph.conf?
Or is my understanding of the ceph document completely off-base?
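For concreteness, here is a minimal sketch of the kind of [client] section I 
mean in ceph.conf (the values are illustrative, not our actual settings):

      [client]
          rbd cache = true
          rbd cache size = 67108864          # 64 MB per-image cache (illustrative)
          rbd cache max dirty = 50331648     # illustrative
          rbd cache target dirty = 33554432  # illustrative

My reading of the doc is that qemu's cache=writeback merely toggles rbd_cache 
on, and the size/dirty limits would still come from this section.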
--weiguo
P.S. My original question was actually about how the "if=" parameter impacts 
rbd performance, which is not directly related to the rbd caching configuration.



From: sebastien....@enovance.com
Date: Thu, 13 Jun 2013 15:59:06 +0200
To: oliver.fran...@filoo.de
CC: ceph-users@lists.ceph.com; ws...@hotmail.com
Subject: Re: [ceph-users] QEMU -drive setting (if=none) for rbd

OpenStack doesn't know how to set different caching options per attached block 
device. See the following blueprint:
https://blueprints.launchpad.net/nova/+spec/enable-rbd-tuning-options
This might be implemented for Havana.
Cheers.
––––
Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."




Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien....@enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance

On Jun 11, 2013, at 7:43 PM, Oliver Francke <oliver.fran...@filoo.de> wrote:

Hi,

On Jun 11, 2013, at 7:14 PM, w sun <ws...@hotmail.com> wrote:

Hi,

We are currently testing performance with rbd caching enabled in write-back 
mode on our openstack (grizzly) nova nodes. By default, nova fires up the rbd 
volumes with "if=none", as evidenced by the following command line from 
"ps | grep":

-drive 
file=rbd:ceph-openstack-volumes/volume-949e2e32-20c7-45cf-b41b-46951c78708b:id=ceph-openstack-volumes:key=12347I9RsEoIDBAAi2t+M6+7zMMZoMM+aasiog==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=949e2e32-20c7-45cf-b41b-46951c78708b,cache=writeback
 

Does anyone know if this should be set to anything else (e.g., "if=virtio", as 
suggested by some general qemu posts)? Given that the underlying network stack 
for RBD IO is provided by the linux kernel instead, does this option bear any 
relevance for rbd volume performance inside the guest VM?

There should be something like "-device 
virtio-blk-pci,drive=drive-virtio-disk0" referencing the id= of the drive 
specification.
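For illustration, a sketch of how the two halves typically pair up on the qemu 
command line (the pool/volume name, device id, and bus/addr values are 
placeholders, not taken from your setup):

      -drive file=rbd:pool/volume:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,cache=writeback \
      -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,bus=pci.0,addr=0x5

With "if=none" the -drive only defines the backend; the -device line is what 
actually exposes it to the guest as a virtio disk.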

Furthermore, to really exercise rbd_cache, something like

rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608

is missing from the ":"-separated list, perhaps appended right after the 
";none", i.e.:

:none:rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608

cache=writeback is necessary, too.
No idea, though, how to teach openstack to use these parameters, sorry.
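Assembled, the file= spec from your "ps" output would then look something like 
this (just a sketch splicing the cache options into your original line):

-drive 
file=rbd:ceph-openstack-volumes/volume-949e2e32-20c7-45cf-b41b-46951c78708b:id=ceph-openstack-volumes:key=12347I9RsEoIDBAAi2t+M6+7zMMZoMM+aasiog==:auth_supported=cephx\;none:rbd_cache=true:rbd_cache_size=33554432:rbd_cache_max_dirty=16777216:rbd_cache_target_dirty=8388608,if=none,id=drive-virtio-disk0,format=raw,serial=949e2e32-20c7-45cf-b41b-46951c78708b,cache=writeback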


Regards,

Oliver.


Thanks. --weiguo





_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
