On 03/05/2014 07:24 AM, git harry wrote:
Hi,
https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos
I've been looking at this blueprint with a view to contributing to it, assuming
I can take it. I am unclear as to whether or not it is still valid. I can see
that it was registered around a year ago, and it appears the functionality is
essentially already supported by using multiple backends.
Looking at the existing drivers that have QoS support, it appears IOPS etc. are
available for control/customisation. As I understand it, Ceph has no QoS-type
control built in, and creating pools on different hardware is as granular as it
gets. The two don't quite seem comparable to me, so I was hoping to get some
feedback, as to whether or not this is still useful/appropriate, before
attempting to do any work.
Ceph does not currently have any QoS support, but relies on QEMU's I/O
throttling, which Cinder and Nova can configure.
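For context, the usual way to drive that QEMU-side throttling is through Cinder
QoS specs with consumer=front-end, which Nova then applies to the attached disk.
A minimal sketch of that workflow (spec and type names are illustrative, and the
IDs must be taken from the command output):

```shell
# Create a QoS spec consumed on the front end (the hypervisor / QEMU),
# capping total IOPS and bandwidth for volumes it is applied to.
cinder qos-create rbd-limits consumer=front-end \
    total_iops_sec=500 total_bytes_sec=10485760

# Create a volume type and associate the QoS spec with it; new volumes
# of this type are attached with the QEMU throttle limits in place.
cinder type-create rbd-throttled
cinder qos-associate <qos-spec-id> <volume-type-id>
```

Since the limits are enforced by QEMU rather than by Ceph, they apply per
attachment on the compute host, not cluster-wide.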
There is interest in adding better throttling to Ceph itself though,
since writes from QEMU may be combined before writing to Ceph when
caching is used. There was a session on this at the Ceph developer
summit earlier this week:
https://wiki.ceph.com/Planning/CDS/CDS_Giant_%28Mar_2014%29#rbd_qos
Josh
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev