Wei,

My opinion is that no features should be merged until all functional issues
have been resolved and the work is ready to turn over to test. Until the
total IOPS vs. discrete read/write IOPS issue is addressed and re-reviewed
by Wido, I don't think this criterion has been satisfied.

Also, how does this work intersect/complement the SolidFire patch (
https://reviews.apache.org/r/11479/)?  As I understand it, that work also
involves provisioned IOPS.  I would like to ensure we don't have a scenario
where provisioned IOPS in KVM and SolidFire are unnecessarily incompatible.

Thanks,
-John

On Jun 1, 2013, at 6:47 AM, Wei ZHOU <ustcweiz...@gmail.com> wrote:

Wido,


Sure. I will change it next week.


-Wei



2013/6/1 Wido den Hollander <w...@widodh.nl>


Hi Wei,



On 06/01/2013 08:24 AM, Wei ZHOU wrote:


Wido,


Exactly. I have pushed the features into master.


If anyone objects to them for technical reasons before Monday, I will revert them.


For the sake of clarity I just want to mention again that we should change
the total IOps to R/W IOps asap so that we never release a version with
only total IOps.


You laid the groundwork for the I/O throttling and that's great! We should
however avoid creating legacy from day #1.


Wido


-Wei



2013/5/31 Wido den Hollander <w...@widodh.nl>


On 05/31/2013 03:59 PM, John Burwell wrote:


Wido,


+1 -- this enhancement must discretely support read and write IOPS. I
don't see how it could be fixed later, because I don't see how we could
correctly split total IOPS into read and write.  Therefore, we would be
stuck with a total unless/until we decided to break backwards compatibility.



What Wei meant was merging it into master now so that it will go into the
4.2 branch, and adding Read / Write IOps before the 4.2 release so that 4.2
will be released with Read and Write instead of Total IOps.


This is to make the May 31st feature freeze date. But if the window moves
(see other threads) then it won't be necessary to do that.


Wido



I also completely agree that there is no association between network and
disk I/O.


Thanks,

-John


On May 31, 2013, at 9:51 AM, Wido den Hollander <w...@widodh.nl> wrote:


Hi Wei,



On 05/31/2013 03:13 PM, Wei ZHOU wrote:


Hi Wido,


Thanks. Good question.


I thought about it at the beginning. In the end I decided to ignore the
difference between read and write, mainly because the network throttling
does not distinguish between sent and received bytes either.

That reasoning seems odd. Networking and disk I/O are completely different.


Disk I/O is much more expensive in most situations than network bandwidth.


Implementing it will be some copy-paste work; it could be implemented in a
few days. Given the feature freeze deadline, I will implement it after
that, if needed.



I think it's a feature we can't miss. But if it goes into the 4.2
window, we have to make sure we don't release with only total IOps and
then fix it in 4.3; that would confuse users.


Wido


-Wei






2013/5/31 Wido den Hollander <w...@widodh.nl>


Hi Wei,




On 05/30/2013 06:03 PM, Wei ZHOU wrote:


Hi,


I would like to merge the disk_io_throttling branch into master.
If nobody objects, I will merge it into master in 48 hours.
The purpose is:

Virtual machines run on the same storage device (local storage or shared
storage). Because of the rate limitations of the device (such as IOPS), if
one VM generates heavy disk activity, it may affect the disk performance of
other VMs running on the same storage device. It is necessary to set a
maximum rate and limit the disk I/O of VMs.



Looking at the code I see you make no distinction between Read and Write
IOps.


Qemu and libvirt support setting different rates for Read and Write IOps,
which could benefit a lot of users.
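For reference, libvirt exposes exactly this split per disk through the iotune element of the domain XML (the element names are real libvirt ones; the values are only illustrative):

```xml
<disk type='file' device='disk'>
  <!-- source/target elements omitted for brevity -->
  <iotune>
    <!-- separate read/write limits; libvirt rejects combining
         these with total_iops_sec on the same disk -->
    <read_iops_sec>1000</read_iops_sec>
    <write_iops_sec>500</write_iops_sec>
  </iotune>
</disk>
```

Note that libvirt treats total_iops_sec and the read/write pair as mutually exclusive on a given disk, which is another reason to settle on the right model before release.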


It's also strange that on the polling side you collect both Read and Write
IOps, but on the throttling side you only use a single global value.


Write IOps are usually much more expensive than Read IOps, so it seems
like a valid use case for an admin to set a lower value for Write IOps
than for Read IOps.


Since this only supports KVM at this point, I think it would be of great
value to at least have the mechanism in place to support both; implementing
this later would be a lot of work.


If a hypervisor doesn't support setting different values for read and
write, you can always sum both up and set that as the total limit.
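The fallback amounts to a one-line sum; a minimal sketch (the offering values, domain and device names are made up, and virsh blkdeviotune is shown only as the runtime tool that would apply the result):

```shell
# Fallback when the hypervisor only supports a total limit: sum the
# separate read and write IOPS limits and apply that as the total.
# READ_IOPS/WRITE_IOPS are hypothetical disk-offering values.
READ_IOPS=500
WRITE_IOPS=300
TOTAL_IOPS=$((READ_IOPS + WRITE_IOPS))
echo "$TOTAL_IOPS"
# Applying it to a running guest would look like:
#   virsh blkdeviotune guest1 vda --total-iops-sec "$TOTAL_IOPS"
```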


Can you explain why you implemented it this way?


Wido


The feature includes:

(1) set the maximum rate of VMs (in disk_offering, and global configuration)
(2) change the maximum rate of VMs
(3) limit the disk rate (total bps and iops)

JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192

FS (I will update later):
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling



Merge check list:

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new feature?
Unit tests are added.

* What testing has been done to check for potential regressions?
(1) set the bytes rate and IOPS rate in the CloudStack UI.
(2) VM operations, including deploy, stop, start, reboot, destroy, expunge, migrate, restore
(3) Volume operations, including attach, detach


To review the code, you can try
git diff c30057635d04a2396f84c588127d7ebe42e503a7 f2e5591b710d04cc86815044f5823e73a4a58944


Best regards,

Wei


[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling



[2] refs/heads/disk_io_throttling

[3]
https://issues.apache.org/jira/browse/CLOUDSTACK-1301
https://issues.apache.org/jira/browse/CLOUDSTACK-2071
(CLOUDSTACK-1301 - VM Disk I/O Throttling)
