* Zhi Yong Wu <wu...@linux.vnet.ibm.com> [2011-07-28 05:53]:
> The main goal of the patch is to effectively cap the disk I/O speed or
> counts of a single VM. It is only a draft, so it unavoidably has some
> drawbacks; if you catch any, please let me know.
>
> The patch mainly introduces one block I/O throttling algorithm, one
> timer, and one block queue for each I/O-limits-enabled drive.
>
> When a block request comes in, the throttling algorithm checks whether
> its I/O rate or count exceeds the limits; if so, the request is
> enqueued on the block queue, and the timer periodically services the
> queued requests.
>
> The following features are available:
> (1) global bps limit
>     -drive bps=xxx          in bytes/s
> (2) bps limit for reads only
>     -drive bps_rd=xxx       in bytes/s
> (3) bps limit for writes only
>     -drive bps_wr=xxx       in bytes/s
> (4) global iops limit
>     -drive iops=xxx         in ios/s
> (5) iops limit for reads only
>     -drive iops_rd=xxx      in ios/s
> (6) iops limit for writes only
>     -drive iops_wr=xxx      in ios/s
> (7) combinations of the above limits
>     -drive bps=xxx,iops=xxx
>
> Known limitations:
> (1) #1 cannot coexist with #2 or #3
> (2) #4 cannot coexist with #5 or #6
> (3) When a bps/iops limit is set to a small value such as 511 bytes/s,
>     the VM will hang. We are considering how to handle this scenario.
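For anyone who wants to play with the idea outside of QEMU, here is a
minimal standalone sketch of the check-and-queue scheme described above.
All names (ThrottleState, throttle_allowed, throttle_timer_cb) and the
100 ms accounting slice are made up for illustration; the actual patch
hooks into QEMU's block layer and timer infrastructure rather than a
toy main():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SLICE_NS 100000000LL        /* 100 ms accounting slice */

typedef struct ThrottleState {
    uint64_t bps_limit;             /* bytes per second, 0 = unlimited */
    uint64_t iops_limit;            /* ops per second, 0 = unlimited */
    uint64_t bytes_this_slice;      /* bytes dispatched in current slice */
    uint64_t ios_this_slice;        /* ops dispatched in current slice */
} ThrottleState;

/* Return true if a request of 'bytes' may be dispatched now; otherwise
 * the caller queues it and lets the periodic timer retry after the
 * slice resets. */
static bool throttle_allowed(ThrottleState *ts, uint64_t bytes)
{
    double slice_secs = SLICE_NS / 1e9;

    if (ts->bps_limit &&
        ts->bytes_this_slice + bytes > ts->bps_limit * slice_secs) {
        return false;
    }
    if (ts->iops_limit &&
        ts->ios_this_slice + 1 > ts->iops_limit * slice_secs) {
        return false;
    }
    ts->bytes_this_slice += bytes;
    ts->ios_this_slice++;
    return true;
}

/* Called from the periodic timer: open a fresh slice.  A real
 * implementation would then re-submit queued requests through
 * throttle_allowed() until one fails again. */
static void throttle_timer_cb(ThrottleState *ts)
{
    ts->bytes_this_slice = 0;
    ts->ios_this_slice = 0;
}

int main(void)
{
    ThrottleState ts = { .bps_limit = 1024 * 1024 };    /* 1 MB/s */

    /* Three 32 KiB requests fit in a 100 ms slice at 1 MB/s
     * (3 * 32768 = 98304 <= 104857.6 bytes); the fourth does not. */
    for (int i = 0; i < 4; i++) {
        printf("request %d: %s\n", i,
               throttle_allowed(&ts, 32 * 1024) ? "dispatch" : "queue");
    }
    throttle_timer_cb(&ts);
    printf("after timer: %s\n",
           throttle_allowed(&ts, 32 * 1024) ? "dispatch" : "queue");
    return 0;
}

Note that a limit smaller than one request per slice can never make
progress, which is presumably the hang described in limitation (3).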
I don't yet have detailed info, but we've got a memory leak in the
code. After running the VM with a 1 MB read and write limit for
8 hours or so:

  -drive bps_rd=$((1*1024*1024)),bps_wr=$((1*1024*1024))

I've got my system swapping, with 43G resident in memory:

  9913 root  20  0 87.3g 43g 548 D 9.6 34.5 44:00.87 qemu-system-x86

It would be worth looking through the code, and maybe a valgrind run,
to catch the leak (sample invocation below).

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com
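For the leak hunt, something along these lines should narrow down the
allocation site. This is an illustrative invocation: file=disk.img and
the binary name are placeholders for the actual setup, and expect
valgrind to slow the guest down considerably:

  valgrind --leak-check=full --show-reachable=yes \
      qemu-system-x86_64 \
      -drive file=disk.img,bps_rd=$((1*1024*1024)),bps_wr=$((1*1024*1024))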