On Thu, Nov 03, 2011 at 02:34:24PM -0200, Marcelo Tosatti wrote:
> On Wed, Nov 02, 2011 at 03:43:49PM +0000, Stefan Hajnoczi wrote:
> > On Tue, Nov 1, 2011 at 6:06 PM, Marcelo Tosatti <mtosa...@redhat.com> wrote:
> > > On Thu, Oct 27, 2011 at 04:22:50PM +0100, Stefan Hajnoczi wrote:
> > >> +
> > >> +    for (sector_num = 0; sector_num < end; sector_num += n) {
> > >> +        if (block_job_is_cancelled(&s->common)) {
> > >> +            break;
> > >> +        }
> > >
> > > If cancellation is seen here in the last loop iteration,
> > > bdrv_change_backing_file below should not be executed.
> >
> > I documented this case in the QMP API.  I'm not sure if it's possible
> > to guarantee that the operation isn't just completing as you cancel
> > it.  Any blocking point between completion of the last iteration and
> > completing the operation is vulnerable to missing the cancel.  It's
> > easier to explicitly say the operation might just have completed when
> > you canceled, rather than trying to protect the completion path.  Do
> > you think it's a problem to have these loose semantics that I
> > described?
>
> No, that is ok.  I'm referring to bdrv_change_backing_file() being
> executed without the entire image being streamed.
>
> "if (sector_num == end && ret == 0)" includes both all sectors being
> streamed and all sectors except the last iteration being streamed (due
> to the job-cancelled break).
I don't see the case you mention.  Here is the code again:

    for (sector_num = 0; sector_num < end; sector_num += n) {
        if (block_job_is_cancelled(&s->common)) {
            break;
        }

If we are on the last iteration, then sector_num == end - m, where m > 0
is the number of sectors we are about to stream.  If we are cancelled
during this last iteration then sector_num is still end - m, so the
"if (sector_num == end && ret == 0)" case does not evaluate to true.
The only way we can reach sector_num == end is by having successfully
streamed those last m sectors.  Why?  Because sector_num is a 0-based
index, not a 1-based one, so it only advances to end once the final
iteration completes without being cancelled.

> > >> +
> > >> +        /* TODO rate-limit */
> > >> +        /* Note that even when no rate limit is applied we need to yield with
> > >> +         * no pending I/O here so that qemu_aio_flush() is able to
> > >> +         * return.
> > >> +         */
> > >> +        co_sleep_ns(rt_clock, 0);
> > >
> > > How do you plan to implement rate limit?
> >
> > It was implemented in the QED-specific image streaming series:
> >
> > http://repo.or.cz/w/qemu/stefanha.git/commitdiff/22f2c09d2fcfe5e49ac4604fd23e4744f549a476
> >
> > That implementation works fine and is small but I'd like to reuse the
> > migration speed limit, if possible.  That way we don't have 3
> > different rate-limiting implementations in QEMU :).
>
> One possibility would be to create a "virtual" block device for
> streaming, sitting on top of the real block device.  Then enforce block
> I/O limits on the virtual block device; the guest would remain accessing
> the real block device.

That's an interesting idea.  I have also experimented with rate
limiting, and it seems the amount of shareable code is really small -
the rate-limiting code is quite short to begin with.  So I'm now
leaning towards reimplementing it.

Stefan