On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Limit block_status querying to request bounds on write notifier to
> avoid extra seeking.
I don’t understand this reasoning.  Checking whether something is
allocated for qcow2 should just mean an L2 cache lookup.  Which we have
to do anyway when we try to copy data off the source.

I could understand saying this makes the code easier, but I actually
don’t think it does.  If you implemented checking the allocation status
only for areas where the bitmap is reset (which I think this patch
should), then it’d just duplicate the existing loop.

> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
> ---
>  block/backup.c | 38 +++++++++++++++++++++-----------------
>  1 file changed, 21 insertions(+), 17 deletions(-)
> 
> diff --git a/block/backup.c b/block/backup.c
> index 11e27c844d..a4d37d2d62 100644
> --- a/block/backup.c
> +++ b/block/backup.c

[...]

> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>      wait_for_overlapping_requests(job, start, end);
>      cow_request_begin(&cow_request, job, start, end);
>  
> +    if (job->initializing_bitmap) {
> +        int64_t off, chunk;
> +
> +        for (off = offset; offset < end; offset += chunk) {

This is what I’m referring to, I think this loop should skip areas
that are clean.

> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
> +            if (ret < 0) {
> +                chunk = job->cluster_size;
> +            }
> +        }
> +    }
> +    ret = 0;
> +
>      while (start < end) {
>          int64_t dirty_end;
>  
> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>              continue; /* already copied */
>          }
>  
> -        if (job->initializing_bitmap) {
> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
> -            if (ret == 0) {
> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
> -                start += skip_bytes;
> -                continue;
> -            }
> -        }
> -
>          dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>                                                  end - start);
>          if (dirty_end < 0) {

Hm, you resolved that conflict differently from me.
I decided the bdrv_dirty_bitmap_next_zero() call should go before the
backup_bitmap_reset_unallocated() call so that we can then do a

  dirty_end = MIN(dirty_end, start + skip_bytes);

because we probably don’t want to copy anything past what
backup_bitmap_reset_unallocated() has inquired.

But then again this patch eliminates the difference anyway...

Max

> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>          goto out;
>      }
>  
> -    ret = backup_bitmap_reset_unallocated(s, offset, &count);
> +    ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
> +                                          &count);
>      if (ret < 0) {
>          goto out;
>      }
> 