On Thu, Apr 17, 2025 at 01:39:12PM -0500, Eric Blake wrote:
> When doing a sync=full mirroring, QMP drive-mirror requests full
> zeroing if it did not just create the destination, and blockdev-mirror
> requests full zeroing unconditionally.  This is because during a full
> sync, we must ensure that the portions of the disk that are not
> otherwise touched by the source still read as zero upon completion.
> 
> However, in mirror_dirty_init(), we were blindly assuming that if the
> destination allows punching holes, we should pre-zero the entire
> image; and if it does not allow punching holes, then treat the entire
> source as dirty rather than mirroring just the allocated portions of
> the source.  Without the ability to punch holes, this results in the
> destination file being fully allocated; and even when punching holes
> is supported, it causes duplicate I/O to the portions of the
> destination corresponding to chunks of the source that are allocated
> but read as zero.
> 
> Smarter is to avoid the pre-zeroing pass over the destination if it
> can be proved the destination already reads as zero.  Note that a
> later patch will then further improve things to skip writing to the
> destination for parts of the image where the source is zero; but even
> with just this patch, it is possible to see a difference for any BDS
> that can quickly report that it already reads as zero.
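
As a simplified model of the decision described above (stand-in types
and helper names only, not the actual mirror_dirty_init() code):

/*
 * Simplified model of the sync=full decision: pre-patch, the choice
 * depends only on whether the destination can punch holes; post-patch,
 * a destination that already reads as zero skips the pre-zeroing pass.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct Target {
    bool can_punch_holes;   /* destination supports efficient zeroing */
    bool reads_as_zero;     /* destination already reads as all zeroes */
} Target;

/* Pre-patch: ignores whether the target already reads as zero. */
static void dirty_init_old(Target *t)
{
    if (t->can_punch_holes) {
        puts("pre-zero entire destination, then mirror allocated chunks");
    } else {
        puts("treat entire source as dirty (fully allocates destination)");
    }
}

/* Post-patch: skip the pre-zeroing pass when it is provably redundant. */
static void dirty_init_new(Target *t)
{
    if (t->reads_as_zero) {
        puts("skip pre-zeroing; mirror only allocated chunks of source");
    } else {
        dirty_init_old(t);
    }
}

int main(void)
{
    Target t = { .can_punch_holes = true, .reads_as_zero = true };
    dirty_init_new(&t);
    return 0;
}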

Hmm.  When the destination reads as all zeroes but is not (yet)
sparse, and the user has opened the destination image with
"discard":"unmap" and "detect-zeroes":"unmap", then pre-patch this
would sparsify the destination, but post-patch it would leave the
destination allocated.

When "detect-zeroes" is at its default of 'off', or even at 'on'
(which says optimize zero writes, but don't worry about punching
holes), that's not a problem.  But when "detect-zeroes" is at 'unmap',
this is a regression in behavior.  I'll see if I can quickly adjust
that in v3.
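
Roughly, the adjustment could be a predicate along these lines (a
sketch with stand-in names; the real code would consult the
destination's detect-zeroes setting):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for QEMU's BlockdevDetectZeroesOptions values. */
typedef enum { DZ_OFF, DZ_ON, DZ_UNMAP } DetectZeroes;

/*
 * Keep the pre-zeroing pass when correctness requires it (the
 * destination does not already read as zero), or when the user
 * explicitly asked for hole punching, so that a non-sparse all-zero
 * destination still ends up sparse.
 */
static bool must_pre_zero(bool dest_reads_as_zero, DetectZeroes dz)
{
    if (!dest_reads_as_zero) {
        return true;
    }
    return dz == DZ_UNMAP;
}

int main(void)
{
    printf("%d\n", must_pre_zero(true, DZ_UNMAP)); /* 1: still sparsify */
    printf("%d\n", must_pre_zero(true, DZ_OFF));   /* 0: skip the pass */
    return 0;
}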

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org

