On Tue, Jun 15, 2021 at 01:29:58AM +0300, Nir Soffer wrote:
> oVirt has now started to use the qemu:allocation-depth meta context.
> In the past we used base:allocation and reported NBD_STATE_HOLE
> as a hole, and this broke in qemu 6.0.0.
>
> Now we have NBD_STATE_HOLE from base:allocation, and flags == 0
> from qemu:allocation-depth.
>
> We map flags == 0 to EXTENT_BACKING (1 << 3), and merge it with the
> flags from base:allocation.
>
> EXTENT_BACKING is an internal bit not exposed to users. It matches the
> backing=true proposed for "qemu-img map", and the BACKING_FILE
> bit used by qemu-img convert.
>
> We report a hole if:
>
>     flags & NBD_STATE_HOLE and flags & EXTENT_BACKING
>
> But is it really needed to consider NBD_STATE_HOLE?
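
If I follow, a minimal sketch of the merging you describe would look
like this (hypothetical helper names, not oVirt's actual code; the
NBD_STATE_* values are the base:allocation bits from the NBD protocol,
and EXTENT_BACKING is your 1 << 3):

    #include <stdbool.h>
    #include <stdint.h>

    #define NBD_STATE_HOLE  (1 << 0)  /* base:allocation: not allocated in this image */
    #define NBD_STATE_ZERO  (1 << 1)  /* base:allocation: reads as zero */
    #define EXTENT_BACKING  (1 << 3)  /* internal: qemu:allocation-depth reported 0 */

    /* Merge one extent's base:allocation flags with the value reported
     * by qemu:allocation-depth for the same extent. */
    static uint32_t merge_flags(uint32_t alloc_flags, uint32_t depth)
    {
        return depth == 0 ? alloc_flags | EXTENT_BACKING : alloc_flags;
    }

    /* The condition you quote: report a hole only when both bits are set. */
    static bool is_hole(uint32_t flags)
    {
        return (flags & NBD_STATE_HOLE) && (flags & EXTENT_BACKING);
    }
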
No. By definition, if qemu encounters a backing chain where the final
node refers to a (non-existent) backing node, the data that it provides
will be listed as unallocated and reads-as-zero. So any time
EXTENT_BACKING is set (because qemu:allocation-depth reported a depth of
zero over NBD, or because an updated qemu-img map reported
"backing":true or whatever we name it), you are guaranteed that you will
also see that extent as a hole. But the converse is not true: seeing a
hole does not guarantee that it is unallocated in the backing chain.

> Looking in nbd/server.c, when depth is reported as 0, we always
> get NBD_STATE_HOLE:
>
>     flags = (ret & BDRV_BLOCK_DATA ? 0 : NBD_STATE_HOLE) |
>             (ret & BDRV_BLOCK_ZERO ? NBD_STATE_ZERO : 0);
>
> Looks like we should use only qemu:allocation-depth to report holes,
> and ignore NBD_STATE_HOLE. Maybe later we can use NBD_STATE_HOLE
> to report sparseness (e.g. "allocated": True).

NBD_STATE_HOLE is designed to tell you whether the qcow2 file was
preallocated or not. It is set when qemu knows that writing to that
portion of the disk will require allocation of more storage. A portion
of the qcow2 file that defers to the backing chain (and is not found
anywhere in the chain) necessarily requires allocation. Knowing where
the holes are does not impact your ability to recreate a qcow2 chain,
but it may impact how much disk space your recreated chain uses compared
to how much was used in the original.

> What do you think?
>
> Nir

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org