On 1/24/19 8:17 AM, Kevin Wolf wrote:
> Depending on the exact image layout and the storage backend (tmpfs is
> known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> save us a lot of time e.g. during a mirror block job or qemu-img convert
> with a fragmented source image (.bdrv_co_block_status on the protocol
> layer can be called for every single cluster in the extreme case).
> 
> We may only cache data regions because of possible concurrent writers.
> This means that we can later treat a recently punched hole as data, but
> this is safe. We can't cache holes because then we might treat recently
> written data as holes, which can cause corruption.
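
To restate the safety argument with a hypothetical sketch (the field
names match this patch; seek_data_cache_hit is an invented name, and
the [start, end) granularity is an assumption):

typedef struct SeekDataCache {
    bool valid;
    uint64_t start;
    uint64_t end;   /* assuming [start, end) is known to contain data */
} SeekDataCache;

static bool seek_data_cache_hit(const SeekDataCache *sdc, uint64_t offset)
{
    /* Answering "data" for a range that was punched since we cached it
     * is merely pessimistic: the caller copies zeroes instead of
     * skipping them.  Answering "hole" for a range that was written
     * since caching would make the caller skip real data, which is why
     * holes are never cached. */
    return sdc->valid && offset >= sdc->start && offset < sdc->end;
}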

gluster copies heavily from file-posix's implementation; should it also
copy this cache of known data?  Should NBD also cache known data when
NBD_CMD_BLOCK_STATUS is available?

> 
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> ---
>  block/file-posix.c | 51 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 49 insertions(+), 2 deletions(-)
> 

>  
> +    /* Invalidate seek_data_cache if it overlaps */
> +    sdc = &s->seek_data_cache;
> +    if (sdc->valid && !(sdc->end < aiocb->aio_offset ||
> +                        sdc->start > aiocb->aio_offset + aiocb->aio_nbytes))
> +    {
> +        sdc->valid = false;
> +    }

Worth a helper function for this repeated action?
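
Maybe something like this (untested sketch; the helper name is invented,
and I'm assuming the cache lives in BDRVRawState as in this patch):

static void raw_invalidate_seek_data_cache(BDRVRawState *s,
                                           uint64_t offset, uint64_t bytes)
{
    SeekDataCache *sdc = &s->seek_data_cache;

    /* Drop the cached data range if it overlaps the written region */
    if (sdc->valid && !(sdc->end < offset || sdc->start > offset + bytes)) {
        sdc->valid = false;
    }
}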

Reviewed-by: Eric Blake <ebl...@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
