On 24.01.2019 at 15:40, Vladimir Sementsov-Ogievskiy wrote:
> 24.01.2019 17:17, Kevin Wolf wrote:
> > Depending on the exact image layout and the storage backend (tmpfs is
> > known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> > save us a lot of time e.g. during a mirror block job or qemu-img convert
> > with a fragmented source image (.bdrv_co_block_status on the protocol
> > layer can be called for every single cluster in the extreme case).
> >
> > We may only cache data regions because of possible concurrent writers.
> > This means that we can later treat a recently punched hole as data, but
> > this is safe. We can't cache holes because then we might treat recently
> > written data as holes, which can cause corruption.
> >
> > Signed-off-by: Kevin Wolf <kw...@redhat.com>
> > ---
> >  block/file-posix.c | 51 ++++++++++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 49 insertions(+), 2 deletions(-)
> >
> > diff --git a/block/file-posix.c b/block/file-posix.c
> > index 8aee7a3fb8..7272c7c99d 100644
> > --- a/block/file-posix.c
> > +++ b/block/file-posix.c
> > @@ -168,6 +168,12 @@ typedef struct BDRVRawState {
> >      bool needs_alignment;
> >      bool check_cache_dropped;
> >
> > +    struct seek_data_cache {
> > +        bool valid;
> > +        uint64_t start;
> > +        uint64_t end;
> > +    } seek_data_cache;
>
> Should we have some mutex-locking to protect it?
It is protected by the AioContext lock, like everything else in
BDRVRawState.

Kevin
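
For illustration only, here is a minimal sketch of how a data-only lseek
cache along the lines of the quoted hunk could be consulted and updated.
The struct mirrors the hunk above, but the helper names
(seek_data_cache_lookup, seek_data_cache_update) and everything around them
are invented for this sketch and are not taken from the patch; callers are
assumed to hold the AioContext lock, as discussed, so no separate mutex is
shown.

/*
 * Sketch only, not the actual patch: a possible lookup/update pair for a
 * data-region cache as described in the commit message. All callers are
 * assumed to run under the BDS's AioContext lock, so no extra mutex is
 * taken here.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct SeekDataCache {
    bool valid;
    uint64_t start;  /* inclusive start of a known data region */
    uint64_t end;    /* exclusive end of that data region */
} SeekDataCache;

/* Return true (and the number of data bytes from offset) on a cache hit. */
static bool seek_data_cache_lookup(const SeekDataCache *c, uint64_t offset,
                                   uint64_t *bytes_data)
{
    if (c->valid && offset >= c->start && offset < c->end) {
        *bytes_data = c->end - offset;
        return true;
    }
    return false;
}

/*
 * Remember the most recently found data region after a real
 * SEEK_DATA/SEEK_HOLE round trip. Only data regions are stored: if a
 * concurrent writer punches a hole afterwards, reporting the range as data
 * is still safe, whereas a cached hole could hide freshly written data.
 */
static void seek_data_cache_update(SeekDataCache *c, uint64_t data_start,
                                   uint64_t data_end)
{
    c->valid = (data_end > data_start);
    c->start = data_start;
    c->end = data_end;
}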