On 05/31/2016 03:00 PM, Denis V. Lunev wrote:
> On 05/31/2016 09:42 PM, Eric Blake wrote:
>> On 05/30/2016 06:58 AM, Pavel Butsykin wrote:
>>
>>> Sorry, but it seems this will never happen, because the second write
>>> will not pass this check:
>>>
>>> uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
>>>                                                uint64_t offset,
>>>                                                int compressed_size)
>>> {
>>> ...
>>>     /* Compression can't overwrite anything. Fail if the cluster was already
>>>      * allocated. */
>>>     cluster_offset = be64_to_cpu(l2_table[l2_index]);
>>>     if (cluster_offset & L2E_OFFSET_MASK) {
>>>         qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
>>>         return 0;
>>>     }
>>> ...
>>>
>>> As you can see we can't do the compressed write in the already allocated
>>> cluster.
>> Umm, doesn't that defeat the point of compression, if every compressed
>> cluster becomes the head of a new cluster?  The whole goal of
>> compression is to be able to fit multiple clusters within one.
>>
> AFAIK the file will be sparse in that unused areas
IIRC, on the NTFS file system, the minimum hole size is 64k.  If you
also have 64k clusters, then you don't have a sparse file - every tail
of zero sectors will be explicit in the filesystem, if you are using
1:1 clusters for compression.  Other file systems may have finer
granularity for holes, but it's still rather awkward to be relying on
sparseness when a better solution is to pack compressed sectors
consecutively.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
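
[Editor's illustration] To make the "pack compressed sectors consecutively" point concrete, here is a minimal sketch of a byte-granular allocator that places compressed blobs back to back inside host clusters, loosely in the spirit of qcow2's qcow2_alloc_bytes(). The PackAlloc type, its fields, and pack_alloc_bytes() are invented for this sketch and are not the actual qcow2 implementation.

/* Illustrative only: pack compressed blobs back to back within 64k host
 * clusters instead of giving every blob its own cluster, so one host
 * cluster can hold several compressed guest clusters. */
#include <stdint.h>

#define CLUSTER_SIZE 65536

typedef struct PackAlloc {
    uint64_t next_cluster;   /* host offset of the next unallocated cluster */
    uint64_t free_byte;      /* first unused byte in the currently open cluster */
} PackAlloc;

/* Return the host offset at which 'size' compressed bytes should be stored.
 * A new cluster is opened only when the blob does not fit in the remainder
 * of the current one (assumes size <= CLUSTER_SIZE). */
static uint64_t pack_alloc_bytes(PackAlloc *a, uint32_t size)
{
    if (a->free_byte == 0 || a->free_byte + size > a->next_cluster) {
        /* current cluster exhausted (or none open yet): start a new one */
        a->free_byte = a->next_cluster;
        a->next_cluster += CLUSTER_SIZE;
    }
    uint64_t off = a->free_byte;
    a->free_byte += size;
    return off;
}

With 64k clusters and blobs that compress to a few kilobytes, many guest clusters end up sharing one host cluster, instead of each compressed write becoming the head of a fresh cluster as in the 1:1 scheme discussed above.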
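[Editor's illustration] And to check how coarse a filesystem's hole granularity actually is (the 64k figure cited above for NTFS), a small probe along these lines could be used. It relies only on the SEEK_HOLE extension to lseek(); the file name probe.img is a placeholder, and a filesystem without SEEK_HOLE support will simply report an error.

/* Write 4k of data, extend the file to 1 MiB without writing the rest,
 * then ask where the first hole starts. Fine-grained filesystems report
 * a hole right after the data; coarse ones report it at a larger boundary
 * or treat the tail as allocated. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "probe.img";
    char buf[4096];
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(buf, 0xaa, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), 0) < 0) {   /* 4k of data at offset 0 */
        perror("pwrite");
    }
    if (ftruncate(fd, 1024 * 1024) < 0) {        /* rest of the file unwritten */
        perror("ftruncate");
    }

    off_t hole = lseek(fd, 0, SEEK_HOLE);        /* first hole at or after 0 */
    if (hole == (off_t)-1) {
        perror("SEEK_HOLE");                     /* not supported here */
    } else {
        printf("first hole starts at %lld\n", (long long)hole);
    }
    close(fd);
    return 0;
}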