On 28.10.19 07:33, Tuguoyi wrote:
> In check_constraints_on_bitmap(), the sanity check on the
> granularity will cause uint64_t integer left-shift overflow
> when cluster_size is 2M and the granularity is BIGGER than
> 32K. As a result, for a qcow2 disk with cluster_size set to
> 2M, we could not even create a dirty bitmap with default
> granularity. This patch fix the issue by dividing @len by
> granularity instead.
> 
> Signed-off-by: Guoyi Tu <tu.gu...@h3c.com>
> ---
>  block/qcow2-bitmap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/block/qcow2-bitmap.c b/block/qcow2-bitmap.c
> index 98294a7..71ac822 100644
> --- a/block/qcow2-bitmap.c
> +++ b/block/qcow2-bitmap.c
> @@ -172,8 +172,8 @@ static int check_constraints_on_bitmap(BlockDriverState *bs,
>      }
> 
>      if ((len > (uint64_t)BME_MAX_PHYS_SIZE << granularity_bits) ||
> -        (len > (uint64_t)BME_MAX_TABLE_SIZE * s->cluster_size <<
> -         granularity_bits))
> +        (DIV_ROUND_UP(len, granularity) > (uint64_t)BME_MAX_TABLE_SIZE *
> +         s->cluster_size))
This didn’t change because of this patch, but doesn’t this comparison need
a conversion of bits to bytes somewhere?

len / granularity gives us the number of bits needed for the bitmap.
BME_MAX_TABLE_SIZE is, as far as I can see, a number of bitmap clusters, so
multiplying it by the cluster size gives the number of bytes in the bitmap.
But the number of bits is eight times higher.

Another topic: Isn’t BME_MAX_TABLE_SIZE too big?  As it is, bitmap tables
can have a size of 1 GB, and that’s the table alone.  Depending on the
cluster size, the bitmap would take up at least 64 GB and cover at least
32 TB (at a granularity of 512 bytes).

Max

>      {
>          error_setg(errp, "Too much space will be occupied by the bitmap. "
>                     "Use larger granularity");
>