On 17.05.2016 at 11:15, Denis V. Lunev wrote:
> We should split requests even if they are smaller than write_zeroes_alignment.
> For example, we can have the following request:
>   offset 62k
>   size   4k
>   write_zeroes_alignment 64k
> The original code sent 1 request covering 2 qcow2 clusters, which
> resulted in both clusters being allocated. By splitting the request,
> we can handle the case where one of the two clusters can be zeroed as
> a whole, so that only 1 cluster is allocated after the operation.
> 
> Signed-off-by: Denis V. Lunev <d...@openvz.org>
> CC: Eric Blake <ebl...@redhat.com>
> CC: Kevin Wolf <kw...@redhat.com>
> ---
>  block/io.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/block/io.c b/block/io.c
> index cd6d71a..6a24ea8 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -1172,13 +1172,13 @@ static int coroutine_fn bdrv_co_do_write_zeroes(BlockDriverState *bs,
>          /* Align request.  Block drivers can expect the "bulk" of the request
>           * to be aligned.
>           */
> -        if (bs->bl.write_zeroes_alignment
> -            && num > bs->bl.write_zeroes_alignment) {
> +        if (bs->bl.write_zeroes_alignment) {
>              if (sector_num % bs->bl.write_zeroes_alignment != 0) {
>                  /* Make a small request up to the first aligned sector.  */
>                  num = bs->bl.write_zeroes_alignment;
>                  num -= sector_num % bs->bl.write_zeroes_alignment;

It turns out the patched code doesn't work: if this is a small request
that zeroes something in the middle of a single cluster (i.e. we have
untouched data both before and after the request in the same cluster),
num can now become greater than nb_sectors, so that we end up zeroing
too much.
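To make the failure mode concrete, here is a minimal standalone sketch
with hypothetical numbers (plain C, not the actual QEMU function; the
clamp mentioned in the final comment is only one conceivable guard, not
necessarily the fix the series should use):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical numbers: 64k clusters = 128 sectors of 512
         * bytes; a 4-sector request at sector 130 sits entirely inside
         * the second cluster (sectors 128..255), with untouched data
         * on both sides of it. */
        long alignment  = 128;  /* plays bs->bl.write_zeroes_alignment */
        long sector_num = 130;
        long nb_sectors = 4;

        long num = nb_sectors;
        if (alignment && sector_num % alignment != 0) {
            /* Head alignment as in the patched hunk, which no longer
             * checks num against the request size first. */
            num = alignment - sector_num % alignment;  /* 128 - 2 = 126 */
        }

        printf("num = %ld, nb_sectors = %ld\n", num, nb_sectors);
        /* Prints num = 126: since 126 > 4, 122 sectors beyond the end
         * of the request would be zeroed.  Clamping the head fragment,
         * e.g. num = MIN(num, nb_sectors), would avoid the overshoot,
         * but whether that is the right fix here is a separate
         * question. */
        return 0;
    }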

I'll send a test case that catches this and unstage the series for the
time being.

Kevin
