On Sun, 02/23 20:10, Peter Lieven wrote:
> Am 22.02.2014 17:45, schrieb Fam Zheng:
> > On Sat, 02/22 14:00, Peter Lieven wrote:
> >> this patch tries to optimize zero write requests
> >> by automatically using bdrv_write_zeroes if it is
> >> supported by the format.
> >>
> >> i know that there is a lot of potential for discussion, but i would
> >> like to know what the others think.
> >>
> >> this should significantly speed up file system initialization and
> >> should speed zero write test used to test backend storage performance.
> >>
> >> the difference can simply be tested by e.g.
> >>
> >> dd if=/dev/zero of=/dev/vdX bs=1M
> >>
> >> Signed-off-by: Peter Lieven <p...@kamp.de>
> >> ---
> > With this patch, is it still possible to actually do a zero fill? Prefill is
> > usually writing zeroes too, but according to the semantics, bdrv_write_zeroes
> > may just set an L2 entry flag without allocating clusters, which won't satisfy
> > that.
> Can you specify which operation you mean exactly? I don't think that
> there is a problem, but maybe it would be better to add a check for
> bs->file != NULL so the optimization only takes place for the format,
> not for the protocol.
> 

Previously, users could do

dd if=/dev/zero of=/dev/vdX

to force backend allocation and mapping. This is meaningful for later IO
performance, while how long the dd takes doesn't matter as much, since it is a
one-time shot.

The same applies to your test case: yes, mkfs time may be improved, but it comes
at the cost of slower IO later: when the real user data arrives, the allocation
still needs to be done.

I would do this in qcow2:

 1. In qcow2_co_writev, allocate the cluster regardless of whether the data is
    zero or not.
 2. If the data is zero, set QCOW2_OFLAG_ZERO in the L2 entry.

Fam
