On Thu, Feb 14, 2013 at 03:48:51PM +0100, Stefan Hajnoczi wrote:
> On Wed, Feb 13, 2013 at 02:21:53PM +0100, Kevin Wolf wrote:
> > Signed-off-by: Kevin Wolf <kw...@redhat.com>
> > ---
> >  block/qcow2-cluster.c | 40 ++++++++++++++++++++++++++++------------
> >  block/qcow2.h         | 11 +++++++++++
> >  2 files changed, 39 insertions(+), 12 deletions(-)
> > 
> > diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> > index 0e804ba..a3b2447 100644
> > --- a/block/qcow2-cluster.c
> > +++ b/block/qcow2-cluster.c
> > @@ -756,31 +756,41 @@ out:
> >   * Check if there already is an AIO write request in flight which allocates
> >   * the same cluster. In this case we need to wait until the previous
> >   * request has completed and updated the L2 table accordingly.
> > + *
> > + * Returns:
> > + *   0       if there was no dependency. *cur_bytes indicates the number of
> > + *           bytes from guest_offset that can be read before the next
> > + *           dependency must be processed (or the request is complete)
> 
> s/read/accessed/
> 
> IMO "read" is too specific, the doc comment doesn't need to contain
> knowledge of what the caller will do at guest_offset.
Right. In fact, "read" is even wrong, if anything it should be "written
to".

> > + *
> > + *   -EAGAIN if we had to wait for another request, previously gathered
> > + *           information on cluster allocation may be invalid now. The
> > + *           caller must start over anyway, so consider *cur_bytes
> > + *           undefined.
> >   */
> >  static int handle_dependencies(BlockDriverState *bs, uint64_t guest_offset,
> > -                               unsigned int *nb_clusters)
> > +                               uint64_t *cur_bytes)
> >  {
> >      BDRVQcowState *s = bs->opaque;
> >      QCowL2Meta *old_alloc;
> > +    uint64_t bytes = *cur_bytes;
> > 
> >      QLIST_FOREACH(old_alloc, &s->cluster_allocs, next_in_flight) {
> > 
> > -        uint64_t start = guest_offset >> s->cluster_bits;
> > -        uint64_t end = start + *nb_clusters;
> > -        uint64_t old_start = old_alloc->offset >> s->cluster_bits;
> > -        uint64_t old_end = old_start + old_alloc->nb_clusters;
> > +        uint64_t start = guest_offset;
> 
> I'm not sure this is safe. Previously the function caught write
> requests which overlap at cluster granularity, now it will allow them?

At this point in the series, old_start and old_end are still aligned to
cluster boundaries. So the cases in which an overlap is detected stay
exactly the same with this patch. This is just a more precise
description of what we really wanted to compare.

Kevin