On Thu, Nov 27, 2014 at 05:43:56PM +0100, Paolo Bonzini wrote:
> 
> 
> On 27/11/2014 13:29, Stefan Hajnoczi wrote:
> > +bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
> > +{
> > +    unsigned long *p = map + BIT_WORD(start);
> > +    const long size = start + nr;
> > +    int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
> > +    unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
> > +    unsigned long dirty = 0;
> > +    unsigned long old_bits;
> > +
> > +    while (nr - bits_to_clear >= 0) {
> > +        old_bits = atomic_fetch_and(p, ~mask_to_clear);
> > +        dirty |= old_bits & mask_to_clear;
> > +        nr -= bits_to_clear;
> > +        bits_to_clear = BITS_PER_LONG;
> > +        mask_to_clear = ~0UL;
> > +        p++;
> > +    }
> > +    if (nr) {
> > +        mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
> > +        old_bits = atomic_fetch_and(p, ~mask_to_clear);
> > +        dirty |= old_bits & mask_to_clear;
> > +    }
> > +
> > +    return dirty;
> > +}
> 
> Same here; you can use atomic_xchg, which is faster because on x86
> atomic_fetch_and must do a compare-and-swap loop.

Will fix in v2.
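
For reference, a sketch of what the restructuring could look like: the word-aligned middle of the range is cleared with a plain atomic exchange (a single locked XCHG on x86), and only the partial first and last words keep the fetch-and, which costs a compare-and-swap loop there. GCC's `__atomic` builtins stand in for QEMU's `atomic_xchg`/`atomic_fetch_and` wrappers, and the `bitmap.h` helper macros are reimplemented locally, so this is an approximation for illustration, not the committed v2 patch.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Helper macros re-derived along the lines of QEMU's bitmap.h */
#define BITS_PER_LONG                 ((long)(sizeof(unsigned long) * CHAR_BIT))
#define BIT_WORD(nr)                  ((nr) / BITS_PER_LONG)
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) % BITS_PER_LONG))
#define BITMAP_LAST_WORD_MASK(nbits)  (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))

bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
{
    unsigned long *p = map + BIT_WORD(start);
    const long size = start + nr;
    long bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
    unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
    unsigned long dirty = 0;
    unsigned long old_bits;

    /* Partial first word: still needs fetch-and (a CAS loop on x86) */
    if (nr - bits_to_clear > 0) {
        old_bits = __atomic_fetch_and(p, ~mask_to_clear, __ATOMIC_SEQ_CST);
        dirty |= old_bits & mask_to_clear;
        nr -= bits_to_clear;
        bits_to_clear = BITS_PER_LONG;
        mask_to_clear = ~0UL;
        p++;
    }

    /* Full words: a plain atomic exchange with 0 suffices */
    if (bits_to_clear == BITS_PER_LONG) {
        while (nr >= BITS_PER_LONG) {
            if (*p) {
                old_bits = __atomic_exchange_n(p, 0, __ATOMIC_SEQ_CST);
                dirty |= old_bits;
            }
            nr -= BITS_PER_LONG;
            p++;
        }
    }

    /* Partial last word: again fetch-and on the remaining bits */
    if (nr) {
        mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
        old_bits = __atomic_fetch_and(p, ~mask_to_clear, __ATOMIC_SEQ_CST);
        dirty |= old_bits & mask_to_clear;
    }

    return dirty != 0;
}
```

The `if (*p)` test before the exchange skips already-clean words entirely, avoiding a needless locked write that would dirty the cache line.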
