On 27/11/2014 13:29, Stefan Hajnoczi wrote:
> +bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
> +{
> +    unsigned long *p = map + BIT_WORD(start);
> +    const long size = start + nr;
> +    int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
> +    unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
> +    unsigned long dirty = 0;
> +    unsigned long old_bits;
> +
> +    while (nr - bits_to_clear >= 0) {
> +        old_bits = atomic_fetch_and(p, ~mask_to_clear);
> +        dirty |= old_bits & mask_to_clear;
> +        nr -= bits_to_clear;
> +        bits_to_clear = BITS_PER_LONG;
> +        mask_to_clear = ~0UL;
> +        p++;
> +    }
> +    if (nr) {
> +        mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
> +        old_bits = atomic_fetch_and(p, ~mask_to_clear);
> +        dirty |= old_bits & mask_to_clear;
> +    }
> +
> +    return dirty;
> +}

Same here; for the full-word iterations you can use atomic_xchg (exchanging
the word with 0), which is faster because on x86 atomic_fetch_and has to be
implemented as a compare-and-swap loop, while xchg is a single instruction.
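
Roughly something like this (untested sketch, assuming the existing
atomic_xchg macro from qemu/atomic.h): the first and last words may be
partial, so they keep atomic_fetch_and in order to preserve the bits
outside the range, while the full words in the middle are cleared with a
plain exchange.

bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
{
    unsigned long *p = map + BIT_WORD(start);
    const long size = start + nr;
    int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
    unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
    unsigned long dirty = 0;
    unsigned long old_bits;

    /* First word: may be partial, so keep the bits outside the range. */
    if (nr - bits_to_clear >= 0) {
        old_bits = atomic_fetch_and(p, ~mask_to_clear);
        dirty |= old_bits & mask_to_clear;
        nr -= bits_to_clear;
        p++;
        mask_to_clear = ~0UL;
    }

    /* Full words: the whole word is cleared, so a single xchg both
     * fetches the old contents and zeroes the word.
     */
    while (nr >= BITS_PER_LONG) {
        old_bits = atomic_xchg(p, 0);
        dirty |= old_bits;
        nr -= BITS_PER_LONG;
        p++;
    }

    /* Last word: may be partial again, so back to atomic_fetch_and. */
    if (nr) {
        mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
        old_bits = atomic_fetch_and(p, ~mask_to_clear);
        dirty |= old_bits & mask_to_clear;
    }

    return dirty;
}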

Paolo
