At 14:13 +0100 on 11 May (1431353629), David Vrabel wrote:
> On 08/05/15 10:36, Jan Beulich wrote:
> >>
> >> +}
> >> +}
> >> smp_mb();
> >> }
> >
> > The old code had smp_mb() before _and_ after the check - is it really
> > correct to drop the one before (or effectively replace it by smp_rmb()
> > in observe_{lock,head}())?
On 08/05/15 10:36, Jan Beulich wrote:
>>
>> +}
>> +}
>> smp_mb();
>> }
>
> The old code had smp_mb() before _and_ after the check - is it really
> correct to drop the one before (or effectively replace it by smp_rmb()
> in observe_{lock,head}())?
Typical usage is:
d->is_dying
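For readers without the patch to hand, a rough sketch of the pieces under discussion may help. The names (spinlock_tickets_t, observe_lock, observe_head) come from the quoted patch; the bodies below, and in particular the absence of a full barrier before the sample, are only an illustration of the smp_rmb()-in-the-observe-helpers pattern that Jan's question is about, not the patch itself.

/* Illustrative sketch, not the submitted patch.  Assumes Xen-style
 * primitives: read_atomic(), smp_rmb(), smp_mb(), cpu_relax(). */
typedef union {
    u32 head_tail;
    struct {
        u16 head;               /* ticket currently allowed to hold the lock */
        u16 tail;               /* next ticket to be handed out */
    };
} spinlock_tickets_t;

typedef struct spinlock {
    spinlock_tickets_t tickets;
    /* debug/profiling fields omitted */
} spinlock_t;

static spinlock_tickets_t observe_lock(spinlock_tickets_t *t)
{
    spinlock_tickets_t v;

    smp_rmb();                                 /* read barrier before sampling */
    v.head_tail = read_atomic(&t->head_tail);  /* snapshot head and tail together */
    return v;
}

static u16 observe_head(spinlock_tickets_t *t)
{
    smp_rmb();
    return read_atomic(&t->head);
}

void _spin_barrier(spinlock_t *lock)
{
    spinlock_tickets_t sample = observe_lock(&lock->tickets);

    /* If the lock was held when sampled, wait until that holder releases it. */
    if ( sample.head != sample.tail )
        while ( observe_head(&lock->tickets) == sample.head )
            cpu_relax();
    smp_mb();                        /* the trailing barrier in the quoted hunk */
}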
>>> On 30.04.15 at 17:33, David Vrabel wrote:
> int _spin_trylock(spinlock_t *lock)
> {
> +    spinlock_tickets_t old, new;
> +
>     check_lock(&lock->debug);
> -    if ( !_raw_spin_trylock(&lock->raw) )
> +    old = observe_lock(&lock->tickets);
> +    if ( old.head != old.tail )
> +        return 0;
> +
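Jan's quote stops just before the interesting part. For illustration only: a trylock on a ticket lock typically finishes with a single cmpxchg() that bumps tail, succeeding only if the head/tail pair is still exactly what was sampled. A hedged sketch, reusing spinlock_tickets_t and observe_lock() from the sketch above and leaving out the debug/profiling bookkeeping of the real function:

/* Illustrative sketch of the rest of a ticket-lock trylock. */
int _spin_trylock(spinlock_t *lock)
{
    spinlock_tickets_t old, new;

    old = observe_lock(&lock->tickets);
    if ( old.head != old.tail )          /* a ticket is already outstanding */
        return 0;
    new = old;
    new.tail++;                          /* try to claim the next ticket ... */
    /* ... but only if nobody raced in since head/tail were sampled. */
    if ( cmpxchg(&lock->tickets.head_tail,
                 old.head_tail, new.head_tail) != old.head_tail )
        return 0;
    return 1;
}

A failed cmpxchg() simply means another CPU took a ticket in the meantime, so the trylock reports the lock as busy rather than queueing.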
On 05/05/15 14:56, Ian Campbell wrote:
> On Thu, 2015-04-30 at 16:33 +0100, David Vrabel wrote:
>>
>> void _spin_lock_irq(spinlock_t *lock)
>> {
>> -    LOCK_PROFILE_VAR;
>> -
>>     ASSERT(local_irq_is_enabled());
>>     local_irq_disable();
>> -    check_lock(&lock->debug);
>> -    while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
On Thu, 2015-04-30 at 16:33 +0100, David Vrabel wrote:
>
> void _spin_lock_irq(spinlock_t *lock)
> {
> -    LOCK_PROFILE_VAR;
> -
>     ASSERT(local_irq_is_enabled());
>     local_irq_disable();
> -    check_lock(&lock->debug);
> -    while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
> -
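Neither quote shows what replaces the deleted loop. Presumably, and this is an assumption rather than the patch text, the IRQ variant now just disables interrupts and falls through to the common ticket-lock acquire path, since once a CPU has taken a ticket it cannot back out, so there is no longer a trylock-and-retry loop during which interrupts could usefully be re-enabled:

/* Sketch of the likely replacement body, not the exact patch text. */
void _spin_lock_irq(spinlock_t *lock)
{
    ASSERT(local_irq_is_enabled());
    local_irq_disable();
    _spin_lock(lock);               /* common ticket-lock acquire path */
}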
Replace the byte locks with ticket locks. Ticket locks are: a) fair;
and b) perform better when contended since they spin without an atomic
operation.
The lock is split into two ticket values: head and tail. A locker
acquires a ticket by (atomically) increasing tail and using the
previous tail value as its ticket.
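The description is cut off at this point. The rest of the standard ticket-lock scheme it describes is that the locker then spins until head reaches its ticket, and the holder releases the lock by incrementing head. A minimal sketch of that acquire/release pair, using the spinlock_tickets_t layout from the first sketch; arch_fetch_and_add(), write_atomic() and cpu_relax() are assumed Xen-style helpers:

/* Illustrative ticket-lock acquire/release, not the patch itself. */
void _spin_lock(spinlock_t *lock)
{
    /* Atomically bump tail; the previous tail value is our ticket. */
    u16 ticket = arch_fetch_and_add(&lock->tickets.tail, 1);

    /* Spin with plain reads (no atomic operation) until it is our turn. */
    while ( observe_head(&lock->tickets) != ticket )
        cpu_relax();
    smp_mb();           /* keep the critical section after the acquisition */
}

void _spin_unlock(spinlock_t *lock)
{
    smp_mb();           /* keep the critical section before the release */
    /* Hand the lock to the next ticket; the holder is the only writer
     * of head, so a plain increment published atomically is enough. */
    write_atomic(&lock->tickets.head, (u16)(lock->tickets.head + 1));
}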