On 03/12, Philipp Hahn wrote:
>
> Have you seen any other corruption
No,
> or is one of your patches likely to
> fix something like the issue mentioned above:
I am not sure I even understand the problem above ;) I mean, after a quick
look I do not see how this connects to FPU. $rdi == 2 looks
On 02/15, Raghavendra K T wrote:
>
> * Raghavendra K T [2015-02-15 11:25:44]:
>
> Resending V5 with the smp_mb__after_atomic() change, without bumping up the
> revision
Reviewed-by: Oleg Nesterov
Of course, this needs acks from the maintainers. And I agree that SLOWPATH
in .head
Well, I regret I mentioned the lack of a barrier after enter_slowpath ;)
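
For context, a sketch of what the added smp_mb__after_atomic() pairs with in
kvm_lock_spinning(); the surrounding code and the __tickets_equal() helper are
assumed from the series, not quoted:

    __ticket_enter_slowpath(lock);    /* atomic set_bit() on the flag */

    /*
     * Order the flag store against the head re-read below. On x86 a
     * locked set_bit() is already a full barrier, so this expands to a
     * compiler barrier there, but it documents the requirement and
     * supplies a real barrier on weakly ordered architectures.
     */
    smp_mb__after_atomic();

    /* check again: the lock may have become free while we queued */
    head = READ_ONCE(lock->tickets.head);
    if (__tickets_equal(head, want))
        goto out;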
On 02/15, Raghavendra K T wrote:
>
> @@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
>
> static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
> {
> - set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
> + set_bit(0, (volatile unsigned long *)&lock->tickets.head);
On 02/15, Raghavendra K T wrote:
>
> On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
>
>>> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>> * check again make sure it didn't become free while
>>> * we weren't looking.
On 02/13, Oleg Nesterov wrote:
>
> On 02/13, Raghavendra K T wrote:
> >
> > @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
> > * check again make sure it didn't become free while
> > * we weren't looking.
On 02/13, Raghavendra K T wrote:
>
> @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
> {
> struct __raw_tickets tmp = READ_ONCE(lock->tickets);
>
> - return tmp.tail != tmp.head;
> + return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG);
> }
Well
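
To see why the mask matters: with the slowpath flag kept in bit 0 of .head
(the layout this patch assumes) and TICKET_LOCK_INC == 2, a free lock whose
flag is still set reads head == (tail | 1), so the unmasked comparison calls
it locked. A self-contained model:

    #include <stdio.h>

    typedef unsigned char __ticket_t;
    #define TICKET_SLOWPATH_FLAG ((__ticket_t)1)

    int main(void)
    {
        /* a free lock whose slowpath flag was left set in head */
        __ticket_t tail = 4, head = 4 | TICKET_SLOWPATH_FLAG;

        printf("unmasked: %d\n", tail != head);  /* 1: bogus "locked" */
        printf("masked:   %d\n",
               tail != (head & ~TICKET_SLOWPATH_FLAG));  /* 0: free */
        return 0;
    }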
On 02/11, Jeremy Fitzhardinge wrote:
>
> On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> > I agree, and I have to admit I am not sure I fully understand why
> > unlock uses the locked add. Except we need a barrier to avoid the race
> > with the enter_slowpath() users, of course.
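
For reference, the unlock path being discussed, roughly as it stood in the
tree at the time (reconstructed from memory, so treat the details as
approximate). add_smp() is a LOCK-prefixed add, i.e. a full memory barrier
on x86, and that is what keeps the tail read from passing the unlocking
store:

    static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
        if (TICKET_SLOWPATH_FLAG &&
            static_key_false(&paravirt_ticketlocks_enabled)) {
            arch_spinlock_t prev;

            prev = *lock;
            add_smp(&lock->tickets.head, TICKET_LOCK_INC);

            /* add_smp() is a full mb(), so this read cannot be
             * satisfied before the head increment is visible */
            if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);
        } else
            __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
    }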
Damn, sorry for noise, forgot to mention...
On 02/12, Raghavendra K T wrote:
>
> +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
> + __ticket_t head)
> +{
> + if (head & TICKET_SLOWPATH_FLAG) {
> + arch_spinlock_t old, new;
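
The preview cuts the body off; for readability, here is the shape this
helper took in the version that was eventually merged (a reconstruction,
kernel context assumed, so verify against the actual series). The whole
.head_tail word is cmpxchg'ed, and a failed cmpxchg just means a contender
queued up, in which case the flag must stay set:

    static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
                                                         __ticket_t head)
    {
        if (head & TICKET_SLOWPATH_FLAG) {
            arch_spinlock_t old, new;

            old.tickets.head = head;
            new.tickets.head = head & ~TICKET_SLOWPATH_FLAG;

            /* clear only if nobody is queued behind the new owner */
            old.tickets.tail = new.tickets.head + TICKET_LOCK_INC;
            new.tickets.tail = old.tickets.tail;

            /* try to clear slowpath flag when there are no contenders */
            cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
        }
    }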
On 02/12, Raghavendra K T wrote:
>
> @@ -191,8 +189,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> * We need to check "unlocked" in a loop, tmp.head == head
> * can be false positive because of overflow.
> */
> - if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
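
The hunk is truncated; the loop being patched looks roughly like this after
the change (a reconstruction; __tickets_equal() masks out the slowpath bit).
The comment's point: tmp.head == head alone is not a reliable "same owner
still holds it" test, because the byte-sized head can wrap all the way
around while we wait:

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
        __ticket_t head = READ_ONCE(lock->tickets.head);

        for (;;) {
            struct __raw_tickets tmp = READ_ONCE(lock->tickets);
            /*
             * We need to check "unlocked" in a loop, tmp.head == head
             * can be false positive because of overflow.
             */
            if (__tickets_equal(tmp.head, tmp.tail) ||
                !__tickets_equal(tmp.head, head))
                break;

            cpu_relax();
        }
    }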
On 02/12, Raghavendra K T wrote:
>
> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
> * check again make sure it didn't become free while
> * we weren't looking.
> */
> - if (ACCESS_ONCE(lock->tickets.head) == want) {
>
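
With the flag moved into .head, the plain equality test above stops working:
head may read back as want | TICKET_SLOWPATH_FLAG. The fix compares with the
flag masked out, along these lines (reconstruction; helper name from the
series):

    static inline int __tickets_equal(__ticket_t one, __ticket_t two)
    {
        /* ignore bit 0, which is TICKET_SLOWPATH_FLAG */
        return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
    }

    /* in kvm_lock_spinning(): */
    head = READ_ONCE(lock->tickets.head);
    if (__tickets_equal(head, want)) {
        add_stats(TAKEN_SLOW_PICKUP, 1);
        goto out;
    }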
On 02/11, Raghavendra K T wrote:
>
> On 02/10/2015 06:56 PM, Oleg Nesterov wrote:
>
>> In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg
>> the whole .head_tail. Plus obviously more boring changes. This needs a
>> separate patch even _if_ this
On 02/10, Jeremy Fitzhardinge wrote:
>
> On 02/10/2015 05:26 AM, Oleg Nesterov wrote:
> > On 02/10, Raghavendra K T wrote:
> >> Unfortunately xadd could result in head overflow as tail is high.
> >>
> >> The other option was repeated cmpxchg, which is bad I believe.
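
The overflow in question is the carry out of the byte-sized head into tail
when the increment is applied to the combined word. A self-contained
demonstration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* little-endian x86: head is the low byte of head_tail */
        uint16_t head_tail = 0x04FE;    /* tail = 0x04, head = 0xFE */

        head_tail += 2;                 /* the xadd of TICKET_LOCK_INC */

        /* prints head=0x00 tail=0x05: head wrapped and the carry
         * silently incremented tail */
        printf("head=0x%02x tail=0x%02x\n",
               head_tail & 0xff, head_tail >> 8);
        return 0;
    }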
On 02/10, Denys Vlasenko wrote:
>
> # define HEAD_MASK (TICKET_SLOWPATH_FLAG-1)
>
> ...
> unlock_again:
>
> val = xadd(&lock->ticket.head_tail, TICKET_LOCK_INC);
> if (unlikely(!(val & HEAD_MASK))) {
> /* overflow. we inadvertently incremented the tail word.
> * tail's lsb is TICKET_SLOWPATH_FLAG. */
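
Denys's sketch is cut off; the idea, as far as the fragment allows it to be
reconstructed, is to let the carry happen and repair it afterwards. A hedged
paraphrase (the exact fixup in the original mail may differ; val is taken as
the post-increment value of the word here):

    val = xadd(&lock->head_tail, TICKET_LOCK_INC) + TICKET_LOCK_INC;
    if (unlikely(!(val & HEAD_MASK))) {
        /* head wrapped: the carry flipped tail's lsb, which is
         * TICKET_SLOWPATH_FLAG in this scheme, so flip it back */
        val ^= TICKET_SLOWPATH_FLAG;
    }
    if (unlikely(val & TICKET_SLOWPATH_FLAG)) {
        /* kick the waiter; any remaining tail damage is repaired
         * after the kick */
    }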
On 02/10, Raghavendra K T wrote:
>
> On 02/10/2015 06:23 AM, Linus Torvalds wrote:
>
>> add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>> if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
>>
>> into something like
>>
>> val = xadd(&lock->ticket.head_tail, TICKET_LOCK_INC);
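
For the record, the unlock that eventually landed sidesteps the overflow
differently: the xadd is done on the byte-sized head alone, so a wrap cannot
carry into tail, and with the flag living in head the same xadd fetches it
atomically. A reconstruction of the merged code:

    static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
        if (TICKET_SLOWPATH_FLAG &&
            static_key_false(&paravirt_ticketlocks_enabled)) {
            __ticket_t head;

            /* byte-wide xadd: returns the old head, flag included */
            head = xadd(&lock->tickets.head, TICKET_LOCK_INC);

            if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
                head &= ~TICKET_SLOWPATH_FLAG;
                __ticket_unlock_kick(lock, (head + TICKET_LOCK_INC));
            }
        } else
            __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
    }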
On 02/09, Raghavendra K T wrote:
>
> +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
> +{
> + arch_spinlock_t old, new;
> + __ticket_t diff;
> +
> + old.tickets = READ_ONCE(lock->tickets);
> + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head;
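
Completing the picture (hedged reconstruction of this v1 helper): diff ==
TICKET_LOCK_INC means exactly one ticket is outstanding, i.e. only the
current holder, so the flag can safely be dropped with a cmpxchg on the
whole word:

    old.tickets = READ_ONCE(lock->tickets);
    diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head;

    /* try to clear the slowpath flag when there were no contenders */
    if ((old.tickets.tail & TICKET_SLOWPATH_FLAG) &&
        (diff == TICKET_LOCK_INC)) {
        new = old;
        new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
        cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
    }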
On 02/06, Sasha Levin wrote:
>
> Can we modify it slightly to avoid potentially accessing invalid memory:
>
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 5315887..cd22d73 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
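
The invalid-memory concern, schematically: once the head increment releases
the lock, the structure embedding it may be freed by the new owner, so any
later load from *lock is a potential use-after-free. Roughly:

    /* CPU0 (unlock)                       CPU1 (next in line)
     *
     * add_smp(&lock->tickets.head, INC);  <- lock is now free
     *                                     acquires the lock, then
     *                                     frees the object that
     *                                     embeds it
     * lock->tickets.tail & SLOWPATH_FLAG  <- reads freed memory
     *
     * Fetching the flag with the unlocking xadd itself (flag kept
     * in head) leaves nothing to read after the release.
     */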
On 01/15, Christian Borntraeger wrote:
>
> On 15.01.2015 at 20:38, Oleg Nesterov wrote:
> > On 01/15, Christian Borntraeger wrote:
> >>
> >> --- a/arch/x86/include/asm/spinlock.h
> >> +++ b/arch/x86/include/asm/spinlock.h
> >> @@ -186,7 +186,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
On 01/15, Christian Borntraeger wrote:
>
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -186,7 +186,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> __ticket_t head = ACCESS_ONCE(lock->tickets.head);
>
> for (;;) {
> -
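
Separate from the overflow question, this hunk is also where the read inside
the wait loop has to stay an ACCESS_ONCE()/READ_ONCE() access: without it
the compiler may hoist the load out of the loop and spin forever on a stale
value. A minimal stand-alone illustration (hypothetical helper, not kernel
code):

    /* the volatile-qualified pointer forces a fresh load on every
     * iteration, which is what ACCESS_ONCE()/READ_ONCE() guarantee */
    static void wait_until_changed(const volatile unsigned char *head,
                                   unsigned char old)
    {
        while (*head == old)
            ;   /* cpu_relax() in the kernel */
    }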