of patches works fine.
Feel free to add
Tested-by: Raghavendra K T #kvm pv
As far as performance is concerned (with my 16-core + HT machine running
16-vCPU guests [ both with and without the lfsr hash patchset ]), I do
not see anything significant to report, though I understand that we
could see much
On 03/20/2015 02:38 AM, Waiman Long wrote:
On 03/19/2015 06:01 AM, Peter Zijlstra wrote:
[...]
You are probably right. The initial apply_paravirt() was done before SMP
boot. Subsequent ones were at kernel module load time. I put a counter
in __native_queue_spin_unlock() and it registered
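(For reference, a minimal sketch of that kind of instrumentation; the counter name and the use of atomic_inc() are my assumptions, not Waiman's actual code:)

    /* hypothetical debug counter, not from the thread */
    static atomic_t native_unlock_calls = ATOMIC_INIT(0);

    __visible void __native_queue_spin_unlock(struct qspinlock *lock)
    {
        /* count how often the native path still runs after patching */
        atomic_inc(&native_unlock_calls);
        native_queue_spin_unlock(lock);
    }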
On 02/24/2015 08:50 PM, Greg KH wrote:
On Tue, Feb 24, 2015 at 03:47:37PM +0100, Ingo Molnar wrote:
* Greg KH wrote:
On Tue, Feb 24, 2015 at 02:54:59PM +0530, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does
On 02/24/2015 08:17 PM, Ingo Molnar wrote:
* Greg KH wrote:
On Tue, Feb 24, 2015 at 02:54:59PM +0530, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
On 02/16/2015 10:17 PM, David Vrabel wrote:
On 15/02/15 17:30, Raghavendra K T wrote:
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -41,7 +41,7 @@ static u8 zero_stats;
static inline void check_zero(void)
{
u8 ret;
- u8 old = ACCESS_ONCE(zero_stats);
+ u8 old = READ_ONCE(zero_stats);
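(Context for the hunk above: check_zero() does one racy read of zero_stats, and the series converts that read from ACCESS_ONCE() to READ_ONCE(). A reconstruction of the surrounding function; treat details as approximate:)

    static inline void check_zero(void)
    {
        u8 ret;
        u8 old;

        old = READ_ONCE(zero_stats);    /* single racy read of the reset flag */
        if (unlikely(old)) {
            ret = cmpxchg(&zero_stats, old, 0);
            /* the cmpxchg winner is the only CPU that resets the stats */
            if (ret == old)
                memset((void *)&spinlock_stats, 0, sizeof(spinlock_stats));
        }
    }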
On 02/15/2015 09:47 PM, Oleg Nesterov wrote:
Well, I regret I mentioned the lack of barrier after enter_slowpath ;)
On 02/15, Raghavendra K T wrote:
@@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
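(The barrier being discussed: setting the slowpath bit is an atomic bitop, not a full barrier, so a later read of the ticket head needs smp_mb__after_atomic(). A rough reconstruction of the shape that landed; details approximate:)

    static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
    {
        /* TICKET_SLOWPATH_FLAG lives in bit 0 of head after this series */
        set_bit(0, (volatile unsigned long *)&lock->tickets.head);
    }

    /* caller, e.g. kvm_lock_spinning(): */
    __ticket_enter_slowpath(lock);

    /*
     * set_bit() is atomic but not ordering; make sure the flag is
     * visible before we re-read the ticket head below.
     */
    smp_mb__after_atomic();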
* Raghavendra K T [2015-02-15 11:25:44]:
Resending the V5 with smp_mb__after_atomic() change without bumping up
revision
---8<---
From 0b9ecde30e3bf5b5b24009fd2ac5fc7ac4b81158 Mon Sep 17 00:00:00 2001
From: Raghavendra K T
Date: Fri, 6 Feb 2015 16:44:11 +0530
Subject: [PATCH RESEND V5]
On 02/15/2015 11:25 AM, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
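(The full problematic sequence, reconstructed from the changelog quoted above; the point is that the tickets.tail read and the slowpath kick happen after the lock is already released, so the lock memory may have been freed or reused by then:)

    prev = *lock;
    add_smp(&lock->tickets.head, TICKET_LOCK_INC);

    /* add_smp() is a full mb() */

    if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);    /* touches *lock after unlock */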
0.02
dbench 1x -1.77
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
[PeterZ: Detailed changelog]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
On 02/13, Raghavendra K T wrote:
@@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
struct __raw_tickets tmp = READ_ONCE(lock->tickets);
- return tmp.tail != tmp.head;
+ return tmp.t
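(This is the ticket_equals idea credited to Oleg in the tags: with the slowpath flag stored in head, every head/tail comparison has to mask it out. The helper as it appears in the merged fix, reconstructed here:)

    static inline int __tickets_equal(__ticket_t one, __ticket_t two)
    {
        /* compare two tickets while ignoring the slowpath flag bit */
        return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
    }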
0.02
dbench 1x -1.77
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
[PeterZ: Detailed changelog]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
On 02/12/2015 08:30 PM, Peter Zijlstra wrote:
On Thu, Feb 12, 2015 at 05:17:27PM +0530, Raghavendra K T wrote:
[...]
Linus suggested that we should not do any writes to lock after unlock(),
and we can move slowpath clearing to fastpath lock.
So this patch implements the fix with:
1. Moving the slowpath flag from tail to head (Oleg)
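(Where the clearing moved to: the ticket lock fastpath clears the flag once the lock is owned. A reconstruction of arch_spin_lock() from the merged fix; treat details as approximate:)

    static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
    {
        register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

        inc = xadd(&lock->tickets, inc);
        if (likely(inc.head == inc.tail))
            goto clear_slowpath;

        for (;;) {
            unsigned count = SPIN_THRESHOLD;

            do {
                inc.head = READ_ONCE(lock->tickets.head);
                if (__tickets_equal(inc.head, inc.tail))
                    goto clear_slowpath;
                cpu_relax();
            } while (--count);
            __ticket_lock_spinning(lock, inc.tail);
        }
    clear_slowpath:
        /* we own the lock; safe to clear the flag from the fastpath */
        __ticket_check_and_clear_slowpath(lock, inc.head);
        barrier();    /* make sure nothing creeps in before the lock is taken */
    }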
On 02/12/2015 07:32 PM, Oleg Nesterov wrote:
Damn, sorry for noise, forgot to mention...
On 02/12, Raghavendra K T wrote:
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
+ __ticket_t head)
+{
+ if (head & TICKET_SLOWPATH_FLAG) {
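(The helper this hunk introduces, in the form it took in the merged fix; reconstructed, details approximate. Note it cmpxchg()s the whole head_tail, which addresses Oleg's point later in the thread: if any contender is queued, tail differs from head + TICKET_LOCK_INC and the clear harmlessly fails:)

    static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
                                                         __ticket_t head)
    {
        if (head & TICKET_SLOWPATH_FLAG) {
            arch_spinlock_t old, new;

            old.tickets.head = head;
            new.tickets.head = head & ~TICKET_SLOWPATH_FLAG;
            old.tickets.tail = new.tickets.head + TICKET_LOCK_INC;
            new.tickets.tail = old.tickets.tail;

            /* try to clear slowpath flag when there are no contenders */
            cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
        }
    }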
On 02/12/2015 07:20 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -191,8 +189,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
* We need to check "unlocked" in a loop, tmp.head == head
* can be false positive because of overflow.
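(The loop in question; head equality alone can be a false positive once the ticket counter wraps, hence the double test. Reconstructed from the merged fix:)

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
        __ticket_t head = READ_ONCE(lock->tickets.head);

        for (;;) {
            struct __raw_tickets tmp = READ_ONCE(lock->tickets);
            /*
             * We need to check "unlocked" in a loop, tmp.head == head
             * can be a false positive because of overflow.
             */
            if (__tickets_equal(tmp.head, tmp.tail) ||
                    !__tickets_equal(tmp.head, head))
                break;

            cpu_relax();
        }
    }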
On 02/12/2015 07:07 PM, Oleg Nesterov wrote:
On 02/12, Raghavendra K T wrote:
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
* check again make sure it didn't become free while
* we weren't looking.
*/
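(The hunk above lands in kvm_lock_spinning()'s re-check; with the flag now in head, the comparison goes through __tickets_equal(). Reconstructed from the merged fix, details approximate:)

    /*
     * check again make sure it didn't become free while
     * we weren't looking.
     */
    head = READ_ONCE(lock->tickets.head);
    if (__tickets_equal(head, want)) {
        add_stats(TAKEN_SLOW_PICKUP, 1);
        goto out;    /* became free; skip the halt and take the lock */
    }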
dbench 2x -0.63
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
[Oleg: Moving slowpath flag to head, ticket_equals idea]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 87 -
ar
On 02/11/2015 11:08 PM, Oleg Nesterov wrote:
On 02/11, Raghavendra K T wrote:
On 02/10/2015 06:56 PM, Oleg Nesterov wrote:
In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg
the whole .head_tail. Plus obviously more boring changes. This needs a
separate patch even _if_
On 02/10/2015 06:56 PM, Oleg Nesterov wrote:
On 02/10, Raghavendra K T wrote:
On 02/10/2015 06:23 AM, Linus Torvalds wrote:
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
into something like
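(The transformation being suggested: fold the flag test into the unlock's own atomic by letting xadd() return the pre-increment head, so nothing dereferences the lock after it is released. The unlock path as it ended up in the merged fix, reconstructed:)

    __ticket_t head = xadd(&lock->tickets.head, TICKET_LOCK_INC);

    if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
        head &= ~TICKET_SLOWPATH_FLAG;
        /* kick the waiter whose ticket just became the owner */
        __ticket_unlock_kick(lock, (head + TICKET_LOCK_INC));
    }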
On 02/10/2015 06:23 AM, Linus Torvalds wrote:
On Mon, Feb 9, 2015 at 4:02 AM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 03:04:22PM +0530, Raghavendra K T wrote:
So we have 3 choices,
1. xadd
2. continue with current approach.
3. a read before unlock and also after that.
For the truly paranoid we have probe_kernel_address()
Ccing Davidlohr (sorry, I got confused by a similar address in the cc
list).
On 02/09/2015 08:44 PM, Oleg Nesterov wrote:
On 02/09, Raghavendra K T wrote:
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
+{
+ arch_spinlock_t old, new;
+ __ticket_t diff
slowpath still
could be set when somebody does arch_trylock. Handle that too by ignoring
slowpath flag during lock availability check.
[Jeremy: hinted missing TICKET_LOCK_INC for kick]
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
ar
On 02/09/2015 05:32 PM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 03:04:22PM +0530, Raghavendra K T wrote:
So we have 3 choices,
1. xadd
2. continue with current approach.
3. a read before unlock and also after that.
For the truly paranoid we have probe_kernel_address(), suppose the lock
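(A sketch of the "truly paranoid" option, assuming the 3.19-era probe_kernel_address(); the post-unlock read goes through a fault-safe accessor in case the lock memory was freed the instant we released it. This is my illustration, not code from the thread:)

    arch_spinlock_t prev = *lock;    /* sampled before release, as today */
    __ticket_t tail;

    add_smp(&lock->tickets.head, TICKET_LOCK_INC);    /* release the lock */

    /* fault-safe read: the lock may already have been freed */
    if (probe_kernel_address(&lock->tickets.tail, tail))
        return;    /* lock memory gone, nobody left to kick */

    if (unlikely(tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);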
On 02/09/2015 02:44 AM, Jeremy Fitzhardinge wrote:
On 02/06/2015 06:49 AM, Raghavendra K T wrote:
[...]
Linus suggested that we should not do any writes to lock after unlock(),
and we can move slowpath clearing to fastpath lock.
Yep, that seems like a sound approach.
Current approach
On 02/07/2015 12:27 AM, Sasha Levin wrote:
On 02/06/2015 09:49 AM, Raghavendra K T wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
On 02/06/2015 09:55 PM, Linus Torvalds wrote:
On Fri, Feb 6, 2015 at 6:49 AM, Raghavendra K T
wrote:
Paravirt spinlock clears slowpath flag after doing unlock.
[ fix edited out ]
So I'm not going to be applying this for 3.19, because it's much too
late and the patch is too scary
slowpath still
could be set when somebody does arch_trylock. Handle that too by ignoring
slowpath flag during lock availability check.
Reported-by: Sasha Levin
Suggested-by: Linus Torvalds
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/spinlock.h | 70 -
1 file changed
On 01/21/2015 01:42 AM, Waiman Long wrote:
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
Reviewed-by: Raghavendra K T