Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/a
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
arch/x86/Kconfig |1 +
arch/x86/include/asm/qspinlock.h | 20
arch/x86/include
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinlock: Revert to test-and-set on hypervisors
pvqspinlock, x86: Implement the paravirt qspinlock call patching
Waiman Long (9):
qspinlock: A simple generic 4-byte queue spinlock
qspinlock, x86: Enable x86-64 to use queue spinl
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
include/asm-generic/qspinlock.h | 132 +
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks
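To illustrate the queue-node lifecycle described above (a contending CPU parks on its own node and hands the node back as soon as it owns the lock), here is a minimal user-space MCS-style sketch. It is not the kernel's qspinlock code; the type names and the thread-local node are illustrative stand-ins for the per-CPU node array.

#include <stdatomic.h>
#include <stddef.h>

struct qnode {
	struct qnode *_Atomic next;
	atomic_int locked;            /* set to 1 by our predecessor on handover */
};

static _Thread_local struct qnode my_node;    /* stand-in for a per-CPU node */

void mcs_style_acquire(struct qnode *_Atomic *tail)
{
	struct qnode *node = &my_node;
	struct qnode *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, 0);

	/* Publish ourselves as the new queue tail. */
	prev = atomic_exchange(tail, node);
	if (prev) {
		/* Someone is ahead of us: link in and spin on our own node. */
		atomic_store(&prev->next, node);
		while (!atomic_load(&node->locked))
			;
	}
	/*
	 * At this point the lock word itself would be taken; the queue
	 * node is no longer referenced and can be reused for the next
	 * contended acquire, which is the property the patch relies on.
	 */
}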
From: Peter Zijlstra (Intel)
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h
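As a rough illustration of what "revert to a simple test-and-set lock" means in practice, the sketch below skips the queueing path and spins on a plain atomic exchange whenever a hypervisor is detected, so a preempted waiter cannot stall the waiters queued behind it. running_on_hypervisor() is a placeholder for the real detection, not an actual kernel API.

#include <stdatomic.h>
#include <stdbool.h>

/* Placeholder for the real detection (the kernel checks a CPU feature bit). */
extern bool running_on_hypervisor(void);

/* Returns true if the lock was taken via the test-and-set fallback. */
static bool virt_tas_lock(atomic_int *lock)
{
	if (!running_on_hypervisor())
		return false;                /* let the queued slowpath handle it */

	while (atomic_exchange_explicit(lock, 1, memory_order_acquire)) {
		/* Back off with plain loads until the lock looks free. */
		while (atomic_load_explicit(lock, memory_order_relaxed))
			;
	}
	return true;
}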
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
include/asm-generic/qspinlock_types.h |2 +
kernel
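For context on the clear_pending_set_locked() step mentioned above: with the generic lock-word layout (locked byte at bit 0, pending bit at bit 8), the pending-to-locked transition can be a single atomic add, because the waiter holding the pending bit is the only one allowed to perform it. A minimal sketch, with the bit positions stated as assumptions:

#include <stdatomic.h>

#define Q_LOCKED_VAL   (1U << 0)    /* assumed: locked byte at bit 0  */
#define Q_PENDING_VAL  (1U << 8)    /* assumed: pending bit at bit 8  */

/* Clear the pending bit and set the locked byte in one atomic operation;
 * valid only because pending is set and locked is 0 when this is called. */
static inline void clear_pending_set_locked(atomic_uint *val)
{
	atomic_fetch_add_explicit(val, Q_LOCKED_VAL - Q_PENDING_VAL,
				  memory_order_acquire);
}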
linear feedback shift register.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock.c | 68 +++-
kernel/locking/qspinlock_paravirt.h | 324 +++
2 files changed, 391 insertions(+), 1 deletions(-)
create mode 100644 kernel/locking
is needed to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
include/asm-gene
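The byte-store remark above is about the lock-word layout that the smaller-NR_CPUS optimization relies on: when the tail fits in 16 bits, the locked and pending fields each get a whole byte, so they can be written with ordinary byte stores instead of atomic read-modify-write operations. A sketch of that layout (little-endian field order shown; treat the exact bit split as an assumption rather than a copy of the header):

/*
 * 32-bit lock word, small-NR_CPUS variant (assumed layout):
 *
 *   bits  0- 7: locked byte
 *   bits  8-15: pending byte
 *   bits 16-17: tail index (which per-CPU node slot)
 *   bits 18-31: tail CPU number + 1
 *
 * Because locked and pending are whole bytes, an unlock can be a plain
 * byte store -- which is exactly what pre-EV56 Alpha cannot do atomically.
 */
struct qspinlock_sketch {
	union {
		unsigned int val;
		struct {
			unsigned char  locked;
			unsigned char  pending;
			unsigned short tail;
		};
	};
};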
                        Real Time   Sys Time   Usr Time
 ----------    ----     ---------   --------   --------
 ticketlock    2075       10.00      216.35      3.49
 qspinlock     3023       10.00      198.20      4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/locking/qsp
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 43 +++
kernel/Kconfig.locks |2 +-
2 files
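The "halting and kicking operations" referred to above amount to two hooks that the PV qspinlock code needs from a hypervisor backend: a wait(ptr, val) that halts the vCPU while the lock byte still holds the given value, and a kick(cpu) that wakes a specific halted vCPU. A hedged sketch of that interface shape, not the actual pv_lock_ops definition:

/* Sketch of the two hypervisor-provided operations the PV code needs. */
struct pv_lock_ops_sketch {
	/* Halt this vCPU while *ptr still equals val (re-check after wakeup). */
	void (*wait)(unsigned char *ptr, unsigned char val);
	/* Wake the halted vCPU identified by cpu. */
	void (*kick)(int cpu);
};

/* Native (bare-metal) stubs: never halt, kicking is a no-op. */
static void native_wait(unsigned char *ptr, unsigned char val) { (void)ptr; (void)val; }
static void native_kick(int cpu) { (void)cpu; }

static struct pv_lock_ops_sketch pv_ops_sketch = {
	.wait = native_wait,
	.kick = native_kick,
};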
significantly lowers the overhead of having
CONFIG_PARAVIRT_SPINLOCKS enabled, even for native code.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
arch/x86/Kconfig |2 +-
arch/x86/include/asm/paravirt.h | 29 +++
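The claim above is about avoiding an indirect call on the unlock fast path: on bare metal the PV unlock operation can be patched into (or inlined as) a plain byte store, so CONFIG_PARAVIRT_SPINLOCKS costs almost nothing when no hypervisor is involved. A simplified illustration, using a function pointer to stand in for the pvops indirection that call patching removes:

#include <stdatomic.h>

typedef _Atomic unsigned char lock_byte_t;

/* What the patched/native fast path boils down to: one byte store. */
static void native_unlock(lock_byte_t *locked)
{
	atomic_store_explicit(locked, 0, memory_order_release);
}

/* The unpatched PV path pays an indirect call for every unlock. */
static void (*pv_unlock_op)(lock_byte_t *locked) = native_unlock;

static inline void queued_unlock(lock_byte_t *locked)
{
	pv_unlock_op(locked);   /* call patching replaces this indirection
				   with the byte store above on bare metal */
}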
From: David Vrabel
This patch adds the necessary Xen specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: David Vrabel
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 64
fs under the
pv-qspinlock directory.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock_paravirt.h | 100 ++-
1 files changed, 98 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravirt.h
index
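Assuming the truncated word above is debugfs and the statistics are simple counters (the excerpt is cut off before the details), a minimal kernel-style sketch of exposing a couple of slowpath counters under a pv-qspinlock directory could look like the following; the counter names are made up for illustration:

#include <linux/debugfs.h>
#include <linux/errno.h>
#include <linux/init.h>

/* Illustrative counters only -- not the patch's actual statistics names. */
static u32 pv_stat_kick_count;
static u32 pv_stat_halt_count;

static struct dentry *pv_qspinlock_dir;

static int __init pv_qspinlock_stats_init(void)
{
	pv_qspinlock_dir = debugfs_create_dir("pv-qspinlock", NULL);
	if (!pv_qspinlock_dir)
		return -ENOMEM;

	debugfs_create_u32("kick_count", 0444, pv_qspinlock_dir,
			   &pv_stat_kick_count);
	debugfs_create_u32("halt_count", 0444, pv_qspinlock_dir,
			   &pv_stat_halt_count);
	return 0;
}
fs_initcall(pv_qspinlock_stats_init);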
()
so as to do the pv_kick() only if it is really necessary.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock.c | 10 ++--
kernel/locking/qspinlock_paravirt.h | 76 +-
2 files changed, 61 insertions(+), 25 deletions(-)
diff --git a/kernel/locking
without using atomic op.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock_paravirt.h | 28 +---
1 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravirt.h
index 9b4ac3d..41ee033 100644
On 04/13/2015 11:09 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
+__visible void __pv_queue_spin_unlock(struct qspinlock *lock)
+{
+ struct __qspinlock *l = (void *)lock;
+ struct pv_node *node;
+
+ if (likely(cmpxchg(&l->
On 04/13/2015 11:08 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
+static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
+{
+ struct __qspinlock *l = (void *)lock;
+ struct qspinlock **lp = NULL;
+ struct pv_node
On 04/13/2015 10:47 AM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
+void __init __pv_init_lock_hash(void)
+{
+ int pv_hash_size = 4 * num_possible_cpus();
+
+ if (pv_hash_size < (1U << LFSR_MIN_BITS))
+ pv_hash_s
On 04/09/2015 02:23 PM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 08:13:27PM +0200, Peter Zijlstra wrote:
On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
+#define PV_HB_PER_LINE (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
+static struct qspinlock **pv_hash(struct
mance benefit of qspinlock versus
ticket spinlock which got reduced in VM3 due to the overhead of
constant vCPUs halting and kicking.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h | 15 +--
kernel/locking/qspinlock.c | 94 +--
kernel/locking/qspinlock_unf
On 04/08/2015 08:01 AM, David Vrabel wrote:
On 07/04/15 03:55, Waiman Long wrote:
This patch adds the necessary Xen specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
This basically looks the same as the version I wrote, except I
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
include/asm-generic/qspinlock.h | 132 +
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks
Peter Zijlstra (Intel) (4):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinlock: Revert to test-and-set on hypervisors
pvqspinlock: Implement the paravirt qspinlock for x86
Waiman Long (11):
qspinlock: A simple generic 4-byte queue spinlock
qspinlock, x86: Enable x86-64 to use queue
value 0 in a somewhat random fashion depending
on the LFSR taps value that is being used. Callers can provide their own
taps value or use the default.
Signed-off-by: Waiman Long
---
include/linux/lfsr.h | 80 ++
1 files changed, 80 insertions(+), 0
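To make the LFSR reference above concrete, here is a small stand-alone Galois LFSR step in the spirit of the proposed lfsr.h helper. The width, taps value, and function names are assumptions for illustration, not the patch's actual interface; the key property is that a non-zero value never steps to zero, so it can walk a hash table in a pseudo-random order.

#include <stdint.h>

/* One Galois-LFSR step over a given taps mask; a non-zero value never
 * steps to zero as long as taps itself is non-zero. */
static inline uint32_t lfsr_step(uint32_t val, uint32_t taps)
{
	return (val >> 1) ^ ((val & 1U) ? taps : 0U);
}

/* Example: a 4-bit LFSR with the maximal-length taps mask 0x9 cycles
 * through all 15 non-zero 4-bit values before repeating. */
static uint32_t lfsr4_next(uint32_t val)
{
	return lfsr_step(val & 0xFU, 0x9U);
}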
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
include/asm-generic/qspinlock_types.h |2 +
kernel
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/a
is needed to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
include/asm-gene
linear feedback shift register.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock.c | 69 -
kernel/locking/qspinlock_paravirt.h | 321 +++
2 files changed, 389 insertions(+), 1 deletions(-)
create mode 100644 kernel/locking
From: Peter Zijlstra (Intel)
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h
vCPU state (vcpu_hashed) which enables the code
to delay CPU kicking until unlock time. Once this state is set,
the new lock holder will set _Q_SLOW_VAL and fill in the hash table
on behalf of the halted queue head vCPU.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock.c
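A sketch of the state machine this excerpt describes: the queue-head vCPU advertises that it has been (or is about to be) halted and hashed, and the unlocker only issues the expensive kick when it sees such a state. The enum values and helper below are illustrative assumptions, not the patch's code:

#include <stdatomic.h>
#include <stdbool.h>

/* Assumed per-node vCPU states (names mirror the description above). */
enum pv_vcpu_state {
	vcpu_running = 0,
	vcpu_halted,    /* queue head is halted, waiting on the lock byte  */
	vcpu_hashed,    /* lock is hashed; kick is deferred to unlock time */
};

struct pv_node_sketch {
	atomic_int state;
	int cpu;
};

/* Unlock-side check: only kick if the queue head actually parked itself. */
static bool unlock_needs_kick(struct pv_node_sketch *head)
{
	return atomic_load_explicit(&head->state, memory_order_acquire)
			!= vcpu_running;
}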
to do that which
will only be enabled if CONFIG_DEBUG_SPINLOCK is defined because of
the performance overhead it introduces.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock_paravirt.h | 58 +++
1 files changed, 58 insertions(+), 0 deletions(-)
diff --
significantly lowers the overhead of having
CONFIG_PARAVIRT_SPINLOCKS enabled, even for native code.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Waiman Long
---
arch/x86/Kconfig |2 +-
arch/x86/include/asm/paravirt.h | 28 +++-
This patch adds the necessary Xen specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 63 ---
kernel/Kconfig.locks | 2
                        Real Time   Sys Time   Usr Time
 ----------    ----     ---------   --------   --------
 ticketlock    2075       10.00      216.35      3.49
 qspinlock     3023       10.00      198.20      4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/locking/qsp
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 43 +++
kernel/Kconfig.locks |2 +-
2 files
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra (Intel)
---
arch/x86/Kconfig |1 +
arch/x86/include/asm/qspinlock.h | 20
arch/x86/include
without using atomic op.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock_paravirt.h | 28 +---
1 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravirt.h
index a210061..a9fe10d 100644
On 04/02/2015 03:48 PM, Peter Zijlstra wrote:
On Thu, Apr 02, 2015 at 07:20:57PM +0200, Peter Zijlstra wrote:
pv_wait_head():
pv_hash()
/* MB as per cmpxchg */
cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);
VS
__pv_queue_spin_unlock():
if (xchg(&l->locked, 0
On 04/01/2015 05:03 PM, Peter Zijlstra wrote:
On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
I am sorry that I don't quite get what you mean here. My point is that in
the hashing step, a cpu will need to scan an empty bucket to pu
On 04/01/2015 01:12 PM, Peter Zijlstra wrote:
On Wed, Apr 01, 2015 at 12:20:30PM -0400, Waiman Long wrote:
After more careful reading, I think the assumption that the presence of an
unused bucket means there is no match is not true. Consider the scenario:
1. cpu 0 puts lock1 into hb[0]
2. cpu
On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
On Wed, Apr 01, 2015 at 02:54:45PM -0400, Waiman Long wrote:
On 04/01/2015 02:17 PM, Peter Zijlstra wrote:
On Wed, Apr 01, 2015 at 07:42:39PM +0200, Peter Zijlstra wrote:
Hohumm.. time to think more I think ;-)
So bear with me, I've not r
On 04/01/2015 02:17 PM, Peter Zijlstra wrote:
On Wed, Apr 01, 2015 at 07:42:39PM +0200, Peter Zijlstra wrote:
Hohumm.. time to think more I think ;-)
So bear with me, I've not really pondered this well so it could be full
of holes (again).
After the cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_V
On 03/19/2015 08:25 AM, Peter Zijlstra wrote:
On Thu, Mar 19, 2015 at 11:12:42AM +0100, Peter Zijlstra wrote:
So I was now thinking of hashing the lock pointer; let me go and quickly
put something together.
A little something like so; ideally we'd allocate the hashtable since
NR_CPUS is kinda b
On 03/30/2015 12:29 PM, Peter Zijlstra wrote:
On Mon, Mar 30, 2015 at 12:25:12PM -0400, Waiman Long wrote:
I did it differently in my PV portion of the qspinlock patch. Instead of
just waking up the CPU, the new lock holder will check if the new queue head
has been halted. If so, it will set
On 03/27/2015 10:07 AM, Konrad Rzeszutek Wilk wrote:
On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:
On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
Ah nice. That could be spun out as a separate patch to optimize the existing
ticket locks I presume.
Yes I
On 03/25/2015 03:47 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:
Hi Waiman,
As promised; here is the paravirt stuff I did during the trip to BOS last week.
All the !paravirt patches are more or less the same as before (the only real
change is
On 03/19/2015 08:25 AM, Peter Zijlstra wrote:
On Thu, Mar 19, 2015 at 11:12:42AM +0100, Peter Zijlstra wrote:
So I was now thinking of hashing the lock pointer; let me go and quickly
put something together.
A little something like so; ideally we'd allocate the hashtable since
NR_CPUS is kinda b
On 03/19/2015 06:01 AM, Peter Zijlstra wrote:
On Wed, Mar 18, 2015 at 10:45:55PM -0400, Waiman Long wrote:
On 03/16/2015 09:16 AM, Peter Zijlstra wrote:
I do have some concern about this call site patching mechanism as the
modification is not atomic. The spin_unlock() calls are in many places
On 03/16/2015 09:16 AM, Peter Zijlstra wrote:
Hi Waiman,
As promised; here is the paravirt stuff I did during the trip to BOS last week.
All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).
The paravirt stuff is 'simple
. For the time being, unlock call site patching will
not be part of this patch series.
Peter Zijlstra (3):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinlock: Revert to test-and-set on hypervisors
Waiman Long (8):
qspinlock: A simple generic 4-byte queue spinloc
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/Kconfig |1 +
arch/x86/include/asm/qspinlock.h | 25 +
arch/x86/include/asm
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/asm-generic/qspinlo
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock.h | 132 +
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks
ded to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-gene
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock_types.h |2 +
kernel/locking
                        Real Time   Sys Time   Usr Time
 ----------    ----     ---------   --------   --------
 ticketlock    2075       10.00      216.35      3.49
 qspinlock     3023       10.00      198.20      4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
kernel/locking/qsp
From: Peter Zijlstra
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h | 14
its cpu number in whichever node is pointed to by the tail part
of the lock word. Secondly, pv_link_and_wait_node() will propagate the
existing head from the old to the new tail node.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/paravirt.h | 22 ++
arch/x86/include/asm
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel
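For readers unfamiliar with the key being renamed: it is a static key that gates the paravirt spinlock slow paths, so bare-metal boots leave the branch patched out. A hedged sketch of how such a key is declared and tested with the 2014-era static-key API (the helper functions around it are illustrative):

#include <linux/jump_label.h>

/* The renamed key: starts off (false) and is flipped on during guest setup. */
struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;

static inline bool pv_spinlocks_active(void)
{
	/* Compiles to a patched no-op branch when the key is off. */
	return static_key_false(&paravirt_spinlocks_enabled);
}

/* Illustrative: called from KVM/Xen guest init once PV spinlocks are wanted. */
void enable_pv_spinlocks(void)
{
	static_key_slow_inc(&paravirt_spinlocks_enabled);
}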
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 149 +--
kernel/Kconfig.locks | 2
obably caused
by the fact that contended qspinlock produces much less cacheline
contention than contended ticket spinlock and the test system is an
8-socket server.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 143 -
kernel/Kconfig.locks |
On 10/27/2014 02:02 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
My concern is that spin_unlock() can be called in many places, including
loadable kernel modules. Is the paravirt_patch_ident_32() function able to
patch all of them in reasonable
On 11/03/2014 05:35 AM, Peter Zijlstra wrote:
On Wed, Oct 29, 2014 at 04:19:09PM -0400, Waiman Long wrote:
arch/x86/include/asm/pvqspinlock.h| 411 +
I do wonder why all this needs to live in x86..
I haven't looked into the para-virtualization co
On 10/29/2014 03:05 PM, Waiman Long wrote:
On 10/27/2014 05:22 PM, Waiman Long wrote:
On 10/27/2014 02:04 PM, Peter Zijlstra wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/Kconfig |1 +
arch/x86/include/asm/qspinlock.h | 25 +
arch/x86/include/asm
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock.h | 118 +++
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks |7
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/asm-generic/qspinlo
                        Real Time   Sys Time   Usr Time
 ----------    ----     ---------   --------   --------
 ticketlock    2075       10.00      216.35      3.49
 qspinlock     3023       10.00      198.20      4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
kernel/locking/qsp
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock_types.h |2 +
kernel/locking
From: Peter Zijlstra
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h | 14
ded to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-gene
its cpu number in whichever node is pointed to by the tail part
of the lock word. Secondly, pv_link_and_wait_node() will propagate the
existing head from the old to the new tail node.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/paravirt.h | 19 ++
arch/x86/include/asm
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 149 +--
kernel/Kconfig.locks | 2
n than
the AIM7 disk workload. In this case, the unfairlock performs worse
than both the PV ticketlock and qspinlock. The performance of the 2
PV locks is comparable.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 138 -
kernel/Kconfig.locks
ode
to make more efficient use of the lock or finer granularity ones. The
main purpose is to make the lock contention problems more tolerable
until someone can spend the time and effort to fix them.
Peter Zijlstra (3):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinloc
On 10/27/2014 05:22 PM, Waiman Long wrote:
On 10/27/2014 02:04 PM, Peter Zijlstra wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
Since enabling paravirt spinlock
On 10/27/2014 02:04 PM, Peter Zijlstra wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
Since enabling paravirt spinlock will disable unlock function inlining,
a jump
On 10/27/2014 02:02 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
Since enabling paravirt spinlock will disable unlock function inlining
On 10/27/2014 01:27 PM, Peter Zijlstra wrote:
On Mon, Oct 27, 2014 at 01:15:53PM -0400, Waiman Long wrote:
On 10/24/2014 06:04 PM, Peter Zijlstra wrote:
On Fri, Oct 24, 2014 at 04:53:27PM -0400, Waiman Long wrote:
The additional register pressure may just cause a few more register moves
which
On 10/24/2014 04:57 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:29PM -0400, Waiman Long wrote:
v11->v12:
- Based on PeterZ's version of the qspinlock patch
(https://lkml.org/lkml/2014/6/15/63).
- Incorporated many of the review comments from Konrad Wilk and
Paolo
On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
Since enabling paravirt spinlock will disable unlock function inlining,
a jump label can be added to the unlock function without adding patch
sites all over the kernel.
But you don
On 10/24/2014 06:04 PM, Peter Zijlstra wrote:
On Fri, Oct 24, 2014 at 04:53:27PM -0400, Waiman Long wrote:
The additional register pressure may just cause a few more register moves
which should be negligible in overall performance. The additional
icache pressure, however, may have some
On 10/24/2014 04:47 AM, Peter Zijlstra wrote:
On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
+static inline void pv_init_node(struct mcs_spinlock *node)
+{
+ struct pv_qnode *pn = (struct pv_qnode *)node;
+
+ BUILD_BUG_ON(sizeof(struct pv_qnode) > 5 * sizeof(str
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/locking/qspinlock.c | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/asm-generic/qspinlo
Peter Zijlstra (3):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinlock: Revert to test-and-set on hypervisors
Waiman Long (8):
qspinlock: A simple generic 4-byte queue spinlock
qspinlock, x86: Enable x86-64 to use queue spinlock
qspinlock: Extract out code snippets for the n
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/Kconfig |1 +
arch/x86/include/asm/qspinlock.h | 25 +
arch/x86/include/asm
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock.h | 118 +++
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks |7
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock_types.h |2 +
kernel/locking
From: Peter Zijlstra
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h | 14
                        Real Time   Sys Time   Usr Time
 ----------    ----     ---------   --------   --------
 ticketlock    2075       10.00      216.35      3.49
 qspinlock     3023       10.00      198.20      4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
kernel/locking/qsp
ded to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-gene
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel
Signed-off-by: Waiman Long
---
arch/x86/include/asm/paravirt.h | 20 ++
arch/x86/include/asm/paravirt_types.h | 20 ++
arch/x86/include/asm/pvqspinlock.h| 403 +
arch/x86/include/asm/qspinlock.h | 44 -
arch/x86/kernel/paravirt-spinlocks.c |6
n than
the AIM7 disk workload. In this case, the unfairlock performs worse
than both the PV ticketlock and qspinlock. The performance of the 2
PV locks is comparable.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 138 -
kernel/Kconfig.locks
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 149 +--
kernel/Kconfig.locks | 2
On 06/18/2014 09:50 AM, Konrad Rzeszutek Wilk wrote:
On Wed, Jun 18, 2014 at 01:37:45PM +0200, Paolo Bonzini wrote:
On 17/06/2014 22:55, Konrad Rzeszutek Wilk wrote:
On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
From: Waiman Long
This patch extracts the logic for the
On 06/18/2014 08:03 AM, Paolo Bonzini wrote:
On 17/06/2014 00:08, Waiman Long wrote:
+void __pv_queue_unlock(struct qspinlock *lock)
+{
+	int val = atomic_read(&lock->val);
+
+	native_queue_unlock(lock);
+
+	if (val & _Q_LOCKED_SLOW)
+		___pv_kick_head(lock);
+}
On 06/17/2014 05:10 PM, Konrad Rzeszutek Wilk wrote:
On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter
On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
Because the qspinlock needs to touch a second cacheline; add a pending
bit and allow a single in-word spinner before we punt to the second
cacheline.
Could you add this in the de