From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the
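A minimal sketch of the mechanism described above (illustrative only, not the actual arch/x86/xen/spinlock.c): lock_waiting, waiting_cpus, xen_lock_spinning and xen_unlock_kick are named in the posting, while poll_for_kick() and kick_cpu() are hypothetical stand-ins for the Xen event-channel primitives, and the atomicity/irq handling of the real code is omitted.

    #include <stdint.h>
    #include <stddef.h>

    typedef uint8_t __ticket_t;
    struct arch_spinlock;                          /* details elided */

    struct lock_waiting {
        struct arch_spinlock *lock;                /* lock this CPU is blocked on */
        __ticket_t want;                           /* ticket number it waits for */
    };

    static struct lock_waiting lock_waiting[64];   /* one slot per CPU */
    static uint64_t waiting_cpus;                  /* mask of CPUs in the slowpath */

    void poll_for_kick(int cpu);                   /* hypothetical event-channel block */
    void kick_cpu(int cpu);                        /* hypothetical event-channel send */

    void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want, int cpu)
    {
        lock_waiting[cpu].lock = lock;             /* advertise what we wait for */
        lock_waiting[cpu].want = want;
        waiting_cpus |= 1ULL << cpu;

        poll_for_kick(cpu);                        /* sleep until the holder kicks us */

        waiting_cpus &= ~(1ULL << cpu);            /* unregister on the way out */
        lock_waiting[cpu].lock = NULL;
    }

    void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
    {
        for (int cpu = 0; cpu < 64; cpu++)         /* wake whoever holds ticket 'next' */
            if ((waiting_cpus & (1ULL << cpu)) &&
                lock_waiting[cpu].lock == lock &&
                lock_waiting[cpu].want == next) {
                kick_cpu(cpu);
                break;
            }
    }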
From: Jeremy Fitzhardinge
Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit. This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU syste
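A sketch of what the increment-by-2 scheme boils down to (the names below are illustrative stand-ins for the series' constants):

    #include <stdint.h>

    typedef uint16_t __ticket_t;                 /* still room for 32k+ CPUs at inc=2 */

    #define TICKET_LOCK_INC      ((__ticket_t)2) /* head/tail now advance by 2 */
    #define TICKET_SLOWPATH_FLAG ((__ticket_t)1) /* bit 0 is free for the slowpath flag */

    /* A ticket value with the slowpath bit masked off. */
    static inline __ticket_t ticket_number(__ticket_t raw)
    {
        return raw & ~TICKET_SLOWPATH_FLAG;
    }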
From: Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/spinlock.c | 28 +---
1 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2ed5d05..7b89439 100644
--- a/arch/x86/xen
From: Jeremy Fitzhardinge
Use __ticket_t for the ticket argument to the pvops, to prevent
unnecessary zero-extension in the calling code.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/paravirt.h |6 --
arch/x86/include/asm/spinlock_types.h |4
arch/x86/xen
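The point in prototype form (a sketch only; the real change is to the pv_lock_ops definitions, which the diffstat above only hints at, and the function names here are illustrative):

    #include <stdint.h>

    typedef uint16_t __ticket_t;
    struct arch_spinlock;

    /* Before: an 'unsigned int' ticket forces the caller to zero-extend the 16-bit value. */
    void unlock_kick_old(struct arch_spinlock *lock, unsigned int ticket);

    /* After: taking __ticket_t lets the caller hand over the raw 16-bit ticket as-is. */
    void unlock_kick_new(struct arch_spinlock *lock, __ticket_t ticket);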
From: Jeremy Fitzhardinge
Maintain a flag in both LSBs of the ticket lock which indicates whether
anyone is in the lock slowpath and may need kicking when the current
holder unlocks. The flags are set when the first locker enters
the slowpath, and cleared when unlocking to an empty queue.
In
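A rough sketch of that protocol (later revisions in this thread keep the flag only in the tail's LSB, as done here; GCC builtins stand in for the kernel's atomics, and the sketch glosses over the lock-vs-flag races that later messages discuss):

    #include <stdint.h>

    typedef uint8_t __ticket_t;
    #define TICKET_LOCK_INC      ((__ticket_t)2)
    #define TICKET_SLOWPATH_FLAG ((__ticket_t)1)

    struct ticket_lock {
        __ticket_t head;                          /* next ticket to be served */
        __ticket_t tail;                          /* next ticket to be handed out */
    };

    /* First locker to give up spinning marks the lock as contended. */
    static void mark_slowpath(struct ticket_lock *lock)
    {
        if (!(lock->tail & TICKET_SLOWPATH_FLAG))
            __atomic_or_fetch(&lock->tail, TICKET_SLOWPATH_FLAG, __ATOMIC_RELAXED);
    }

    /* On unlock, drop the flag again once nobody is queued behind the new head. */
    static void maybe_clear_slowpath(struct ticket_lock *lock, __ticket_t new_head)
    {
        if ((lock->tail & ~TICKET_SLOWPATH_FLAG) == new_head)
            __atomic_and_fetch(&lock->tail, (__ticket_t)~TICKET_SLOWPATH_FLAG,
                               __ATOMIC_RELAXED);
    }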
From: Jeremy Fitzhardinge
We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.
Signed-off-by: Jeremy Fitzhar
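In kernel-flavoured sketch form, the pattern being described is roughly the following (a simplified fragment, not the actual patch; local_irq_save()/local_irq_restore() and this_cpu_ptr() are the usual kernel helpers, everything else is elided):

    static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
    {
        struct lock_waiting *w = this_cpu_ptr(&lock_waiting);
        unsigned long flags;

        /* Nothing below may be interrupted by a handler that itself blocks on
         * a lock, or it would overwrite this CPU's w->lock/w->want. */
        local_irq_save(flags);

        w->want = want;
        w->lock = lock;

        /* ... register in waiting_cpus and block until kicked ... */

        w->lock = NULL;        /* tear down in the reverse order */
        w->want = 0;

        local_irq_restore(flags);
    }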
From: Jeremy Fitzhardinge
__ticket_unlock_kick() is now only called from known slowpaths, so there's
no need for it to do any checking of its own.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/paravirt.h |2 +-
arch/x86/include/asm/spinlock.h | 14 --
2
From: Jeremy Fitzhardinge
This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.
Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled
From: Jeremy Fitzhardinge
If we're releasing the lock into an uncontended state, there's nobody
waiting on it, so there's no need to kick anyone.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/kernel/paravirt-spinlocks.c |3 +--
1 files changed, 1 insertions(+), 2 del
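In sketch form (illustrative userspace C; unlock_kick() stands in for the pvop, and the slowpath flag is ignored here for brevity):

    #include <stdint.h>

    typedef uint8_t __ticket_t;
    #define TICKET_LOCK_INC ((__ticket_t)2)

    struct ticket_lock { __ticket_t head, tail; };

    void unlock_kick(struct ticket_lock *lock, __ticket_t next);   /* pvop stand-in */

    static void ticket_unlock(struct ticket_lock *lock)
    {
        __ticket_t next = __atomic_add_fetch(&lock->head, TICKET_LOCK_INC,
                                             __ATOMIC_RELEASE);

        /* Uncontended: the tail caught up with the new head, nobody is queued,
         * so there is no one to kick and the hypercall can be skipped. */
        if (__atomic_load_n(&lock->tail, __ATOMIC_RELAXED) != next)
            unlock_kick(lock, next);
    }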
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice
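The overall shape being proposed, as a sketch (names follow the description above; SPIN_THRESHOLD's value is illustrative, GCC builtins stand in for the kernel's atomics, and the slowpath flag is left out):

    #include <stdint.h>

    typedef uint8_t __ticket_t;
    #define TICKET_LOCK_INC ((__ticket_t)2)
    #define SPIN_THRESHOLD  (1 << 11)            /* illustrative bound on busy-waiting */

    struct ticket_lock { __ticket_t head, tail; };

    /* The two pvops hooks: native builds patch them out, Xen/KVM fill them in. */
    void lock_spinning(struct ticket_lock *lock, __ticket_t ticket);
    void unlock_kick(struct ticket_lock *lock, __ticket_t next);

    static void ticket_lock(struct ticket_lock *lock)
    {
        __ticket_t me = __atomic_fetch_add(&lock->tail, TICKET_LOCK_INC,
                                           __ATOMIC_ACQUIRE);
        for (;;) {
            /* Fast path: an ordinary ticket spin, no paravirt involvement. */
            for (unsigned int i = 0; i < SPIN_THRESHOLD; i++)
                if (__atomic_load_n(&lock->head, __ATOMIC_ACQUIRE) == me)
                    return;
            /* Slow path: we have spun for a long time, let the hypervisor
             * layer block us until the holder kicks ticket 'me'. */
            lock_spinning(lock, me);
        }
    }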
From: Jeremy Fitzhardinge
Otherwise the generated code for raw_spin_lock will look awful.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/paravirt.h |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm
From: Srivatsa Vaddagiri
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h |7 +++
arch/x86/kernel/paravirt-spinlocks.c | 23 ++-
2 files changed, 9 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops
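A simplified fragment in the style of the series showing how that wiring looks on the Xen side (PV_CALLEE_SAVE_REGS_THUNK()/PV_CALLEE_SAVE() are the existing paravirt helpers that generate a register-preserving thunk, so the call left in the fast path clobbers almost nothing; the function bodies are elided):

    static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
    {
        /* ... slowpath body: register in lock_waiting and block ... */
    }
    PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);

    void __init xen_init_spinlocks(void)
    {
        pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
        pv_lock_ops.unlock_kick = xen_unlock_kick;
    }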
On 09/02/2011 07:47 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> From: Jeremy Fitzhardinge
>>
>> We need to make sure interrupts are disabled while we're relying on the
>> contents of the per-cpu lock_waitin
On 09/02/2011 07:48 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> + /* Make sure an interrupt handler can't upset things in a
>> + partially setup state. */
>> local_irq_save(flags);
>>
>
On 09/02/2011 11:46 AM, Eric Northup wrote:
> On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge wrote:
>> From: Jeremy Fitzhardinge
>>
>> Maintain a flag in both LSBs of the ticket lock which indicates whether
>> anyone is in the lock slowpath and may need kicking
On 09/02/2011 07:49 AM, Peter Zijlstra wrote:
> On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
>> From: Srivatsa Vaddagiri
>>
>> We must release the lock before checking to see if the lock is in
>> slowpath or else there's a potential race where
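The ordering being insisted on, as a sketch (userspace stand-in; the sequentially-consistent fetch-add maps to a locked RMW, i.e. a full barrier, on x86, which is what keeps the flag check from being hoisted above the release):

    #include <stdint.h>

    typedef uint8_t __ticket_t;
    #define TICKET_LOCK_INC      ((__ticket_t)2)
    #define TICKET_SLOWPATH_FLAG ((__ticket_t)1)

    struct ticket_lock { __ticket_t head, tail; };

    void unlock_kick(struct ticket_lock *lock, __ticket_t next);   /* pvop stand-in */

    static void ticket_unlock(struct ticket_lock *lock)
    {
        /* 1. Release first: bump head so the next ticket holder can proceed.
         *    SEQ_CST makes this a locked RMW, i.e. a full barrier. */
        __ticket_t next = __atomic_add_fetch(&lock->head, TICKET_LOCK_INC,
                                             __ATOMIC_SEQ_CST);

        /* 2. Only then look at the flag.  Checking before the release would
         *    race with a locker that enters the slowpath in between and is
         *    then never kicked. */
        if (__atomic_load_n(&lock->tail, __ATOMIC_RELAXED) & TICKET_SLOWPATH_FLAG)
            unlock_kick(lock, next);
    }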
On 09/02/2011 08:38 AM, Linus Torvalds wrote:
> On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge wrote:
>> The inner part of ticket lock code becomes:
>>    inc = xadd(&lock->tickets, inc);
>>    inc.tail &= ~TICKET_SLOWPATH_FLAG;
>>
>>
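Filled out, the quoted fragment corresponds to a fast path of roughly this shape (illustrative userspace C, assuming the little-endian layout used on x86; __atomic_fetch_add stands in for the kernel's xadd()):

    #include <stdint.h>

    typedef uint8_t __ticket_t;
    #define TICKET_LOCK_INC      ((__ticket_t)2)
    #define TICKET_SLOWPATH_FLAG ((__ticket_t)1)

    typedef union {
        uint16_t head_tail;                        /* both tickets as one word */
        struct { __ticket_t head, tail; } t;       /* head is the low byte on x86 */
    } arch_spinlock_t;

    static void ticket_lock_fastpath(arch_spinlock_t *lock)
    {
        arch_spinlock_t inc = { .t = { .head = 0, .tail = TICKET_LOCK_INC } };

        /* One xadd grabs our ticket and bumps the tail in a single atomic op. */
        inc.head_tail = __atomic_fetch_add(&lock->head_tail, inc.head_tail,
                                           __ATOMIC_ACQUIRE);
        inc.t.tail &= ~TICKET_SLOWPATH_FLAG;       /* our ticket, flag bit stripped */

        while (__atomic_load_n(&lock->t.head, __ATOMIC_ACQUIRE) != inc.t.tail)
            ;                                      /* the real code drops to the slowpath */
    }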
On 09/02/2011 01:27 PM, Linus Torvalds wrote:
> On Fri, Sep 2, 2011 at 1:07 PM, Jeremy Fitzhardinge wrote:
>> I don't know whether that fastpath code is small enough to consider
>> inlining everywhere?
> No.
>
> There's no point in inlining something that
On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>> I know that its generally considered bad form, but there's at least one
>>> spinlock that's only taken from NMI context and thus hasn't got any
From: Jeremy Fitzhardinge
The code size expands somewhat, and it's probably better to just call
a function rather than inline it.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/Kconfig |3 +++
kernel/Kconfig.locks |2 +-
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +
From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice
From: Jeremy Fitzhardinge
Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit. This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU syste
From: Jeremy Fitzhardinge
Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks. The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie
From: Jeremy Fitzhardinge
If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/spinlock.c | 42 +++---
1 files changed, 35 insertions(+), 7
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops
From: Jeremy Fitzhardinge
[ Changes since last posting:
- fold all the cleanup/bugfix patches into their base patches
- change spin_lock to make sure fastpath has no cruft in it
- make sure it doesn't attempt to inline unlock
]
NOTE: this series is based on tip.git tip/x86/spinlocks
On 09/06/2011 08:14 AM, Don Zickus wrote:
> On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
>> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
>>> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>>>> I know that its generally co
On 09/06/2011 11:27 AM, Don Zickus wrote:
>> But on the other hand, I don't really care if you can say that this path
>> will never be called in a virtual machine.
> Does virtual machines support hot remove of cpus? Probably not
> considering bare-metal barely supports it.
The only reason you'd w
On 09/02/2011 04:22 AM, Stefano Stabellini wrote:
> do you have a git tree somewhere with this series?
git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
J
On 09/07/2011 10:09 AM, Avi Kivity wrote:
> On 09/07/2011 07:52 PM, Don Zickus wrote:
>> >
>> > May I ask how? Detecting a back-to-back NMI?
>>
>> Pretty boring actually. Currently we execute an NMI handler until
>> one of
>> them returns handled. Then we stop. This may cause us to miss an
>>
On 09/07/2011 10:41 AM, Avi Kivity wrote:
>> Hm, I'm interested to know what you're thinking in more detail. Can you
>> leave an NMI pending before you block in the same way you can with
>> "sti;halt" with normal interrupts?
>
>
> Nope. But you can do
>
>    if (regs->rip in critical section)
>
On 09/08/2011 12:51 AM, Avi Kivity wrote:
> On 09/07/2011 10:09 PM, Jeremy Fitzhardinge wrote:
>> On 09/07/2011 10:41 AM, Avi Kivity wrote:
>> >> Hm, I'm interested to know what you're thinking in more detail.
>> Can you
>> >> leave an NMI pend
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +
From: Jeremy Fitzhardinge
The note about partial registers is not really relevant now that we
rely on gcc to generate all the assembler.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h |4
1 files changed, 0 insertions(+), 4 deletions(-)
diff --git a/arch/x86
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice
From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the
From: Jeremy Fitzhardinge
Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks. The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie
From: Jeremy Fitzhardinge
[ Changes since last posting:
- fix bugs exposed by the cold light of testing
- make the "slow flag" read in unlock cover the whole lock
to force ordering WRT the unlock write
- when kicking on unlock, only look for the CPU *we* released
From: Stefano Stabellini
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/smp.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..bf958ce 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5
From: Jeremy Fitzhardinge
If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.
If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a
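The idea in kernel-flavoured sketch form (a simplified fragment, not the actual patch; irqs_disabled_flags() and the local_irq_* helpers are the usual kernel primitives, the blocking itself is elided):

    static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
    {
        unsigned long flags;

        local_irq_save(flags);                 /* protect this CPU's lock_waiting slot */
        /* ... fill in the per-cpu lock_waiting entry ... */

        if (!irqs_disabled_flags(flags))
            local_irq_enable();                /* caller was interruptible: stay so
                                                  while we block waiting to be kicked */

        /* ... block on the event channel until the holder kicks us ... */

        local_irq_disable();
        /* ... tear the lock_waiting entry back down ... */
        local_irq_restore(flags);
    }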
From: Jeremy Fitzhardinge
The code size expands somewhat, and it's probably better to just call
a function rather than inline it.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/Kconfig |3 +++
kernel/Kconfig.locks |2 +-
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops
From: Jeremy Fitzhardinge
Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit. This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU syste
On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
> On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
>> This series replaces the existing paravirtualized spinlock mechanism
>> with a paravirtualized ticketlock mechanism.
> [...]
>> The unlock code
On 09/28/2011 09:10 AM, Linus Torvalds wrote:
> On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich wrote:
>>> just use "lock xaddw" there too.
>> I'm afraid that's not possible, as that might carry from the low 8 bits
>> into the upper 8 ones, which must be avoided.
> Oh damn, you're right. So I guess t
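A small standalone demonstration of the carry problem Jan points out: incrementing the packed 16-bit lock word when the head byte is about to wrap corrupts the tail byte.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        union { uint16_t head_tail; struct { uint8_t head, tail; } t; } lock;

        lock.t.head = 0xfe;                    /* head one step from wrapping */
        lock.t.tail = 0x10;

        lock.head_tail += 2;                   /* a 16-bit "xaddw"-style increment */

        /* head wrapped to 0x00 as intended, but the carry bumped the tail */
        printf("head=%#x tail=%#x\n", lock.t.head, lock.t.tail);   /* head=0 tail=0x11 */
        return 0;
    }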
On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
> I have tested this and have not seen it fail on publicly released AMD
> systems. But as I have tried to point out, this does not mean it is
> safe to do in software, because future microarchitectures may have more
> capable forwarding engines.
S
On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
> On 09/28/2011 10:22 AM, Linus Torvalds wrote:
>> On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge wrote:
>>> Could do something like:
>>>
>>>    if (ticket->head >= 254)
>>>
On 09/28/2011 11:08 AM, Stephan Diestelhorst wrote:
> On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
>> On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
>>> On 09/28/2011 10:22 AM, Linus Torvalds wrote:
>>>> On Wed, Sep 28, 2011 at 9:47 AM, Je
On 09/28/2011 11:49 AM, Linus Torvalds wrote:
> But I don't care all *that* deeply. I do agree that the xaddw trick is
> pretty tricky. I just happen to think that it's actually *less* tricky
> than "read the upper bits separately and depend on subtle ordering
> issues with another writer that happ
From: Jeremy Fitzhardinge
If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.
If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a
From: Stefano Stabellini
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/smp.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4dec905..2d01aeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops
From: Jeremy Fitzhardinge
Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks. The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie
From: Jeremy Fitzhardinge
Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit. This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU syste
From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the
From: Jeremy Fitzhardinge
The code size expands somewhat, and it's probably better to just call
a function rather than inline it.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/Kconfig |3 +++
kernel/Kconfig.locks |2 +-
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice
From: Jeremy Fitzhardinge
[ Changes since last posting:
- Stephan Diestelhorst pointed out
that my old unlock code was unsound, and could lead to deadlocks
(at least in principle). The new unlock code is definitely sound,
but likely slower as it introduces a locked xadd; this
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +
From: Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/spinlock.c | 14 ++
1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..1e21c99 100644
--- a/arch/x86/xen/spinlock.c
+++ b
From: Jeremy Fitzhardinge
There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/smp.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/xen/
On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
> On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
>> Which certainly should *work*, but from a conceptual standpoint, isn't
>> it just *much* nicer to say "we actually know *exactly* what the upper
>> bits were".
> Well, we really d
On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
> However, it looks like locked xadd also has better performance: on
> my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> than locked xadd, so that pretty much settles it unless you think
> there
On 10/10/2011 07:01 AM, Stephan Diestelhorst wrote:
> On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
>> On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
>>> On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
>>>> On Wednesday 28
On 10/10/2011 12:32 AM, Ingo Molnar wrote:
> * Jeremy Fitzhardinge wrote:
>
>> On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
>>> However, it looks like locked xadd also has better performance: on
>>> my Sandybridge laptop (2 cores, 4 threads), the add+mfence
From: Jeremy Fitzhardinge
The code size expands somewhat, and it's probably better to just call
a function rather than inline it.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/Kconfig |3 +++
kernel/Kconfig.locks |2 +-
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops
From: Stefano Stabellini
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/smp.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4dec905..2d01aeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5
From: Jeremy Fitzhardinge
If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.
If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a
From: Jeremy Fitzhardinge
Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks. The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie
From: Jeremy Fitzhardinge
Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit. This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU syste
From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the
From: Jeremy Fitzhardinge
There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/smp.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/xen/
From: Jeremy Fitzhardinge
[ Changes since last posting:
- Use "lock add" for unlock operation rather than "lock xadd"; it is
equivalent to "add; mfence", but more efficient than both "lock
xadd" and "mfence".
I think this versio
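The two unlock flavours the changelog is weighing, as a userspace sketch (GCC typically emits "add; mfence" for the first and a single "lock add" for the second when the fetch-add result is unused; the real kernel code operates on the ticket head rather than a bare byte):

    #include <stdint.h>

    /* (a) plain add on the head, then an explicit full fence */
    static void unlock_add_mfence(volatile uint8_t *head)
    {
        *head += 2;
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
    }

    /* (b) one locked read-modify-write doing the add and the barrier at once */
    static void unlock_lock_add(volatile uint8_t *head)
    {
        __atomic_fetch_add(head, 2, __ATOMIC_SEQ_CST);
    }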
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +
From: Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/xen/spinlock.c | 14 ++
1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..1e21c99 100644
--- a/arch/x86/xen/spinlock.c
+++ b
On 10/13/2011 03:54 AM, Peter Zijlstra wrote:
> On Wed, 2011-10-12 at 17:51 -0700, Jeremy Fitzhardinge wrote:
>> This is is all unnecessary complication if you're not using PV ticket
>> locks, it also uses the jump-label machinery to use the standard
>> "add&q
On 10/14/2011 07:17 AM, Jason Baron wrote:
> On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
>> pvops is basically a collection of ordinary _ops structures full of
>> function pointers, but it has a layer of patching to help optimise it.
>> In the c
On 10/14/2011 11:38 AM, H. Peter Anvin wrote:
> On 10/14/2011 11:35 AM, Jason Baron wrote:
>> A nice featuer of jump labels, is that it allows the various branches
>> (currently we only support 2), to be written in c code (as opposed to asm),
>> which means you can write your code as you normally w
On 10/14/2011 11:35 AM, Jason Baron wrote:
> On Fri, Oct 14, 2011 at 10:02:35AM -0700, Jeremy Fitzhardinge wrote:
>> On 10/14/2011 07:17 AM, Jason Baron wrote:
>>> On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
>>>> pvops is basically a collecti
On 10/14/2011 11:37 AM, H. Peter Anvin wrote:
> On 10/14/2011 10:02 AM, Jeremy Fitzhardinge wrote:
>> Jump labels are essentially binary: you can use path A or path B. pvops
>> are multiway: there's no limit to the number of potential number of
>> paravirtualized hyper
On 10/24/2011 03:15 AM, Avi Kivity wrote:
> On 10/23/2011 09:07 PM, Raghavendra K T wrote:
>> Added configuration support to enable debug information
>> for KVM Guests in debugfs
>>
>> Signed-off-by: Srivatsa Vaddagiri
>> Signed-off-by: Suzuki Poulose
>> Signed-off-by: Raghavendra K T
>> --
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
> This patch extends Linux guests running on KVM hypervisor to support
> pv-ticketlocks. Very early during bootup, a paravirtualized KVM guest detects if
> the hypervisor has the required feature (KVM_FEATURE_WAIT_FOR_KICK) to support
> pv-ticketlocks. If s
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
> This patch extends Linux guests running on KVM hypervisor to support
> pv-ticketlocks. Very early during bootup, a paravirtualized KVM guest detects if
> the hypervisor has the required feature (KVM_FEATURE_WAIT_FOR_KICK) to support
> pv-ticketlocks. If s
On 10/26/2011 12:23 PM, Raghavendra K T wrote:
> On 10/26/2011 12:04 AM, Jeremy Fitzhardinge wrote:
>> On 10/23/2011 12:07 PM, Raghavendra K T wrote:
>>> This patch extends Linux guests running on KVM hypervisor to support
>>> +/*
>>> + * Setup pv_lock_ops to
On 10/26/2011 03:34 AM, Avi Kivity wrote:
> On 10/25/2011 08:24 PM, Raghavendra K T wrote:
>> So then do also you foresee the need for directed yield at some point,
>> to address LHP? provided we have good improvements to prove.
> Doesn't this patchset completely eliminate lock holder preemption?
On 01/19/2011 09:21 AM, Peter Zijlstra wrote:
> On Wed, 2011-01-19 at 22:42 +0530, Srivatsa Vaddagiri wrote:
>> Add two hypercalls to KVM hypervisor to support pv-ticketlocks.
>>
>> KVM_HC_WAIT_FOR_KICK blocks the calling vcpu until another vcpu kicks it or
>> it
>> is woken up because of an event
On 01/20/2011 03:42 AM, Srivatsa Vaddagiri wrote:
> On Wed, Jan 19, 2011 at 10:53:52AM -0800, Jeremy Fitzhardinge wrote:
>>> The reason for wanting this should be clear I guess, it allows PI.
>> Well, if we can expand the spinlock to include an owner, then all this
>>
On 01/20/2011 03:59 AM, Srivatsa Vaddagiri wrote:
>> At least in the Xen code, a current owner isn't very useful, because we
>> need the current owner to kick the *next* owner to life at release time,
>> which we can't do without some structure recording which ticket belongs
>> to which cpu.
> If w
On 01/22/2011 06:53 AM, Rik van Riel wrote:
> The main question that remains is whether the PV ticketlocks are
> a large enough improvement to also merge those. I expect they
> will be, and we'll see so in the benchmark numbers.
The pathological worst-case of ticket locks in a virtual environment
: Glauber Costa
> CC: Rik van Riel
> CC: Jeremy Fitzhardinge
> CC: Peter Zijlstra
> CC: Avi Kivity
> ---
> include/linux/sched.h |1 +
> kernel/sched.c| 41 +
> 2 files changed, 42 insertions(+), 0 deletions(-)
>
>
On 05/01/2012 03:59 AM, Peter Zijlstra wrote:
> On Tue, 2012-05-01 at 12:57 +0200, Peter Zijlstra wrote:
>> Anyway, I don't have any idea about the costs involved with
>> HAVE_RCU_TABLE_FREE, but I don't think its much.. otherwise these other
>> platforms (PPC,SPARC) wouldn't have used it, gup_fast
On 05/07/2012 06:49 AM, Avi Kivity wrote:
> On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
>> * Raghavendra K T [2012-05-07 19:08:51]:
>>
>>> I 'll get hold of a PLE mc and come up with the numbers soon. but I
>>> 'll expect the improvement around 1-3% as it was in last version.
>> Deferring p
On 05/13/2012 11:45 AM, Raghavendra K T wrote:
> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>
> I could not come with pv-flush results (also Nikunj had clarified that
> the result was on NOn PLE
>
>> I'd like to see those numbers, then.
>>
>> Ingo, please hold on the kvm-specific patches, meanwhile.
On 04/16/2012 09:36 AM, Ian Campbell wrote:
> On Mon, 2012-04-16 at 16:44 +0100, Konrad Rzeszutek Wilk wrote:
>> On Sat, Mar 31, 2012 at 09:37:45AM +0530, Srivatsa Vaddagiri wrote:
>>> * Thomas Gleixner [2012-03-31 00:07:58]:
>>>
I know that Peter is going to go berserk on me, but if we are r