Signed-off-by: Raghavendra K T
Noting the pause loop exited vcpu helps in filtering the right candidate to yield to.
Yielding to the same vcpu may result in more wastage of cpu time.
From: Raghavendra K T
---
arch/x86/include/asm/kvm_host.h |  7 +++
arch/x86/kvm/svm.c              |  1 +
arch/x86/kvm
From: Raghavendra K T
Currently the PLE handler can repeatedly do a directed yield to the same vcpu
that has recently done a PL exit. This can degrade performance.
Try to yield to the most eligible guy instead, by alternate yielding.
Precisely, give chance to a VCPU which has:
(a) Not done PLE exit at
kernbench 1x: 4 fast runs = 12 runs avg
kernbench 2x: 4 fast runs = 12 runs avg
sysbench 1x: 8 runs avg
sysbench 2x: 8 runs avg
ebizzy 1x: 8 runs avg
ebizzy 2x: 8 runs avg
Thanks Vatsa and Srikar for brainstorming discussions regarding
optimizations.
Raghavendra K T (2):
On 07/09/2012 11:50 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
Noting pause loop exited vcpu helps in filtering right candidate to yield.
Yielding to same vcpu may result in more wastage of cpu.
From: Raghavendra K T
---
Oops. Sorry, somehow the sign-off and from got interchanged.
On 07/09/2012 01:25 PM, Christian Borntraeger wrote:
On 09/07/12 08:20, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random VCPU on PL exit. Though we already have filtering while choosing
the candidate to yield_to, we can do better.
Problem is
On 07/10/2012 03:17 AM, Andrew Theurer wrote:
> On Mon, 2012-07-09 at 11:50 +0530, Raghavendra K T wrote:
>> Currently Pause Loop Exit (PLE) handler is doing directed yield to a
>> random VCPU on PL exit. Though we already have filtering while choosing
>> the candidate to yield_to, we can do better.
On 07/10/2012 03:17 AM, Andrew Theurer wrote:
On Mon, 2012-07-09 at 11:50 +0530, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random VCPU on PL exit. Though we already have filtering while choosing
the candidate to yield_to, we can do better.
On 07/10/2012 04:09 AM, Rik van Riel wrote:
On 07/09/2012 02:20 AM, Raghavendra K T wrote:
@@ -484,6 +484,13 @@ struct kvm_vcpu_arch {
 		u64 length;
 		u64 status;
 	} osvw;
+
+	/* Pause loop exit optimization */
+	struct {
+		bool pause_loop_exited;
+		bool dy_eligible;
+	} plo;
I know kvm_vcpu_arch
On 07/10/2012 04:00 AM, Rik van Riel wrote:
On 07/09/2012 02:20 AM, Raghavendra K T wrote:
+bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *vcpu)
+{
+	bool eligible;
+
+	eligible = !vcpu->arch.plo.pause_loop_exited ||
+		   (vcpu->arch.plo.pause_loop_exited &&
On 07/10/2012 03:17 AM, Andrew Theurer wrote:
On Mon, 2012-07-09 at 11:50 +0530, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random VCPU on PL exit. Though we already have filtering while choosing
the candidate to yield_to, we can do better.
Hi
28 % -0.04 % 105 %
2x: 7 % 0.83 % 26 %
---
Link for V1 (it also has the results):
https://lkml.org/lkml/2012/7/9/32
Raghavendra K T (2):
kvm vcpu: Note down pause loop exit
kvm PLE handler: Choose better candidate for directed yield
arch/s390/inclu
From: Raghavendra K T
Noting the pause loop exited vcpu helps in filtering the right candidate to yield to.
Yielding to the same vcpu may result in more wastage of cpu time.
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/kvm_host.h | 11 +++
arch/x86/kvm/svm.c              |  1 +
arch/x86
From: Raghavendra K T
Currently the PLE handler can repeatedly do a directed yield to the same vcpu
that has recently done a PL exit. This can degrade performance.
Try to yield to the most eligible guy instead, by alternate yielding.
Precisely, give chance to a VCPU which has:
(a) Not done PLE exit at
On 07/11/2012 02:20 AM, Ingo Molnar wrote:
* Raghavendra K T wrote:
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1595,6 +1595,9 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
continue;
if (waitqueue_active(&vcpu
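Roughly, the hunk above adds a third skip test to the candidate loop in kvm_vcpu_on_spin(); a minimal sketch of the intent (the helper name is taken from later versions of the series, not this exact hunk):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;
		if (waitqueue_active(&vcpu->wq))
			continue;	/* already sleeping, not spinning */
		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
			continue;	/* it just PLE-exited itself; skip it this round */
		if (kvm_vcpu_yield_to(vcpu) > 0)
			break;
	}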
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
Noting pause loop exited vcpu helps in filtering right candidate to yield.
Yielding to same vcpu may result in more wastage of cpu.
struct kvm_lpage_info {
diff --git a
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several heuristi
On 07/11/2012 04:48 PM, Avi Kivity wrote:
On 07/11/2012 01:52 PM, Raghavendra K T wrote:
On 07/11/2012 02:23 PM, Avi Kivity wrote:
On 07/09/2012 09:20 AM, Raghavendra K T wrote:
Signed-off-by: Raghavendra K T
Noting pause loop exited vcpu helps in filtering right candidate to
yield.
Yielding
On 07/11/2012 05:25 PM, Christian Borntraeger wrote:
On 11/07/12 13:51, Raghavendra K T wrote:
Almost all s390 kernels use diag9c (directed yield to a given guest cpu) for
spinlocks, though.
Perhaps x86 should copy this.
See arch/s390/lib/spinlock.c
The basic idea is using several
On 07/11/2012 05:21 PM, Raghavendra K T wrote:
On 07/11/2012 03:47 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
So there is no win here, but there are other cases where diag44 is
used, e.g. cpu_relax.
I have to double check with others, if these cases are
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO workload, so that's just incredible to see so much
On 07/11/2012 07:29 PM, Raghavendra K T wrote:
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this case
1/2 of them), ~50% of their time is in kernel and ~43% in guest. This
is for a no-IO
On 07/11/2012 05:09 PM, Avi Kivity wrote:
On 07/11/2012 02:18 PM, Christian Borntraeger wrote:
On 11/07/12 13:04, Avi Kivity wrote:
On 07/11/2012 01:17 PM, Christian Borntraeger wrote:
On 11/07/12 11:06, Avi Kivity wrote:
[...]
Almost all s390 kernels use diag9c (directed yield to a given gue
On 07/12/2012 01:45 PM, Avi Kivity wrote:
On 07/11/2012 05:01 PM, Raghavendra K T wrote:
On 07/11/2012 07:29 PM, Raghavendra K T wrote:
On 07/11/2012 02:30 PM, Avi Kivity wrote:
On 07/10/2012 12:47 AM, Andrew Theurer wrote:
For the cpu threads in the host that are actually active (in this
On 07/12/2012 01:41 PM, Avi Kivity wrote:
On 07/12/2012 08:11 AM, Raghavendra K T wrote:
Ah, I thought you objected to the CONFIG var. Maybe call it
cpu_relax_intercepted since that's the linuxy name for the instruction.
Ok, just to be on the same page, I'll have:
1. cpu_relax_i
On 07/12/2012 04:28 PM, Nikunj A Dadhania wrote:
On Wed, 11 Jul 2012 16:22:29 +0530, Raghavendra K
T wrote:
On 07/11/2012 02:23 PM, Avi Kivity wrote:
This adds some tiny overhead to vcpu entry. You could remove it by
using the vcpu->requests mechanism to clear the flag, since
v
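The suggestion is to piggyback on the request bits that vcpu entry already checks, so the clear costs nothing on the common path. A rough sketch, with a hypothetical request bit (the real name and number would be picked in kvm_host.h):

	#define KVM_REQ_CLEAR_PLE	31	/* hypothetical request bit */

	/* On a pause-loop exit, mark the vcpu and queue a request: */
	vcpu->arch.plo.pause_loop_exited = true;
	kvm_make_request(KVM_REQ_CLEAR_PLE, vcpu);

	/* In vcpu_enter_guest(), clear it only when the request fires: */
	if (kvm_check_request(KVM_REQ_CLEAR_PLE, vcpu))
		vcpu->arch.plo.pause_loop_exited = false;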
ebizzy improves by around 87%, 23% for 1x,2x
Links
V1: https://lkml.org/lkml/2012/7/9/32
V2: https://lkml.org/lkml/2012/7/10/392
Raghavendra K T (3):
config: Add config to support ple or cpu relax optimization
kvm : Note down when cpu relax intercepted or pause loop exited
kvm : Choose a better can
From: Raghavendra K T
Signed-off-by: Raghavendra K T
---
arch/s390/kvm/Kconfig |  1 +
arch/x86/kvm/Kconfig  |  1 +
virt/kvm/Kconfig      |  3 +++
3 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig
index 78eb984..a6e2677
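Judging from the diffstat and the CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT guard quoted later in the thread, the plumbing is presumably just a hidden bool selected by each arch; a sketch, with placement and wording assumed:

	# virt/kvm/Kconfig
	config HAVE_KVM_CPU_RELAX_INTERCEPT
	       bool

	# arch/x86/kvm/Kconfig (likewise arch/s390/kvm/Kconfig)
	config KVM
	       ...
	       select HAVE_KVM_CPU_RELAX_INTERCEPT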
From: Raghavendra K T
Noting a pause loop exited or cpu relax intercepted vcpu helps in
filtering the right candidate to yield to. Wrong selection of a vcpu,
i.e., one that just did a pl-exit or had cpu relax intercepted, may
contribute to performance degradation.
Signed-off-by: Raghavendra K T
---
v2
From: Raghavendra K T
Currently, on large vcpu guests, there is a high probability of
yielding to the same vcpu that had recently done a pause-loop exit or
had cpu relax intercepted. Such a yield can lead to the vcpu spinning
again and hence degrade performance.
The patchset keeps track of the
On 07/13/2012 12:47 AM, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random vcpu on pl-exit. We already have filtering while choosing
the candidate to yield_to. This change adds more checks while choosing
a candidate to yield_to.
On a large vcpu
On 07/13/2012 01:32 AM, Christian Borntraeger wrote:
On 12/07/12 21:18, Raghavendra K T wrote:
+#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
[...]
+	struct {
+		bool cpu_relax_intercepted;
+		bool dy_eligible;
+	} ple;
+#endif
[...]
}
vcpu
On 07/13/2012 11:43 AM, Christian Borntraeger wrote:
On 13/07/12 05:35, Raghavendra K T wrote:
maybe define static inline access functions in kvm_host.h that are no-ops
if CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT is not set.
As I already said, can you have a look at using access functions?
Yes
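Something along these lines would presumably do; the names are illustrative and the fields match the ple struct quoted above:

	#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
	static inline void kvm_vcpu_set_cpu_relax_intercepted(struct kvm_vcpu *vcpu,
							      bool val)
	{
		vcpu->ple.cpu_relax_intercepted = val;
	}

	static inline void kvm_vcpu_set_dy_eligible(struct kvm_vcpu *vcpu, bool val)
	{
		vcpu->ple.dy_eligible = val;
	}
	#else
	/* No-ops when the arch does not intercept cpu relax / pause loops. */
	static inline void kvm_vcpu_set_cpu_relax_intercepted(struct kvm_vcpu *vcpu,
							      bool val)
	{
	}

	static inline void kvm_vcpu_set_dy_eligible(struct kvm_vcpu *vcpu, bool val)
	{
	}
	#endif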
On 07/13/2012 07:24 PM, Srikar Dronamraju wrote:
On 12/07/12 21:18, Raghavendra K T wrote:
+#ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
[...]
+	struct {
+		bool cpu_relax_intercepted;
+		bool dy_eligible;
+	} ple;
+#endif
[...]
}
vcpu
ves by around 30%, 6% for 1x,2x respectively
ebizzy improves by around 87%, 23% for 1x,2x respectively
Note: The patches are tested on x86.
Links
V1: https://lkml.org/lkml/2012/7/9/32
V2: https://lkml.org/lkml/2012/7/10/392
V3: https://lkml.org/lkml/2012/7/12/437
Raghavendra
From: Raghavendra K T
Noting a pause loop exited or cpu relax intercepted vcpu helps in
filtering the right candidate to yield to. Wrong selection of a vcpu,
i.e., one that just did a pl-exit or had cpu relax intercepted, may
contribute to performance degradation.
Signed-off-by: Raghavendra K T
---
V2 was
From: Raghavendra K T
Suggested-by: Avi Kivity
Signed-off-by: Raghavendra K T
---
arch/s390/kvm/Kconfig |  1 +
arch/x86/kvm/Kconfig  |  1 +
virt/kvm/Kconfig      |  3 +++
3 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig
From: Raghavendra K T
Currently, on large vcpu guests, there is a high probability of
yielding to the same vcpu that had recently done a pause-loop exit or
had cpu relax intercepted. Such a yield can lead to the vcpu spinning
again and hence degrade performance.
The patchset keeps track of the
On 07/16/2012 09:40 PM, Rik van Riel wrote:
On 07/16/2012 06:07 AM, Avi Kivity wrote:
+{
+	bool eligible;
+
+	eligible = !vcpu->ple.cpu_relax_intercepted ||
+		   (vcpu->ple.cpu_relax_intercepted &&
+		    vcpu->ple.dy_eligible);
+
+	if (vcpu->ple.cpu_relax_intercepted)
+		vcpu->ple.dy_eligible = !vcpu->p
On 07/16/2012 03:31 PM, Avi Kivity wrote:
On 07/16/2012 11:25 AM, Raghavendra K T wrote:
From: Raghavendra K T
Noting pause loop exited vcpu or cpu relax intercepted helps in
filtering right candidate to yield. Wrong selection of vcpu;
i.e., a vcpu that just did a pl-exit or cpu relax
On 07/16/2012 03:37 PM, Avi Kivity wrote:
On 07/16/2012 11:25 AM, Raghavendra K T wrote:
From: Raghavendra K T
Currently, on a large vcpu guests, there is a high probability of
yielding to the same vcpu who had recently done a pause-loop exit or
cpu relax intercepted. Such a yield can lead to
On 07/17/2012 01:52 PM, Avi Kivity wrote:
On 07/16/2012 08:24 PM, Raghavendra K T wrote:
So are you saying allow vcpu to spin in non over-commit scenarios? So
that we avoid all yield_to etc...
( Or even in some other place where it is useful).
When is yielding useful, if you'r
On 07/17/2012 01:59 PM, Avi Kivity wrote:
On 07/16/2012 07:10 PM, Rik van Riel wrote:
On 07/16/2012 06:07 AM, Avi Kivity wrote:
+{
+	bool eligible;
+
+	eligible = !vcpu->ple.cpu_relax_intercepted ||
+		   (vcpu->ple.cpu_relax_intercepted &&
+		    vcpu->ple.dy_eligible);
+
+
On 07/17/2012 02:39 PM, Raghavendra K T wrote:
[...]
But
if vcpu A is spinning for x% of its time and processing on the other,
then vcpu B will flip its dy_eligible for those x%, and not flip it when
it's processing. I don't understand how this is useful.
Suppose A is doing reall
V3: https://lkml.org/lkml/2012/7/12/437
V2: https://lkml.org/lkml/2012/7/10/392
V1: https://lkml.org/lkml/2012/7/9/32
Raghavendra K T (3):
config: Add config to support ple or cpu relax optimization
kvm : Note down when cpu relax intercepted or pause loop exited
kvm : Choose a bette
From: Raghavendra K T
Suggested-by: Avi Kivity
Signed-off-by: Raghavendra K T
---
arch/s390/kvm/Kconfig |  1 +
arch/x86/kvm/Kconfig  |  1 +
virt/kvm/Kconfig      |  3 +++
3 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig
From: Raghavendra K T
Currently, on large vcpu guests, there is a high probability of
yielding to the same vcpu that had recently done a pause-loop exit or
had cpu relax intercepted. Such a yield can lead to the vcpu spinning
again and hence degrade performance.
The patchset keeps track of the
From: Raghavendra K T
Noting a pause loop exited or cpu relax intercepted vcpu helps in
filtering the right candidate to yield to. Wrong selection of a vcpu,
i.e., one that just did a pl-exit or had cpu relax intercepted, may
contribute to performance degradation.
Signed-off-by: Raghavendra K T
---
V2 was
On 07/18/2012 07:08 PM, Raghavendra K T wrote:
From: Raghavendra K T
+bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
+{
+ bool eligible;
+
+ eligible = !vcpu->spin_loop.in_spin_loop ||
+ (vcpu->spin_loop.in_spi
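Piecing the truncated snippets in this thread together, the check plus the alternating flip reads roughly as follows (a reconstruction, not the verbatim hunk):

	bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
	{
		bool eligible;

		/* Eligible if it was not spinning, or if it was spinning but its
		 * turn to be considered has come around again (dy_eligible). */
		eligible = !vcpu->spin_loop.in_spin_loop ||
			   (vcpu->spin_loop.in_spin_loop &&
			    vcpu->spin_loop.dy_eligible);

		/* Alternate: flip the flag so a spinning vcpu is skipped on one
		 * pass and considered again on the next. */
		if (vcpu->spin_loop.in_spin_loop)
			vcpu->spin_loop.dy_eligible = !vcpu->spin_loop.dy_eligible;

		return eligible;
	}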
From: Raghavendra K T
Currently, on large vcpu guests, there is a high probability of
yielding to the same vcpu that had recently done a pause-loop exit or
had cpu relax intercepted. Such a yield can lead to the vcpu spinning
again and hence degrade performance.
The patchset keeps track of the
(next eligible lock holder)
Signed-off-by: Raghavendra K T
---
V2 was:
Reviewed-by: Rik van Riel
Changelog: Added comment on locking as suggested by Avi
include/linux/kvm_host.h |  5 +
virt/kvm/kvm_main.c      | 42 ++
2 files changed, 47
On 07/20/2012 11:06 PM, Marcelo Tosatti wrote:
On Wed, Jul 18, 2012 at 07:07:17PM +0530, Raghavendra K T wrote:
Currently Pause Loop Exit (PLE) handler is doing directed yield to a
random vcpu on pl-exit. We already have filtering while choosing
the candidate to yield_to. This change adds more
From: Raghavendra K T
Thanks Alex for KVM_HC_FEATURES inputs and Jan for VAPIC_POLL_IRQ,
and Peter (HPA) for suggesting hypercall ABI addition.
Signed-off-by: Raghavendra K T
---
Please have a closer look at the newly added hypercall ABI
Changes since last post:
- Added hypercall ABI (Peter
On 07/24/2012 05:43 PM, Alexander Graf wrote:
On 07/24/2012 10:53 AM, Raghavendra K T wrote:
From: Raghavendra K T
Thanks Alex for KVM_HC_FEATURES inputs and Jan for VAPIC_POLL_IRQ,
and Peter (HPA) for suggesting hypercall ABI addition.
Signed-off-by: Raghavendra K T
---
Please have a
On 08/01/2012 08:37 AM, Marcelo Tosatti wrote:
On Tue, Jul 24, 2012 at 02:23:59PM +0530, Raghavendra K T wrote:
From: Raghavendra K T
Thanks Alex for KVM_HC_FEATURES inputs and Jan for VAPIC_POLL_IRQ,
and Peter (HPA) for suggesting hypercall ABI addition.
Signed-off-by: Raghavendra K T
On 08/01/2012 11:55 PM, Marcelo Tosatti wrote:
On Wed, Aug 01, 2012 at 04:19:01PM +0530, Raghavendra K T wrote:
On 08/01/2012 08:37 AM, Marcelo Tosatti wrote:
On Tue, Jul 24, 2012 at 02:23:59PM +0530, Raghavendra K T wrote:
From: Raghavendra K T
Thanks Alex for KVM_HC_FEATURES inputs and Jan
post:
- Added hypercall ABI (Peter)
- made KVM_HC_VAPIC_POLL_IRQ active explicitly (Randy)
- Changed vmrun/vmmrun ==> vmcall/vmmcall (Marcelo)
- use Linux KVM hypercall instead of ABI (Marcelo)
- correct PowerPC typo (Alex)
- Remove value field (Alex)
Raghavendra K T (2):
Documentation/
From: Raghavendra K T
Thanks Alex for KVM_HC_FEATURES inputs and Jan for VAPIC_POLL_IRQ,
and Peter (HPA) for suggesting hypercall ABI addition.
Signed-off-by: Raghavendra K T
---
TODO: We need to add history details of each hypercall as suggested by HPA,
which I could not trace easily. Hope it
From: Alexander Graf
Signed-off-by: Alexander Graf
Signed-off-by: Raghavendra K T
---
Documentation/virtual/kvm/ppc-pv.txt | 22 ++
1 files changed, 22 insertions(+), 0 deletions(-)
diff --git a/Documentation/virtual/kvm/ppc-pv.txt
b/Documentation/virtual/kvm/ppc
From: Raghavendra K T
Signed-off-by: Raghavendra K T
---
arch/x86/include/asm/kvm_para.h |  2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 2f7712e..20f5697 100644
--- a/arch/x86/include/asm
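For context, the comment being fixed sits next to the hypercall stub in kvm_para.h; roughly, from memory rather than the exact hunk (0f 01 c1 is vmcall; on AMD the host side patches it to vmmcall):

	/* kvm hypercall uses the vmcall instruction (0f 01 c1); on AMD parts
	 * the host patches it to vmmcall on first use. */
	#define KVM_HYPERCALL ".byte 0x0f,0x01,0xc1"

	static inline long kvm_hypercall0(unsigned int nr)
	{
		long ret;
		asm volatile(KVM_HYPERCALL
			     : "=a"(ret)
			     : "a"(nr)
			     : "memory");
		return ret;
	}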
On 08/07/2012 01:10 PM, Raghavendra K T wrote:
From: Alexander Graf
Signed-off-by: Alexander Graf
Signed-off-by: Raghavendra K T
---
Sorry, it was meant to be 3/3.
On 08/07/2012 01:09 PM, Raghavendra K T wrote:
This is the hypercall documentation patch series
First patch covers KVM specific hypercall information.
Second patch has a typo fix for the vmcall instruction
comment in kvm_para.h
Third patch includes very useful documentation on PowerPC
hypercalls
On 08/10/2012 12:01 AM, Marcelo Tosatti wrote:
On Tue, Aug 07, 2012 at 01:09:46PM +0530, Raghavendra K T wrote:
This is the hypercall documentation patch series
First patch covers KVM specific hypercall information.
Second patch has a typo fix for the vmcall instruction
comment in kvm_para.h
(after wrapping)
Thanks Nikunj for his quick verification of the patch.
Please let me know if this patch is interesting and makes sense.
8<
From: Raghavendra K T
Currently we use the vcpu next to the last boosted vcpu as the starting point
while deciding the eligible vcpu for directed yield.
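Schematically, the current starting-point logic in kvm_vcpu_on_spin() behaves like this (indices simplified; a sketch of the behaviour described above, not the exact code):

	void kvm_vcpu_on_spin(struct kvm_vcpu *me)
	{
		struct kvm *kvm = me->kvm;
		int nr_vcpus = atomic_read(&kvm->online_vcpus);
		int last = kvm->last_boosted_vcpu;
		int i;

		/* Scan starts at the vcpu after the one boosted last time, so
		 * repeated PLE exits spread the yield_to targets around. */
		for (i = 1; i <= nr_vcpus; i++) {
			int idx = (last + i) % nr_vcpus;
			struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, idx);

			if (!vcpu || vcpu == me)
				continue;
			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
				continue;
			if (kvm_vcpu_yield_to(vcpu) > 0) {
				kvm->last_boosted_vcpu = idx;
				break;
			}
		}
	}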
On 09/02/2012 09:59 PM, Rik van Riel wrote:
On 09/02/2012 06:12 AM, Gleb Natapov wrote:
On Thu, Aug 30, 2012 at 12:51:01AM +0530, Raghavendra K T wrote:
The idea of starting from the next vcpu (source of yield_to + 1) seems to work
well for overcommitted guests rather than using the last boosted vcpu. We
66.101 (119.313) 160.056 (117.446) 3.63935
case 2x: 167.421 (120.767) 158.133 (115.022) 5.54769
case 3x: 169.317 (122.088) 159.353 (116.737) 5.88482
Srivatsa Vaddagiri, Suzuki Poulose, Raghavendra K T (5):
Add debugfs support to print u32-ar
Renaming of xen functions and change unsigned to u32.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index fc506e6..14a8961 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86
Add debugfs support to print u32-arrays in debugfs. Move the code from Xen
to debugfs to make the code common for other users as well.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/xen/debugfs.c b/arch/x86/xen
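Assuming the generic helper keeps the shape of the Xen one being moved, a hypothetical usage sketch (names and signature are not guaranteed to match the final patch; the example arguments come from the Xen spinlock stats code):

	/* Generic helper, as moved out of arch/x86/xen/debugfs.c: */
	struct dentry *debugfs_create_u32_array(const char *name, umode_t mode,
						struct dentry *parent,
						u32 *array, u32 elements);

	/* A caller can then export a whole histogram with one call, e.g.: */
	debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
				 spinlock_stats.histo_spin_blocked,
				 HISTO_BUCKETS + 1);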
Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 734c376..2874c19 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -16,12 +16,14 @@
#define
Added configuration support to enable debug information
for KVM Guests in debugfs
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1f03f82..ed34269 100644
--- a/arch/x86/Kconfig
+++ b
pv_lock_ops.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 2874c19..c7f34b7 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
On 10/24/2011 03:49 AM, Greg KH wrote:
On Mon, Oct 24, 2011 at 12:34:59AM +0530, Raghavendra K T wrote:
Renaming of xen functions and change unsigned to u32.
Why not just rename when you move the functions? Why the extra step?
The intention was only clarity. Yes, if this patch is an overhead, I
On 10/24/2011 03:50 AM, Greg KH wrote:
On Mon, Oct 24, 2011 at 12:34:04AM +0530, Raghavendra K T wrote:
Add debugfs support to print u32-arrays in debugfs. Move the code from Xen to
debugfs
to make the code common for other users as well.
You forgot the kerneldoc for the function explaining
On 10/24/2011 03:31 PM, Sasha Levin wrote:
On Mon, 2011-10-24 at 00:37 +0530, Raghavendra K T wrote:
+#else /* CONFIG_PARAVIRT_SPINLOCKS */
+#define kvm_guest_early_init() do { } while (0)
This should be defined as an empty function.
Yes, agree. I'll change it to an empty function.
-
On 10/24/2011 03:31 PM, Sasha Levin wrote:
On Mon, 2011-10-24 at 00:35 +0530, Raghavendra K T wrote:
Add two hypercalls to KVM hypervisor to support pv-ticketlocks.
+static void kvm_pv_kick_cpu_op(struct kvm *kvm, int cpu)
+{
+ struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, cpu);
+
+ if
On 10/24/2011 03:44 PM, Avi Kivity wrote:
On 10/23/2011 09:05 PM, Raghavendra K T wrote:
Add two hypercalls to KVM hypervisor to support pv-ticketlocks.
+
+end_wait:
+	finish_wait(&vcpu->wq, &wait);
+}
This hypercall can be replaced by a HLT instruction, no?
I'm pretty
On 10/24/2011 03:31 PM, Sasha Levin wrote:
On Mon, 2011-10-24 at 00:37 +0530, Raghavendra K T wrote:
Added configuration support to enable debug information
for KVM Guests in debugfs
+config KVM_DEBUG_FS
+ bool "Enable debug information for KVM Guests in debugfs"
+
On 10/24/2011 03:45 PM, Avi Kivity wrote:
On 10/23/2011 09:07 PM, Raghavendra K T wrote:
Added configuration support to enable debug information
for KVM Guests in debugfs
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86
On 10/24/2011 07:20 PM, Srivatsa Vaddagiri wrote:
* Avi Kivity [2011-10-24 15:09:25]:
I guess with that change, we can also drop the need for the other hypercall
introduced in this patch (kvm_pv_kick_cpu_op()). Essentially a vcpu sleeping
because of a HLT instruction can be woken up by an IPI issued b
On 10/26/2011 04:04 PM, Avi Kivity wrote:
On 10/25/2011 08:24 PM, Raghavendra K T wrote:
CCing Ryan also
So then do you also foresee the need for directed yield at some point,
to address LHP, provided we have good improvements to prove?
Doesn't this patchset completely eliminate lock h
On 10/26/2011 12:04 AM, Jeremy Fitzhardinge wrote:
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
This patch extends Linux guests running on KVM hypervisor to support
+/*
+ * Setup pv_lock_ops to exploit KVM_FEATURE_WAIT_FOR_KICK if present.
+ * This needs to be setup really early in boot
On 10/26/2011 12:05 AM, Jeremy Fitzhardinge wrote:
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+#ifdef CONFIG_KVM_DEBUG_FS
+
+#include
+
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
This is a big mess. Is there any
On 10/27/2011 01:16 AM, Jeremy Fitzhardinge wrote:
On 10/26/2011 12:23 PM, Raghavendra K T wrote:
On 10/26/2011 12:04 AM, Jeremy Fitzhardinge wrote:
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
our current aim was to have it before any printk happens.
So I'll trim the comment to somet
enabling 32 bit guests
- split patches into two more chunks
Srivatsa Vaddagiri, Suzuki Poulose, Raghavendra K T (4):
Add debugfs support to print u32-arrays in debugfs
Add a hypercall to KVM hypervisor to support pv-ticketlocks
Added configuration support to enable debug information for
Add debugfs support to print u32-arrays in debugfs. Move the code from Xen
to debugfs to make the code common for other users as well.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/xen/debugfs.c b/arch/x86/xen
Added configuration support to enable debug information
for KVM Guests in debugfs
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5d8152d..526e3ae 100644
--- a/arch/x86/Kconfig
+++ b
.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 8b1d65d..7e419ad 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -195,10
presence of this feature to
guest via cpuid. Patch to qemu will be sent separately.
There is no Xen/KVM hypercall interface to await kick from.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Suzuki Poulose
Signed-off-by: Raghavendra K T
---
diff --git a/arch/x86/include/asm/kvm_para.h b
[ CCing Jeremy's new email ID ]
Hi Avi,
Thanks for review and inputs.
On 12/01/2011 04:41 PM, Avi Kivity wrote:
On 11/30/2011 10:59 AM, Raghavendra K T wrote:
The hypercall needs to be documented in
Documentation/virtual/kvm/hypercalls.txt.
Yes, sure, I'll document it. hypercalls.tx
On 12/02/2011 01:20 AM, Raghavendra K T wrote:
+	struct kvm_mp_state mp_state;
+
+	mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
+	if (vcpu) {
+		vcpu->kicked = 1;
+		/* Ensure kicked is always set before wakeup */
+		barrier();
+	}
+	kvm_arch_vcpu_ioctl_set_mpstate(vcpu, &mp_state);
This mu
On 12/02/2011 01:20 AM, Raghavendra K T wrote:
Have you tested it on AMD machines? There are some differences in the
hypercall infrastructure there.
Yes. I'll test on an AMD machine and update on that.
I tested the code on a 64-bit Dual-Core AMD Opteron machine, and it is
working.
-
From: Raghavendra K T
The three-patch series following this extends the KVM hypervisor
and Linux guests running on the KVM hypervisor to support pv-ticket spinlocks.
PV ticket spinlocks help to solve the Lock Holder Preemption problem discussed in
http://www.amd64.org/fileadmin/user_upload/pub/LHP
Update the kvm kernel headers to the 3.2.0-rc1 level using the
scripts/update-linux-headers.sh script.
Signed-off-by: Raghavendra K T
---
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
index fb3fddc..08fe69e 100644
--- a/linux-headers/asm-powerpc/kvm.h
+++ b/linux
Update the kernel header that adds a hypercall to support pv-ticketlocks.
Signed-off-by: Raghavendra K T
---
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
index f2ac46a..03d3a36 100644
--- a/linux-headers/asm-x86/kvm_para.h
+++ b/linux-headers/asm-x86
Extend the KVM Hypervisor to enable KICK_VCPU feature that allows
a vcpu to kick the halted vcpu to continue with execution in PV ticket
spinlock.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Raghavendra K T
---
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 5bfc21f..69bce21
On 12/06/2011 08:57 AM, Konrad Rzeszutek Wilk wrote:
+static inline void add_stats(enum kvm_contention_stat var, int val)
You probably want 'int val' to be 'u32 val' as that is the type
in contention_stats.
Yes, thanks for pointing it out; as it's cumulative, it is indeed u32 in the #else
:). I'll ch
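With that change the accessor would read roughly as below, assuming contention_stats is an array of u32 counters indexed by the enum, as in the Xen code this mirrors:

	static inline void add_stats(enum kvm_contention_stat var, u32 val)
	{
		spinlock_stats.contention_stats[var] += val;
	}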
On 12/07/2011 04:18 PM, Marcelo Tosatti wrote:
On Wed, Nov 30, 2011 at 02:29:59PM +0530, Raghavendra K T wrote:
+/*
+ * kvm_pv_kick_cpu_op: Kick a vcpu.
+ *
+ * @cpu - vcpu to be kicked.
+ */
+static void kvm_pv_kick_cpu_op(struct kvm *kvm, int cpu)
+{
+ struct kvm_vcpu *vcpu
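Reconstructed from the fuller quote elsewhere in this archive (kicked is a flag added by the patch, not an existing field), the body is roughly:

	static void kvm_pv_kick_cpu_op(struct kvm *kvm, int cpu)
	{
		struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, cpu);
		struct kvm_mp_state mp_state;

		mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
		if (vcpu) {
			vcpu->kicked = 1;
			/* Ensure kicked is always set before wakeup */
			barrier();
		}
		/* note: the follow-up below calls this ordering racy and harmful */
		kvm_arch_vcpu_ioctl_set_mpstate(vcpu, &mp_state);
	}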
On 12/07/2011 06:03 PM, Marcelo Tosatti wrote:
On Wed, Dec 07, 2011 at 05:24:59PM +0530, Raghavendra K T wrote:
On 12/07/2011 04:18 PM, Marcelo Tosatti wrote:
Yes, you are right. It was potentially racy and it was harmful too!
I had observed that it was stalling the CPU before I introduced
On 12/07/2011 08:22 PM, Avi Kivity wrote:
On 12/07/2011 03:39 PM, Marcelo Tosatti wrote:
Also I think we can keep the kicked flag in vcpu->requests, no need for
new storage.
Was going to suggest it but it violates the currently organized
processing of entries at the beginning of vcpu_enter_gue
On 12/08/2011 03:10 PM, Avi Kivity wrote:
On 12/07/2011 06:46 PM, Raghavendra K T wrote:
On 12/07/2011 08:22 PM, Avi Kivity wrote:
On 12/07/2011 03:39 PM, Marcelo Tosatti wrote:
Also I think we can keep the kicked flag in vcpu->requests, no need
for
new storage.
Was going to suggest it
On 12/26/2011 07:37 PM, Avi Kivity wrote:
On 12/19/2011 04:11 PM, Jan Kiszka wrote:
Backwards compatibility
If we want backwards compatibility, we need more than just a simple feature
check, no? Oh, you feed that into CPUID? That's nifty. Ok, so you behave like
VMX/SVM do on real hardware -