in power states (which
we do not even emulate, IIRC). In that case, the OS is at risk of
sleeping forever and thus needs to look for a different wakeup source.
HPET will always be the default broadcast event device, I think.
Regards,
Wanpeng Li
Live-migration or VM pausing are external effects on al
e
saved (unlikely) or because the "init optimization" caused it to not be
saved. This patch kills the assumption.
Signed-off-by: Wanpeng Li
---
Note: this patch is against the latest Linus tree.
arch/x86/kvm/x86.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/
R_CRASH in this patchset. :(
Regards,
Wanpeng Li
requests.
Actually I see that more SIGUSR1 signals are intercepted by signal_pending()
in vcpu_enter_guest() and vcpu_run() with a win7 guest and kernel_irqchip=off.
Regards,
Wanpeng Li
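(For reference, the check being described has roughly this shape in the
upstream vcpu run loop; a sketch, not the exact code:)

	if (signal_pending(current)) {
		/* a pending signal kicks the vcpu out to userspace */
		r = -EINTR;
		vcpu->run->exit_reason = KVM_EXIT_INTR;
		++vcpu->stat.signal_exits;
	}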
Signed-off-by: Radim Krčmář
---
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 6 ++
include
.
Last week I tested the availability status every 5 minutes on my
Wordpress VMs with the halt_poll_ns kernel param set on DOM0. I'm
pleased to announce that it solves the problem!
How many seconds does it take to load your Wordpress site this time?
Regards,
Wanpeng Li
My guess is that a lot
A third new parameter, halt_poll_ns_max, controls the maximal halt_poll_ns;
it is internally rounded down to the closest multiple of halt_poll_ns_grow.
Wanpeng Li (3):
KVM: make halt_poll_ns per-VCPU
KVM: dynamic halt_poll_ns adjustment
KVM: trace kvm_halt_poll_ns grow/shrink
include/linux/kvm_host.h | 1 +
i
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8
2 files changed, 38 insertions(+)
diff --git a/include/trace/events/kvm.h b/include
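(A sketch of what such a tracepoint can look like; the field names and
printk format here are assumptions, not necessarily the posted patch:)

TRACE_EVENT(kvm_halt_poll_ns,
	TP_PROTO(bool grow, unsigned int vcpu_id, unsigned int new,
		 unsigned int old),
	TP_ARGS(grow, vcpu_id, new, old),

	TP_STRUCT__entry(
		__field(bool, grow)
		__field(unsigned int, vcpu_id)
		__field(unsigned int, new)
		__field(unsigned int, old)
	),

	TP_fast_assign(
		__entry->grow    = grow;
		__entry->vcpu_id = vcpu_id;
		__entry->new     = new;
		__entry->old     = old;
	),

	/* one event per potential change, tagged grow or shrink */
	TP_printk("vcpu %u: halt_poll_ns %u (%s %u)",
		  __entry->vcpu_id, __entry->new,
		  __entry->grow ? "grow" : "shrink", __entry->old)
);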
Change halt_poll_ns into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux
ink
A third new parameter, halt_poll_ns_max, controls the maximal halt_poll_ns;
it is internally rounded down to the closest multiple of halt_poll_ns_grow.
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 81 -
1 file changed, 80 insertions
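(Illustration of the round-down described above; a self-contained sketch
with made-up values, not the patch itself:)

#include <stdio.h>

/* Round halt_poll_ns_max down to the closest multiple of halt_poll_ns_grow. */
static unsigned int round_down_to_grow(unsigned int max, unsigned int grow)
{
	return grow ? max - (max % grow) : max;
}

int main(void)
{
	/* e.g. max = 500000ns, grow = 30000ns -> 480000ns */
	printf("%u\n", round_down_to_grow(500000, 30000));
	return 0;
}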
Hi David,
On 8/25/15 1:00 AM, David Matlack wrote:
On Mon, Aug 24, 2015 at 5:53 AM, Wanpeng Li wrote:
There are two new kernel parameters for changing the halt_poll_ns:
halt_poll_ns_grow and halt_poll_ns_shrink. halt_poll_ns_grow affects
halt_poll_ns when an interrupt arrives and
On 8/25/15 12:59 AM, David Matlack wrote:
On Mon, Aug 24, 2015 at 5:53 AM, Wanpeng Li wrote:
Change halt_poll_ns into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
You should also change kvm_vcpu_block to read halt_poll_ns from
the vcpu instead of the module
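(What David is asking for, sketched; the elided body is the existing
polling/sleeping logic:)

void kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
	ktime_t start = ktime_get();

	/* Poll for vcpu->halt_poll_ns instead of the module parameter. */
	if (vcpu->halt_poll_ns) {
		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
		/* ... poll kvm_vcpu_check_block() until 'stop' ... */
	}
	/* ... otherwise, or after polling fails, actually sleep ... */
}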
Change halt_poll_ns into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b
otherwise      | += halt_poll_ns_grow | -= halt_poll_ns_shrink
Wanpeng Li (3):
KVM: make halt_poll_ns per-VCPU
KVM: dynamic halt_poll_ns adjustment
KVM: trace kvm_halt_poll_ns grow/shrink
include/linux/kvm_host.h | 1 +
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8
2 files changed, 38 insertions(+)
diff --git a/include/trace/events/kvm.h b/include
           _ns | = 0
< halt_poll_ns | *= halt_poll_ns_grow | /= halt_poll_ns_shrink
otherwise      | += halt_poll_ns_grow | -= halt_poll_ns_shrink
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 65 -
1 file changed, 64 insertions
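(A self-contained sketch of the visible rows of the matrix above;
parameter values are illustrative, not the series' defaults:)

#include <stdio.h>

static unsigned int halt_poll_ns = 500000;	/* module-wide value */
static unsigned int halt_poll_ns_grow = 2;
static unsigned int halt_poll_ns_shrink = 2;

static unsigned int grow(unsigned int val)
{
	if (val < halt_poll_ns)
		val *= halt_poll_ns_grow;	/* multiply while below */
	else
		val += halt_poll_ns_grow;	/* add once at or above */
	return val;
}

static unsigned int shrink(unsigned int val)
{
	if (val < halt_poll_ns)
		val /= halt_poll_ns_shrink;
	else
		val -= halt_poll_ns_shrink;
	return val;
}

int main(void)
{
	printf("grow(10000) = %u, shrink(10000) = %u\n",
	       grow(10000), shrink(10000));	/* 20000, 5000 */
	return 0;
}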
On 8/25/15 11:42 PM, Hansa wrote:
On 24-8-2015 1:26, Wanpeng Li wrote:
On 8/24/15 3:18 AM, Hansa wrote:
On 16-7-2015 13:27, Paolo Bonzini wrote:
On 15/07/2015 22:02, "C. Bröcker" wrote:
What OS is this? Is it RHEL/CentOS? If so, halt_poll_ns will be
in 6.7
which will be out in
On 8/26/15 6:41 AM, Hansa wrote:
On 26-8-2015 0:33, Wanpeng Li wrote:
On the VM server I issued the command below every eleven minutes:
date >> curltest-file; \
top -b -n 1 | sed -n '7,12p' >> curltest-file; \
curl -o /dev/null -s -w "time_total: %{time_total}\n"
On 8/26/15 1:19 AM, David Matlack wrote:
Thanks for writing v2, Wanpeng.
On Mon, Aug 24, 2015 at 11:35 PM, Wanpeng Li wrote:
There is a downside of halt_poll_ns: polling still happens for idle
VCPUs, which can waste cpu usage. This patch adds the ability to adjust
halt_poll_ns dynamically
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8 ++--
2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/include/trace/events
_poll_ns_shrink. The shrink/grow matrix is
suggested by David:
if (polled successfully for an interrupt): stay the same
else if (length of kvm_vcpu_block is longer than halt_poll_ns_max): shrink
else if (length of kvm_vcpu_block is less than halt_poll_ns_max): grow
Wanpeng Li (3):
KVM:
matrix is
suggested by David:
if (polled successfully for an interrupt): stay the same
else if (length of kvm_vcpu_block is longer than halt_poll_ns_max): shrink
else if (length of kvm_vcpu_block is less than halt_poll_ns_max): grow
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 43
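(The matrix above rendered as code; grow_halt_poll_ns() and
shrink_halt_poll_ns() are stand-ins for the series' helpers, and
block_ns is the measured length of this kvm_vcpu_block() call:)

	if (polled_successfully_for_interrupt)
		;				/* stay the same */
	else if (block_ns > halt_poll_ns_max)
		shrink_halt_poll_ns(vcpu);	/* long halt: polling was wasted */
	else if (block_ns < halt_poll_ns_max)
		grow_halt_poll_ns(vcpu);	/* short halt: poll longer next time */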
Change halt_poll_ns into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b
Always halt-poll increases cpu usage for idle vCPUs by ~0.9%, and dynamic
halt-poll drops that to ~0.3%, removing about 67% of the overhead
introduced by always halt-poll.
Wanpeng Li (3):
KVM: make halt_poll_ns per-VCPU
KVM: dynamic halt_poll_ns adju
reduce the 67% overhead
introduced by always halt-poll.
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 41 -
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c06e57c..d63790d 100644
--- a
Hi David,
On 8/26/15 1:19 AM, David Matlack wrote:
Thanks for writing v2, Wanpeng.
On Mon, Aug 24, 2015 at 11:35 PM, Wanpeng Li wrote:
There is a downside of halt_poll_ns: polling still happens for idle
VCPUs, which can waste cpu usage. This patch adds the ability to adjust
halt_poll_ns
On 8/28/15 12:25 AM, David Matlack wrote:
On Thu, Aug 27, 2015 at 2:59 AM, Wanpeng Li wrote:
Hi David,
On 8/26/15 1:19 AM, David Matlack wrote:
Thanks for writing v2, Wanpeng.
On Mon, Aug 24, 2015 at 11:35 PM, Wanpeng Li
wrote:
There is a downside of halt_poll_ns since polling still happens
Hi Peter,
On 8/30/15 5:18 AM, Peter Kieser wrote:
Hi Wanpeng,
Do I need to set any module parameters to use your patch, or should
halt_poll_ns automatically tune with just your patch series applied?
You don't need any module parameters.
Regards,
Wanpeng Li
Thanks.
On 2015-08-27 2:
applied on your 3.18? Btw, did you test a linux guest?
Regards,
Wanpeng Li
qemu-system-x86_64 -enable-kvm -name arwan-20150704 -S -machine
pc-q35-2.2,accel=kvm,usb=off -cpu
Haswell,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 -m 8192
-realtime mlock=off -smp 4,sockets=4,cores=1,threads=1
On 8/30/15 8:13 AM, Peter Kieser wrote:
On 2015-08-29 4:55 PM, Wanpeng Li wrote:
On 8/30/15 6:26 AM, Peter Kieser wrote:
Thanks, Wanpeng. Applied this to Linux 3.18 and seeing much higher
CPU usage (200%) for qemu 2.4.0 process on a Windows 10 x64 guest.
qemu parameters:
Thanks for the
Regards,
Wanpeng Li
qemu-system-x86_64 -enable-kvm -name arwan-20150704 -S -machine
pc-q35-2.2,accel=kvm,usb=off -cpu
Haswell,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 -m 8192
-realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid
7c2fc02d-2798-4fc9-ad04-db5f1af92723
On 8/31/15 3:44 PM, Wanpeng Li wrote:
On 8/30/15 6:26 AM, Peter Kieser wrote:
Thanks, Wanpeng. Applied this to Linux 3.18 and seeing much higher CPU
usage (200%) for qemu 2.4.0 process on a Windows 10 x64 guest. qemu
parameters:
Interesting. I tested this against the latest kvm tree and stable qemu
On 9/2/15 5:45 AM, David Matlack wrote:
On Thu, Aug 27, 2015 at 2:47 AM, Wanpeng Li wrote:
v3 -> v4:
* bring back growing vcpu->halt_poll_ns when an interrupt arrives and
shrinking it when an idle VCPU is detected
v2 -> v3:
* grow/shrink vcpu->halt_poll_ns by *halt_pol
On 9/2/15 6:34 AM, David Matlack wrote:
On Tue, Sep 1, 2015 at 3:30 PM, Wanpeng Li wrote:
On 9/2/15 5:45 AM, David Matlack wrote:
On Thu, Aug 27, 2015 at 2:47 AM, Wanpeng Li
wrote:
v3 -> v4:
* bring back grow vcpu->halt_poll_ns when interrupt arrives and shrinks
when idle V
On 9/2/15 7:24 AM, David Matlack wrote:
On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li wrote:
On 9/2/15 6:34 AM, David Matlack wrote:
On Tue, Sep 1, 2015 at 3:30 PM, Wanpeng Li wrote:
On 9/2/15 5:45 AM, David Matlack wrote:
On Thu, Aug 27, 2015 at 2:47 AM, Wanpeng Li
wrote:
v3 ->
On 9/2/15 9:49 AM, David Matlack wrote:
On Tue, Sep 1, 2015 at 5:29 PM, Wanpeng Li wrote:
On 9/2/15 7:24 AM, David Matlack wrote:
On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li wrote:
Why can this happen?
Ah, probably because I'm missing 9c8fd1ba220 (KVM: x86: optimize delivery
o
Change halt_poll_ns into per-vCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b
The savings
should be even higher for higher frequency ticks.
Wanpeng Li (3):
KVM: make halt_poll_ns per-VCPU
KVM: dynamic halt_poll_ns adjustment
KVM: trace kvm_halt_poll_ns grow/shrink
include/linux/kvm_host.h | 1 +
include/trac
frequency ticks.
Suggested-by: David Matlack
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 60 ++---
1 file changed, 57 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c06e57c..2206cb0 10
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8 ++--
2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/include/trace/events
Change halt_poll_ns into per-vCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b
s. (250HZ) means the guest was ticking at 250HZ.
The big win is with ticking operating systems. Running the linux guest
with nohz=off (and HZ=250), we save 3.4%~12.8% cpu and get close
to no-polling overhead levels by using the dynamic-poll. The savings
should be even higher for higher fre
Suggested-by: David Matlack
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 61 ++---
1 file changed, 58 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c06e57c..3cff02f 100644
--- a/virt/kvm/kvm_ma
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8 ++--
2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/include/trace/events
On 9/3/15 2:09 AM, David Matlack wrote:
On Wed, Sep 2, 2015 at 12:42 AM, Wanpeng Li wrote:
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8
us to loop calling
kvm_vcpu_block and starve the waiting task (at least until need_resched()),
which would break the "only hog the cpu when idle" aspect of halt-polling.
That's definitely a bug, yes.
Ok, I will send out v7 to fix this this Sunday, since there is
a holiday in my country
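(For context, the polling loop under discussion has roughly this shape;
a sketch after the upstream structure, where single_task_running() and
the 'stop' deadline bound how long an idle vcpu may poll:)

	start = ktime_get();
	if (vcpu->halt_poll_ns) {
		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);

		do {
			if (kvm_vcpu_check_block(vcpu) < 0)
				goto out;	/* an event arrived while polling */
			cur = ktime_get();
		} while (single_task_running() && ktime_before(cur, stop));
	}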
you want, you can leave this if() out and save some indentation.
Then we will miss the tracepoint.
Regards,
Wanpeng Li
+	if (block_ns <= vcpu->halt_poll_ns)
+		;
+	/* we had a long block, shrink polling */
+	else if (vcpu->
sing the dynamic-poll. The savings
should be even higher for higher frequency ticks.
Wanpeng Li (3):
KVM: make halt_poll_ns per-vCPU
KVM: dynamic halt-polling
KVM: trace kvm_halt_poll_ns grow/shrink
include/linux/kvm_host.h | 1 +
include/trace/eve
Change halt_poll_ns into per-VCPU variable, seeded from module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b
ggested-by: David Matlack
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 63 +
1 file changed, 59 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c06e57c..d5e07e9 100644
--- a/virt/kvm/kvm_main.c
+++
Tracepoint for dynamic halt_poll_ns, fired on every potential change.
Signed-off-by: Wanpeng Li
---
include/trace/events/kvm.h | 30 ++
virt/kvm/kvm_main.c| 8 ++--
2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/include/trace/events
On 9/4/15 12:07 AM, David Matlack wrote:
On Thu, Sep 3, 2015 at 2:23 AM, Wanpeng Li wrote:
How about something like:
@@ -1941,10 +1976,14 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	 */
	if (kvm_vcpu_check_block(vcpu) < 0) {
		++v
On 9/3/15 2:09 AM, David Matlack wrote:
On Wed, Sep 2, 2015 at 12:29 AM, Wanpeng Li wrote:
There is a downside of always-poll since polling still happens for idle
vCPUs, which can waste cpu usage. This patch adds the ability to adjust
halt_poll_ns dynamically, to grow halt_poll_ns when short
On 9/4/15 9:16 AM, Wanpeng Li wrote:
[...]
+
static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
{
	if (kvm_arch_vcpu_runnable(vcpu)) {
@@ -1929,6 +1963,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	ktime_t start, cur;
	DEFINE_WAIT(wait);
	bool waited
Hi Paolo,
On 9/3/15 10:07 PM, Wanpeng Li wrote:
[...]
static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
{
	if (kvm_arch_vcpu_runnable(vcpu)) {
@@ -1928,7 +1962,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
	ktime_t start, cur;
	DEFINE_WAIT(wait);
-	bool
On 9/6/15 10:32 PM, Paolo Bonzini wrote:
On 05/09/2015 00:38, Wanpeng Li wrote:
@@ -1940,11 +1975,16 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	 * arrives.
	 */
	if (kvm_vcpu_check_block(vcpu) < 0) {
+		polled = t
On 9/9/15 9:39 PM, Christian Borntraeger wrote:
Am 03.09.2015 um 16:07 schrieb Wanpeng Li:
v6 -> v7:
* explicit signal (set a bool)
* fix the tracepoint
v5 -> v6:
* fix wait_ns and poll_ns
v4 -> v5:
* set base case 10us and max poll time 500us
* handle short/long halt,
On 9/10/15 3:13 PM, Christian Borntraeger wrote:
Am 10.09.2015 um 03:55 schrieb Wanpeng Li:
On 9/9/15 9:39 PM, Christian Borntraeger wrote:
Am 03.09.2015 um 16:07 schrieb Wanpeng Li:
v6 -> v7:
* explicit signal (set a bool)
* fix the tracepoint
v5 -> v6:
* fix wait_ns and p
ng
when polling is disabled.
Reported-by: Christian Borntraeger
Signed-off-by: Wanpeng Li
---
virt/kvm/kvm_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4662a88..f756cac0 100644
--- a/virt/kvm/kvm_main.c
+++ b/
flush when switching between L1 and
L2. This patch gets a ~3x performance improvement for lmbench 8p/64k ctxsw.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 39 ---
1 file changed, 32 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86
On 9/14/15 10:54 PM, Jan Kiszka wrote:
On 2015-09-14 14:52, Wanpeng Li wrote:
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1 hypervisor
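(Roughly what the series does on nested vmentry, per the discussion; a
sketch assuming the vpid02/last_vpid fields the patches introduce:)

	if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02) {
		/* Run L2 with its own host-side vpid02; flush only when
		 * L1 hands us a new virtual processor id. */
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
		if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
			vmx->nested.last_vpid = vmcs12->virtual_processor_id;
			__vmx_flush_tlb(vcpu, vmx->nested.vpid02);
		}
	} else {
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
	}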
On 9/15/15 12:08 AM, Bandan Das wrote:
Wanpeng Li writes:
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1 hypervisor, then address
Enhance allocate/free_vpid to handle shadow vpid.
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 33 +++--
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index da1590e..bd07d88 100644
xsw
------  -------------  ------ ------ ------ ------ ------ ------- -------
kernel  Linux 3.5.0-1  1.2200 1.3700 1.4500 4.7800 2.3300 5.6     2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7   12.9   3.49000 7.46000  vanilla
Wanpeng Li (2):
KVM: nVMX: enhance allocate/free_vpid
2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 25 ++---
1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch
On 9/16/15 6:00 AM, David Matlack wrote:
On Tue, Sep 15, 2015 at 12:04 AM, Oliver Yang wrote:
Hi Guys,
I found the patch below for KVM TSC trapping / migration support,
https://lkml.org/lkml/2011/1/6/90
It seemed the patch were not merged in Linux mainline.
So I have 3 questions here,
1. Can
On 9/16/15 1:32 AM, Jan Kiszka wrote:
On 2015-09-15 12:14, Wanpeng Li wrote:
On 9/14/15 10:54 PM, Jan Kiszka wrote:
Last but not least: the guest can now easily exhaust the host's pool of
vpid by simply spawning plenty of VCPUs for L2, no? Is this acceptable
or should there be some limi
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9ff6a3f..4956081 100644
--- a/arch/x86/kvm/vmx.c
+++ b
12.7 12.9 3.49000 7.46000  vanilla
Wanpeng Li (2):
KVM: nVMX: enhance allocate/free_vpid to handle shadow vpid
KVM: nVMX: nested VPID emulation
arch/x86/kvm/vmx.c | 61 +-
1 file changed, 42 insertions(+), 19 deletions(-)
--
1
2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 37 +++--
1 file changed, 31 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm
On 9/15/15 8:54 PM, Paolo Bonzini wrote:
On 15/09/2015 12:30, Wanpeng Li wrote:
+	if (!nested) {
+		vpid = find_first_zero_bit(vmx_vpid_bitmap, VMX_NR_VPIDS);
+		if (vpid < VMX_NR_VPIDS) {
			vmx->vpid = vpid;
			__set_bi
On 9/16/15 1:20 PM, Jan Kiszka wrote:
On 2015-09-16 04:36, Wanpeng Li wrote:
On 9/16/15 1:32 AM, Jan Kiszka wrote:
On 2015-09-15 12:14, Wanpeng Li wrote:
On 9/14/15 10:54 PM, Jan Kiszka wrote:
Last but not least: the guest can now easily exhaust the host's pool of
vpid by simply spa
On 9/16/15 2:42 PM, Jan Kiszka wrote:
On 2015-09-16 05:51, Wanpeng Li wrote:
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9ff6a3f..c5222b8 100644
--- a/arch/x86/kvm/vmx.c
+++ b
2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Reviewed-by: Jan Kiszka
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 37 +++--
1 file changed, 31 insertions(+), 6 deletions
.6 2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Wanpeng Li (2):
KVM: nVMX: enhance allocate/free_vpid to handle shadow vpid
KVM: nVMX: nested VPID emulation
arch/x86/kvm/vmx.c | 60 ++--
On 9/16/15 5:11 PM, Jan Kiszka wrote:
On 2015-09-16 09:19, Wanpeng Li wrote:
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 25 -
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9ff6a3f..f8d704d 100644
--- a/arch/x86/kvm/vmx.c
+++ b
0-1 1.2200 1.3700 1.4500 4.7800 2.3300 5.6 2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Wanpeng Li (2):
KVM: nVMX: enhance allocate/free_vpid to handle shadow vpid
KVM: nVMX: nested VPID emulation
arch/x
2.88000  nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Reviewed-by: Jan Kiszka
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 37 +++--
1 file changed, 31 insertions(+), 6 deletions
On 9/16/15 6:04 PM, Paolo Bonzini wrote:
On 16/09/2015 11:30, Wanpeng Li wrote:
Enhance allocate/free_vpid to handle shadow vpid.
Adjusting the commit message:
KVM: nVMX: adjust interface to allocate/free_vpid
Adjust allocate/free_vpid so that they can be reused for the nested
eserve H for VMX root operation.
This patch fixes it by reintroducing reserve H for VMX root operation.
Reported-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9ff6a3f..
ems to work (goes back to
1 cpu).
The proper way might be to feed the result of the interrupt dequeue back
into the heuristics. I don't know yet how to handle that properly.
Can this be reproduced on an x86 platform?
Regards,
Wanpeng Li
eused while the value of
Looks good to me.
Regards,
Wanpeng Li
On 9/21/15 10:51 AM, Xiao Guangrong wrote:
Thanks for your report and analysis, Janusz!
On 09/19/2015 01:48 AM, Janusz wrote:
W dniu 18.09.2015 o 12:07, Laszlo Ersek pisze:
On 09/18/15 11:37, Janusz wrote:
Hello,
I am writing about this patch that was posted by Xiao:
http://www.spinics.net
Add the INVVPID instruction emulation.
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/include/asm/vmx.h | 1 +
arch/x86/kvm/vmx.c | 23 ++-
2 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include
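(A simplified sketch of such emulation, assumed shape; the posted patch
also decodes the memory operand and checks the exposed extent types.
Widening a single-context invalidation to a global flush is
architecturally permitted:)

static int handle_invvpid(struct kvm_vcpu *vcpu)
{
	u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
	unsigned long type = kvm_register_readl(vcpu,
			(vmx_instruction_info >> 28) & 0xf);

	switch (type) {
	case VMX_VPID_EXTENT_ALL_CONTEXT:
		/* flush the vpid L2 actually runs with */
		__vmx_flush_tlb(vcpu, to_vmx(vcpu)->nested.vpid02);
		nested_vmx_succeed(vcpu);
		break;
	default:
		nested_vmx_failValid(vcpu,
			VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
		break;
	}

	skip_emulated_instruction(vcpu);
	return 1;
}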
vpid_sync_vcpu_single() still handles vpid01 during nested
vmentry/vmexit since vmx->vpid is used for invvpid. This
patch fixes it by specifying vpid02 through __vmx_flush_tlb()
to flush the right vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 4 ++--
1 file changed, 2 inserti
On 9/23/15 4:39 PM, Paolo Bonzini wrote:
On 23/09/2015 09:59, Wanpeng Li wrote:
Add the INVVPID instruction emulation.
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/include/asm/vmx.h | 1 +
arch/x86/kvm/vmx.c | 23 ++-
2 files changed, 23
Introduce __vmx_flush_tlb() to handle specific vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 794c529..7188c5e 100644
--- a/arch/x86/kvm/vmx.c
+++ b
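(The helper's assumed shape, factoring the old vmx_flush_tlb() so that
callers can pass a specific vpid such as vpid02:)

static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
{
	vpid_sync_context(vpid);
	if (enable_ept) {
		if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
			return;
		ept_sync_context(construct_eptp(vcpu->arch.mmu.root_hpa));
	}
}

static void vmx_flush_tlb(struct kvm_vcpu *vcpu)
{
	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid);
}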
Expose VPID capability to L1.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f9219ad..866045c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
On 9/25/15 12:12 AM, Bandan Das wrote:
Wanpeng Li writes:
Introduce __vmx_flush_tlb() to handle specific vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm
On 9/28/15 8:05 PM, Paolo Bonzini wrote:
On 24/09/2015 08:51, Wanpeng Li wrote:
/*
* For nested guests, we don't do anything specific
* for single context invalidation. Hence, only advertise
* support for global co
Expose VPID capability to L1.
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* set only VMX_VPID_EXTENT_GLOBAL_CONTEXT_BIT
arch/x86/kvm/vmx.c | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 75f3ee0..c4ea
On 9/29/15 6:39 PM, Paolo Bonzini wrote:
On 29/09/2015 04:55, Wanpeng Li wrote:
Expose VPID capability to L1.
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* set only VMX_VPID_EXTENT_GLOBAL_CONTEXT_BIT
Thanks. I've checked your implementation more thoroughly against the
SDM now, and the
L2's vCPUs not sched in/out on L1.
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 36
1 file changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 31d272e..22b4dc7 100644
---
nested VPID
kernel  Linux 3.5.0-1  1.2600 1.4300 1.5600 12.7 12.9 3.49000 7.46000  vanilla
Reviewed-by: Jan Kiszka
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 39 ---
1 file changed, 32 insertions(+), 7 deletions(-)
diff
Add the INVVPID instruction emulation.
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 49 +-
2 files changed, 51 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm
Adjust allocate/free_vpid so that they can be reused for the nested vpid.
Reviewed-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 25 -
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6407674