On 2024-04-19 01:47 PM, James Houghton wrote:
> On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
> > On 2024-04-11 10:08 AM, David Matlack wrote:
> > bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> > {
> > bool young = false;
> >
>
On 2024-04-01 11:29 PM, James Houghton wrote:
> Only handle the TDP MMU case for now. In other cases, if a bitmap was
> not provided, fall back to the slowpath that takes mmu_lock, or, if a
> bitmap was provided, inform the caller that the bitmap is unreliable.
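The fallback policy quoted above amounts to a three-way dispatch. A minimal sketch of that decision follows; the helper name and enum are illustrative stand-ins, not KVM's actual interface, which spreads this logic across the aging notifier paths:

```c
/* Three-way dispatch sketched from the description quoted above. */
enum age_path {
    AGE_FAST,               /* TDP MMU: lockless fast path            */
    AGE_SLOW_MMU_LOCK,      /* no bitmap: fall back and take mmu_lock */
    AGE_BITMAP_UNRELIABLE   /* bitmap given: report it as unreliable  */
};

static enum age_path pick_age_path(int tdp_mmu_case, int have_bitmap)
{
    if (tdp_mmu_case)
        return AGE_FAST;
    return have_bitmap ? AGE_BITMAP_UNRELIABLE : AGE_SLOW_MMU_LOCK;
}
```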
I think this patch will trigger a loc
On 2024-04-01 11:29 PM, James Houghton wrote:
> Add kvm_arch_prepare_bitmap_age() for architectures to indicate that
> they support bitmap-based aging in kvm_mmu_notifier_test_clear_young()
> and that they do not need KVM to grab the MMU lock for writing. This
> function allows architectures to do
On 2024-04-01 11:29 PM, James Houghton wrote:
> The bitmap is provided for secondary MMUs to use if they support it. For
> test_young(), after it returns, the bitmap represents the pages that
> were young in the interval [start, end). For clear_young, it represents
> the pages that we wish the seco
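The [start, end) convention quoted above maps each gfn to one bit, offset from start. A self-contained sketch of that mapping; the helper names are illustrative, not the patchset's API:

```c
#include <stdint.h>

#define BITS_PER_WORD (8 * sizeof(unsigned long))

/* Bit i of the bitmap corresponds to gfn start + i, per the
 * [start, end) convention described above. */
static void bitmap_set_young(unsigned long *bitmap, uint64_t start, uint64_t gfn)
{
    uint64_t i = gfn - start;
    bitmap[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD);
}

static int bitmap_test_young(const unsigned long *bitmap, uint64_t start, uint64_t gfn)
{
    uint64_t i = gfn - start;
    return !!(bitmap[i / BITS_PER_WORD] & (1UL << (i % BITS_PER_WORD)));
}

/* Exercise the helpers over a small range; returns 1 on success. */
static int bitmap_demo(void)
{
    unsigned long bm[3] = { 0, 0, 0 };
    bitmap_set_young(bm, 0x100, 0x105);
    bitmap_set_young(bm, 0x100, 0x100 + 65); /* spills past the first word */
    return bitmap_test_young(bm, 0x100, 0x105) &&
           bitmap_test_young(bm, 0x100, 0x100 + 65) &&
           !bitmap_test_young(bm, 0x100, 0x106);
}
```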
On 2024-04-01 11:29 PM, James Houghton wrote:
> This patchset adds a fast path in KVM to test and clear access bits on
> sptes without taking the mmu_lock. It also adds support for using a
> bitmap to (1) test the access bits for many sptes in a single call to
> mmu_notifier_test_young, and to (2)
On Thu, Apr 11, 2024 at 11:00 AM David Matlack wrote:
>
> On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
> >
> > On 2024-04-11 10:08 AM, David Matlack wrote:
> > > On 2024-04-01 11:29 PM, James Houghton wrote:
> > > > Only handle the TDP MMU case f
On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
>
> On 2024-04-11 10:08 AM, David Matlack wrote:
> > On 2024-04-01 11:29 PM, James Houghton wrote:
> > > Only handle the TDP MMU case for now. In other cases, if a bitmap was
> > > not provided, fallback to the sl
On 2024-04-11 10:08 AM, David Matlack wrote:
> On 2024-04-01 11:29 PM, James Houghton wrote:
> > Only handle the TDP MMU case for now. In other cases, if a bitmap was
> > not provided, fall back to the slowpath that takes mmu_lock, or, if a
> > bitmap was provided, inform the c
On 2024-04-01 11:29 PM, James Houghton wrote:
> Only handle the TDP MMU case for now. In other cases, if a bitmap was
> not provided, fall back to the slowpath that takes mmu_lock, or, if a
> bitmap was provided, inform the caller that the bitmap is unreliable.
>
> Suggested-by: Yu Zhao
> Signed-o
On Mon, Jul 2, 2018 at 11:23 PM Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> Implement paravirtual apic hooks to enable PV IPIs.
Very cool. Thanks for working on this!
>
> apic->send_IPI_mask
> apic->send_IPI_mask_allbutself
> apic->send_IPI_allbutself
> apic->send_IPI_all
>
> The PV IPIs support
On Thu, Jul 27, 2017 at 6:54 AM, Paolo Bonzini wrote:
> Since the current implementation of VMCS12 does a memcpy in and out
> of guest memory, we do not need current_vmcs12 and current_vmcs12_page
> anymore. current_vmptr is enough to read and write the VMCS12.
This patch also fixes dirty tracki
On Thu, Feb 16, 2017 at 1:33 AM, Paolo Bonzini wrote:
>
> The FPU is always active now when running KVM.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: David Matlack
Glad to see this cleanup! Thanks for doing it.
> ---
> arch/x86/include/asm/kvm_host.h | 3 --
>
On Tue, Nov 29, 2016 at 12:40 PM, Kyle Huey wrote:
> We can't return both the pass/fail boolean for the vmcs and the upcoming
> continue/exit-to-userspace boolean for skip_emulated_instruction out of
> nested_vmx_check_vmcs, so move skip_emulated_instruction out of it instead.
>
> Additionally, VM
Use the new static_key_deferred_flush() API to flush pending updates on
module unload.
Signed-off-by: David Matlack
---
arch/x86/kvm/lapic.c | 6 ++
arch/x86/kvm/lapic.h | 1 +
arch/x86/kvm/x86.c | 1 +
3 files changed, 8 insertions(+)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 34a66b2..1b80
Modules that use static_key_deferred need a way to synchronize with
any delayed work that is still pending when the module is unloaded.
Introduce static_key_deferred_flush() which flushes any pending
jump label updates.
Signed-off-by: David Matlack
---
include/linux/jump_label_ratelimit.h | 5
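The race described above is easy to model: a deferred decrement is still queued when the module goes away. A toy sketch of why the flush is needed; the struct and queue here are stand-ins for the kernel's delayed-workqueue machinery, not the real jump-label code:

```c
/* Toy model of static_key_slow_dec_deferred()/static_key_deferred_flush().
 * "pending" stands in for delayed work that has been scheduled but has
 * not yet run; flushing applies it immediately, as flush_delayed_work()
 * would, so nothing fires after the module is unloaded. */
struct deferred_key {
    int enabled;   /* reference count behind the jump label */
    int pending;   /* queued decrements not yet applied     */
};

static void key_slow_dec_deferred(struct deferred_key *k)
{
    k->pending++;  /* the real call schedules delayed work instead */
}

static void key_deferred_flush(struct deferred_key *k)
{
    k->enabled -= k->pending;
    k->pending = 0;
}

/* Module-unload path: flush, then verify nothing is left queued. */
static int unload_demo(void)
{
    struct deferred_key k = { .enabled = 2, .pending = 0 };
    key_slow_dec_deferred(&k);
    key_slow_dec_deferred(&k);
    key_deferred_flush(&k);
    return k.pending == 0 && k.enabled == 0;
}
```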
On Wed, Nov 30, 2016 at 2:33 PM, Paolo Bonzini wrote:
> - Original Message -
>> From: "Radim Krčmář"
>> To: "David Matlack"
>> Cc: k...@vger.kernel.org, linux-kernel@vger.kernel.org, jmatt...@google.com,
>> pbonz...@redhat.com
>
On Wed, Nov 30, 2016 at 3:22 AM, Paolo Bonzini wrote:
>
>
> On 30/11/2016 03:14, David Matlack wrote:
>> This patchset adds support setting the VMX capability MSRs from userspace.
>> This is required for migration of nested-capable VMs to different CPUs and
>>
On Wed, Nov 30, 2016 at 3:16 AM, Paolo Bonzini wrote:
> On 30/11/2016 03:14, David Matlack wrote:
>>
>> /* secondary cpu-based controls */
>> @@ -2868,36 +2865,32 @@ static int vmx_get_vmx_msr(struct kvm_vcpu *vcpu,
>> u32 msr_index, u64 *pdata)
>>
, they do not need to be on
the default MSR save/restore lists. The userspace hypervisor can set
the values of these MSRs or read them from KVM at VCPU creation time,
and restore the same value after every save/restore.
Signed-off-by: David Matlack
---
arch/x86/include/asm/vmx.h | 31 +
arch
s 0. Previously this configuration would succeed and
"IA-32e mode guest" would silently be disabled by KVM.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4927
MSRs.
This patch also initializes MSR_IA32_CR{0,4}_FIXED1 from the CPU's MSRs
by default. This is saner than the current default of -1ull, which
includes bits that the host CPU does not support.
Signed-off-by: David Matlack
---
arch/x86/kvm/
verify must-be-0 bits. Fix these checks
to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
This patch should introduce no change in behavior in KVM, since these
MSRs are still -1ULL.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 77
A32_VMX_BASIC,
MSR_IA32_VMX_CR{0,4}_FIXED{0,1}.
* Include VMX_INS_OUTS in MSR_IA32_VMX_BASIC initial value.
David Matlack (5):
KVM: nVMX: generate non-true VMX MSRs based on true versions
KVM: nVMX: support restore of VMX capability MSRs
KVM: nVMX: fix checks on CR{0,4} during virtual VM
uct nested_vmx. This also lets
userspace avoid having to restore the non-true MSRs.
Note this does not preclude emulating MSR_IA32_VMX_BASIC[55]=0. To do so,
we simply need to set all the default1 bits in the true MSRs (such that
the true MSRs and the generated non-true MSRs are equal).
Sig
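The generation step described above relies on the layout of the VMX capability control MSRs: bits 31:0 report the allowed 0-settings (a set bit means the control must be 1) and bits 63:32 the allowed 1-settings. Deriving a non-true MSR from its "true" counterpart then means forcing the default1 class bits back to "must be 1" in the low word. A sketch; the mask value in the test is illustrative, not a real MSR's:

```c
#include <stdint.h>

/* Generate a non-true VMX control MSR from the "true" MSR by OR-ing the
 * default1 class bits into the allowed-0 (low) word, making those
 * controls mandatory again. */
static uint64_t gen_non_true_ctl(uint64_t true_msr, uint32_t default1)
{
    return true_msr | default1;
}
```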
On Tue, Nov 29, 2016 at 12:01 AM, Paolo Bonzini wrote:
>> On Mon, Nov 28, 2016 at 2:48 PM, Paolo Bonzini wrote:
>> > On 28/11/2016 22:11, David Matlack wrote:
>> >> > PINBASED_CTLS, PROCBASED_CTLS, EXIT_CTLS and ENTRY_CTLS can be derived
>> >> > fr
On Mon, Nov 28, 2016 at 2:48 PM, Paolo Bonzini wrote:
> On 28/11/2016 22:11, David Matlack wrote:
>> > PINBASED_CTLS, PROCBASED_CTLS, EXIT_CTLS and ENTRY_CTLS can be derived
>> > from their "true" counterparts, so I think it's better to remove the
>> >
On Wed, Nov 23, 2016 at 3:28 PM, David Matlack wrote:
> On Wed, Nov 23, 2016 at 2:11 PM, Paolo Bonzini wrote:
>> On 23/11/2016 23:07, David Matlack wrote:
>>> A downside of this scheme is we'd have to remember to update
>>> nested_vmx_cr4_fixed1_update() before
On Wed, Nov 23, 2016 at 3:44 AM, Paolo Bonzini wrote:
> On 23/11/2016 02:14, David Matlack wrote:
>> switch (msr_index) {
>> case MSR_IA32_VMX_BASIC:
>> + return vmx_restore_vmx_basic(vmx, data);
>> + case MSR_IA32_VMX_TRUE_P
On Wed, Nov 23, 2016 at 3:31 AM, Paolo Bonzini wrote:
> On 23/11/2016 02:14, David Matlack wrote:
>> +static bool fixed_bits_valid(u64 val, u64 fixed0, u64 fixed1)
>> +{
>> + return ((val & fixed0) == fixed0) && ((~val & ~fixed1) == ~fixed1
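The helper quoted above is cut off by the archive; written out in full, the logic is: every bit fixed to 1 (fixed0) must be set, and every bit fixed to 0 (clear in fixed1) must be clear. A standalone version of the same check:

```c
#include <stdint.h>
#include <stdbool.h>

/* val is legal iff all must-be-1 bits (fixed0) are set and no bit that
 * fixed1 forbids (i.e. clear in fixed1) is set. */
static bool fixed_bits_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
    return ((val & fixed0) == fixed0) &&   /* every must-be-1 bit is set   */
           ((~val & ~fixed1) == ~fixed1);  /* every must-be-0 bit is clear */
}
```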
On Wed, Nov 23, 2016 at 3:45 AM, Paolo Bonzini wrote:
>
> On 23/11/2016 02:14, David Matlack wrote:
>> This patchset includes v2 of "KVM: nVMX: support restore of VMX capability
>> MSRs" (patch 1) as well as some additional related patches that came up
>> while p
On Wed, Nov 23, 2016 at 2:11 PM, Paolo Bonzini wrote:
> On 23/11/2016 23:07, David Matlack wrote:
>> A downside of this scheme is we'd have to remember to update
>> nested_vmx_cr4_fixed1_update() before giving VMs new CPUID bits. If we
>> forget, a VM could end up with d
On Wed, Nov 23, 2016 at 11:24 AM, Paolo Bonzini wrote:
>
>
> On 23/11/2016 20:16, David Matlack wrote:
>> > Oh, I thought userspace would do that! Doing it in KVM is fine as well,
>> > but then do we need to give userspace access to CR{0,4}_FIXED{0,1} at all?
>>
> regenerate MSR_IA32_CR4_FIXED1 to match it.
>>
>> Signed-off-by: David Matlack
>
> Oh, I thought userspace would do that! Doing it in KVM is fine as well,
> but then do we need to give userspace access to CR{0,4}_FIXED{0,1} at all?
I think it should be safe for userspa
s 0. Previously this configuration would succeed and
"IA-32e mode guest" would silently be disabled by KVM.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ac5d
Set MSR_IA32_CR{0,4}_FIXED1 to match the CPU's MSRs.
In addition, MSR_IA32_CR4_FIXED1 should reflect the available CR4 bits
according to CPUID. Whenever guest CPUID is updated by userspace,
regenerate MSR_IA32_CR4_FIXED1 to match it.
Signed-off-by: David Matlack
---
Note: "x86/cpufe
verify must-be-0 bits. Fix these checks
to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
This patch should introduce no change in behavior in KVM, since these
MSRs are still -1ULL.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 68
, they do not need to be on
the default MSR save/restore lists. The userspace hypervisor can set
the values of these MSRs or read them from KVM at VCPU creation time,
and restore the same value after every save/restore.
Signed-off-by: David Matlack
---
arch/x86/include/asm/vmx.h | 31 +
arch
d VM-entry that came up when
testing patches 2 and 3.
Changes since v1:
* Support restoring less-capable versions of MSR_IA32_VMX_BASIC,
MSR_IA32_VMX_CR{0,4}_FIXED{0,1}.
* Include VMX_INS_OUTS in MSR_IA32_VMX_BASIC initial value.
David Matlack (4):
KVM: nVMX: support restore of VMX capabi
uced VM exit checks the cpuid faulting state and the CPL.
> kvm_require_cpl is even kind enough to inject the GP fault for us.
>
> Signed-off-by: Kyle Huey
Reviewed-by: David Matlack
(v10)
On Sun, Nov 6, 2016 at 12:57 PM, Kyle Huey wrote:
> Hardware support for faulting on the cpuid instruction is not required to
> emulate it, because cpuid triggers a VM exit anyways. KVM handles the relevant
> MSRs (MSR_PLATFORM_INFO and MSR_MISC_FEATURES_ENABLE) and upon a
> cpuid-induced VM exit
On Fri, Nov 4, 2016 at 2:57 PM, Paolo Bonzini wrote:
>
> On 04/11/2016 21:34, David Matlack wrote:
>> On Mon, Oct 31, 2016 at 6:37 PM, Kyle Huey wrote:
>>> + case MSR_PLATFORM_INFO:
>>> + /* cpuid faulting is supported */
>
On Mon, Oct 31, 2016 at 6:37 PM, Kyle Huey wrote:
> Hardware support for faulting on the cpuid instruction is not required to
> emulate it, because cpuid triggers a VM exit anyways. KVM handles the relevant
> MSRs (MSR_PLATFORM_INFO and MSR_MISC_FEATURES_ENABLE) and upon a
> cpuid-induced VM exit
On Fri, Sep 9, 2016 at 9:38 AM, Paolo Bonzini wrote:
>
> On 09/09/2016 00:13, David Matlack wrote:
>> Hi Paolo,
>>
>> On Tue, Sep 6, 2016 at 3:29 PM, Paolo Bonzini wrote:
>>> Bad things happen if a guest using the TSC deadline timer is migrated.
>>> Th
Hi Paolo,
On Tue, Sep 6, 2016 at 3:29 PM, Paolo Bonzini wrote:
> Bad things happen if a guest using the TSC deadline timer is migrated.
> The guest doesn't re-calibrate the TSC after migration, and the
> TSC frequency can and will change unless your processor supports TSC
> scaling (on Intel this
On Thu, Jul 14, 2016 at 1:33 AM, Paolo Bonzini wrote:
>
>
> On 14/07/2016 02:16, David Matlack wrote:
>> KVM maintains L1's current VMCS in guest memory, at the guest physical
>> page identified by the argument to VMPTRLD. This makes hairy
>> time-of-check to ti
so flush during VMXOFF, which is not mandated by the spec,
but also not in conflict with the spec.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 31 ---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6
On Tue, Jul 5, 2016 at 10:36 AM, Paolo Bonzini wrote:
> Bad things happen if a guest using the TSC deadline timer is migrated.
> The guest doesn't re-calibrate the TSC after migration, and the
> TSC frequency can and will change unless your processor supports TSC
> scaling (on Intel this is only S
On Thu, Jun 30, 2016 at 1:54 PM, Radim Krčmář wrote:
> KVM_CAP_X2APIC_API can be enabled to extend APIC ID in get/set ioctl and MSI
> addresses to 32 bits. Both are needed to support x2APIC.
>
> The capability has to be toggleable and disabled by default, because get/set
> ioctl shifted and trunc
On Thu, Jun 16, 2016 at 9:47 AM, Paolo Bonzini wrote:
> On 16/06/2016 18:43, David Matlack wrote:
>> On Thu, Jun 16, 2016 at 1:21 AM, Paolo Bonzini wrote:
>>> This gains ~20 clock cycles per vmexit. On Intel there is no need
>>> anymore to enable the interrupts
On Thu, Jun 16, 2016 at 1:21 AM, Paolo Bonzini wrote:
> This gains ~20 clock cycles per vmexit. On Intel there is no need
> anymore to enable the interrupts in vmx_handle_external_intr, since we
> are using the "acknowledge interrupt on exit" feature. AMD needs to do
> that temporarily, and must
On Tue, May 24, 2016 at 4:11 PM, Wanpeng Li wrote:
> 2016-05-25 6:38 GMT+08:00 David Matlack :
>> On Tue, May 24, 2016 at 12:57 AM, Wanpeng Li wrote:
>>> From: Wanpeng Li
>>>
>>> If an emulated lapic timer will fire soon (in the scope of 10us the
>>
'd prefer to
only add more polling when the gain is clear. If there are guest
workloads that want this patch, I'd suggest polling for timers be
default-off. At minimum, there should be a module parameter to control
it (like Christian Borntraeger suggested).
>
> Cc: Paolo Bonzini
> C
On Mon, May 23, 2016 at 6:13 PM, Yang Zhang wrote:
> On 2016/5/24 2:04, David Matlack wrote:
>>
>> On Sun, May 22, 2016 at 6:26 PM, Yang Zhang
>> wrote:
>>>
>>> On 2016/5/21 2:37, David Matlack wrote:
>>>>
>>>>
>>>
On Sun, May 22, 2016 at 6:26 PM, Yang Zhang wrote:
> On 2016/5/21 2:37, David Matlack wrote:
>>
>> It's not obvious to me why polling for a timer interrupt would improve
>> context switch latency. Can you explain a bit more?
>
>
> We have a workload which using
erf TCP gets ~6% bandwidth improvement.
I think my question got lost in the previous thread :). Can you
explain why TCP bandwidth improves with this patch?
>
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: David Matlack
> Cc: Christian Borntraeger
> Cc: Yang Zhang
> Signed
On Thu, May 19, 2016 at 7:04 PM, Yang Zhang wrote:
> On 2016/5/20 2:36, David Matlack wrote:
>>
>> On Thu, May 19, 2016 at 11:01 AM, David Matlack
>> wrote:
>>>
>>> On Thu, May 19, 2016 at 6:27 AM, Wanpeng Li wrote:
>>>>
>>>> From:
On Thu, May 19, 2016 at 11:01 AM, David Matlack wrote:
> On Thu, May 19, 2016 at 6:27 AM, Wanpeng Li wrote:
>> From: Wanpeng Li
>>
>> If an emulated lapic timer will fire soon (in the scope of 10us the
>> base of dynamic halt-polling, lower-end of message passing w
't poll at
all.
>
> iperf TCP gets ~6% bandwidth improvement.
Can you explain why your patch results in this bandwidth improvement?
>
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: David Matlack
> Cc: Christian Borntraeger
> Signed-off-by: Wanpeng Li
> ---
> v1 -&g
On Fri, Apr 22, 2016 at 12:30 AM, Wanpeng Li wrote:
> Hi Paolo and David,
> 2016-03-31 3:24 GMT+08:00 David Matlack :
>>
>> kernel_fpu_begin() saves the current fpu context. If this uses
>> XSAVE[OPT], it may leave the xsave area in an undesirable state.
>> Accordin
On Fri, Apr 8, 2016 at 9:50 AM, Paolo Bonzini wrote:
>
>
> On 08/04/2016 18:25, David Matlack wrote:
>> On Thu, Apr 7, 2016 at 12:03 PM, Paolo Bonzini wrote:
>>>>
>>>> Thank you :). Let me know how testing goes.
>>>
>>> It went well.
>
On Thu, Apr 7, 2016 at 12:03 PM, Paolo Bonzini wrote:
>>
>> Thank you :). Let me know how testing goes.
>
> It went well.
Great! How should we proceed?
On Thu, Apr 7, 2016 at 2:08 AM, Paolo Bonzini wrote:
>
>
> On 05/04/2016 17:56, David Matlack wrote:
>> On Tue, Apr 5, 2016 at 4:28 AM, Paolo Bonzini wrote:
>>>
>> ...
>>>
>>> While running my acceptance tests, in one case I got one CPU whose xcr0
On Tue, Apr 5, 2016 at 4:28 AM, Paolo Bonzini wrote:
>
...
>
> While running my acceptance tests, in one case I got one CPU whose xcr0
> had leaked into the host. This showed up as a SIGILL in strncasecmp's
> AVX code, and a simple program confirmed it:
>
> $ cat xgetbv.c
> #include
>
mit 653f52c ("kvm,x86: load guest FPU context more eagerly")
from 4.2 forces the guest's fpu to always be loaded on eagerfpu hosts.
This patch fixes the bug by keeping the host's xcr0 loaded outside
of the interrupts-disabled region where KVM switches into guest mode.
Cc: sta.
On Tue, Mar 29, 2016 at 8:57 AM, Paolo Bonzini wrote:
>
> Windows lets applications choose the frequency of the timer tick,
> and in Windows 10 the maximum rate was changed from 1024 Hz to
> 2048 Hz. Unfortunately, because of the way the Windows API
> works, most applications who need a higher ra
On Tue, Mar 15, 2016 at 8:48 PM, Andy Lutomirski wrote:
>
> Why is it safe to rely on interrupted_kernel_fpu_idle? That function
> is for interrupts, but is there any reason that KVM can't be preempted
> (or explicitly schedule) with XCR0 having some funny value?
KVM restores the host's xcr0 in
On Tue, Mar 15, 2016 at 8:43 PM, Xiao Guangrong
wrote:
>
>
> On 03/16/2016 03:01 AM, David Matlack wrote:
>>
>> On Mon, Mar 14, 2016 at 12:46 AM, Xiao Guangrong
>> wrote:
>>>
>>> On 03/12/2016 04:47 AM, David Matlack wrote:
>>>
>>
VM, which is exposed by nested
> VPID support; RHEL6 KVM uses single-context invvpid unconditionally,
> but until now KVM did not provide it.
>
> Paolo
>
For the series,
Reviewed-by: David Matlack
> Paolo Bonzini (3):
> KVM: VMX: avoid guest hang on invalid invept instruction
On Fri, Mar 18, 2016 at 10:58 AM, Paolo Bonzini wrote:
>
>
> On 18/03/2016 18:42, David Matlack wrote:
>> On Fri, Mar 18, 2016 at 9:09 AM, Paolo Bonzini wrote:
>>> Patches 1 and 2 fix two cases where a guest could hang at 100% CPU
>>> due to mis-emulati
On Mon, Mar 14, 2016 at 12:46 AM, Xiao Guangrong
wrote:
>
>
> On 03/12/2016 04:47 AM, David Matlack wrote:
>
>> I have not been able to trigger this bug on Linux 4.3, and suspect
>> it is due to this commit from Linux 4.2:
>>
>> 653f52c kvm,x86: load guest FPU
On Fri, Mar 11, 2016 at 1:14 PM, Andy Lutomirski wrote:
>
> On Fri, Mar 11, 2016 at 12:47 PM, David Matlack wrote:
> > From: Eric Northup
> >
> > Add a percpu boolean, tracking whether a KVM vCPU is running on the
> > host CPU. KVM will set and clear it
We've found that an interrupt handler that uses the fpu can kill a KVM
VM, if it runs under the following conditions:
- the guest's xcr0 register is loaded on the cpu
- the guest's fpu context is not loaded
- the host is using eagerfpu
Note that the guest's xcr0 register and fpu context are not
From: Eric Northup
Add a percpu boolean, tracking whether a KVM vCPU is running on the
host CPU. KVM will set and clear it as it loads/unloads guest XCR0.
(Note that the rest of the guest FPU load/restore is safe, because
kvm_load_guest_fpu and kvm_put_guest_fpu call __kernel_fpu_begin()
and __k
ll_ns=11000:
... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 0 (shrink 1)
... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 1 (grow 0)
... kvm:kvm_halt_poll_ns: vcpu 0: halt_poll_ns 2 (grow 1)
Signed-off-by: David Matlack
---
virt/kvm/kvm_main.c | 3 +++
1 file changed, 3 insertions(+)
diff --git
't affect host tracing too much.
> We also don't need to switch MSR_IA32_PEBS_ENABLE on VMENTRY, but that
> optimization isn't worth its code, IMO.
>
> (If you are implementing PEBS for guests, be sure to handle the case
> where both host and guest enable PEBS, because th
On Thu, Mar 3, 2016 at 10:53 AM, Radim Krčmář wrote:
> Linux guests on Haswell (and also SandyBridge and Broadwell, at least)
> would crash if you decided to run a host command that uses PEBS, like
> perf record -e 'cpu/mem-stores/pp' -a
>
> This happens because KVM is using VMX MSR switching to
On Wed, Oct 14, 2015 at 6:33 PM, Wu, Feng wrote:
>
>> -Original Message-
>> From: David Matlack [mailto:dmatl...@google.com]
>> Sent: Thursday, October 15, 2015 7:41 AM
>> To: Wu, Feng
>> Cc: Paolo Bonzini ; alex.william...@redhat.com; Joerg
>>
Hi Feng.
On Fri, Sep 18, 2015 at 7:29 AM, Feng Wu wrote:
> This patch updates the Posted-Interrupts Descriptor when vCPU
> is blocked.
>
> pre-block:
> - Add the vCPU to the blocked per-CPU list
> - Set 'NV' to POSTED_INTR_WAKEUP_VECTOR
>
> post-block:
> - Remove the vCPU from the per-CPU list
I
On Mon, Oct 5, 2015 at 12:53 PM, Radim Krčmář wrote:
> 2015-09-28 13:38+0800, Haozhong Zhang:
>> Both VMX and SVM propagate virtual_tsc_khz in the same way, so this
>> patch removes the call-back set_tsc_khz() and replaces it with a common
>> function.
>>
>> Signed-off-by: Haozhong Zhang
>> ---
>
> attempted polling compared to the successful polls.
Reviewed-by: David Matlack
>
> Cc: Christian Borntraeger
> Cc: David Matlack
> Signed-off-by: Paolo Bonzini
> ---
> arch/arm/include/asm/kvm_host.h | 1 +
> arch/arm64/include/asm/kvm_host.h | 1 +
> arch/mi
On Fri, Sep 4, 2015 at 6:23 AM, Sudip Mukherjee
wrote:
> These variables were only assigned some values but they were never used.
>
> Signed-off-by: Sudip Mukherjee
> ---
> drivers/staging/slicoss/slicoss.c | 27 ++-
> 1 file changed, 6 insertions(+), 21 deletions(-)
>
>
On Thu, Sep 3, 2015 at 2:23 AM, Wanpeng Li wrote:
>
> How about something like:
>
> @@ -1941,10 +1976,14 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> */
> if (kvm_vcpu_check_block(vcpu) < 0) {
> ++vcpu->stat.halt_successful_poll;
> -
On Wed, Sep 2, 2015 at 12:12 PM, Paolo Bonzini wrote:
>
>
> On 02/09/2015 20:09, David Matlack wrote:
>> On Wed, Sep 2, 2015 at 12:29 AM, Wanpeng Li wrote:
>>> There is a downside of always-poll, since polling still happens for idle
>>> vCPUs which can was
On Wed, Sep 2, 2015 at 12:29 AM, Wanpeng Li wrote:
> v5 -> v6:
> * fix wait_ns and poll_ns
Thanks for bearing with me through all the reviews. I think it's on the
verge of being done :). There are just few small things to fix.
>
> v4 -> v5:
> * set base case 10us and max poll time 500us
> * h
On Wed, Sep 2, 2015 at 12:42 AM, Wanpeng Li wrote:
> Tracepoint for dynamic halt_poll_ns, fired on every potential change.
>
> Signed-off-by: Wanpeng Li
> ---
> include/trace/events/kvm.h | 30 ++
> virt/kvm/kvm_main.c| 8 ++--
> 2 files changed, 36 inser
nd get close
> to no-polling overhead levels by using the dynamic-poll. The savings
> should be even higher for higher frequency ticks.
>
> Suggested-by: David Matlack
> Signed-off-by: Wanpeng Li
> ---
> virt/kvm/kvm_main.c | 61
> +
On Tue, Sep 1, 2015 at 5:29 PM, Wanpeng Li wrote:
> On 9/2/15 7:24 AM, David Matlack wrote:
>>
>> On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li wrote:
>>>
>>> Why this can happen?
>>
>> Ah, probably because I'm missing 9c8fd1ba220 (KVM: x86: optimi
On Tue, Sep 1, 2015 at 3:58 PM, Wanpeng Li wrote:
> On 9/2/15 6:34 AM, David Matlack wrote:
>>
>> On Tue, Sep 1, 2015 at 3:30 PM, Wanpeng Li wrote:
>>>
>>> On 9/2/15 5:45 AM, David Matlack wrote:
>>>>
>>>> On Thu, Aug 27, 2015
On Tue, Sep 1, 2015 at 3:30 PM, Wanpeng Li wrote:
> On 9/2/15 5:45 AM, David Matlack wrote:
>>
>> On Thu, Aug 27, 2015 at 2:47 AM, Wanpeng Li
>> wrote:
>>>
>>> v3 -> v4:
>>> * bring back grow vcpu->halt_poll_ns when interrupt arrives and
On Thu, Aug 27, 2015 at 2:47 AM, Wanpeng Li wrote:
> v3 -> v4:
> * bring back grow vcpu->halt_poll_ns when interrupt arrives and shrinks
>when idle VCPU is detected
>
> v2 -> v3:
> * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or
> /halt_poll_ns_shrink
> * drop the macros and hard
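The grow/shrink rule in the changelog above (multiply by halt_poll_ns_grow, divide by halt_poll_ns_shrink) can be sketched with the base and cap figures mentioned elsewhere in the thread (10us base case, 500us maximum). The function names mirror the discussion but this is not KVM's exact code:

```c
#define HALT_POLL_NS_BASE 10000u    /* 10us base case from the thread     */
#define HALT_POLL_NS_MAX  500000u   /* 500us maximum, also per the thread */

static unsigned int grow_halt_poll_ns(unsigned int val, unsigned int grow)
{
    if (val == 0)
        val = HALT_POLL_NS_BASE;    /* restart polling from the base */
    else
        val *= grow;
    return val > HALT_POLL_NS_MAX ? HALT_POLL_NS_MAX : val;
}

static unsigned int shrink_halt_poll_ns(unsigned int val, unsigned int shrink)
{
    return shrink ? val / shrink : 0;  /* a shrink of 0 disables polling */
}
```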
should, however, keep halt_poll_ns below 1 ms, since that is the tick
period used by Windows.
David Matlack (1):
kvm: adaptive halt-polling toggle
Wanpeng Li (1):
KVM: make halt_poll_ns per-VCPU
include/linux/kvm_host.h | 1 +
include/trace/events/kvm.h | 23 ++
virt
r higher frequency ticks.
Signed-off-by: David Matlack
---
include/trace/events/kvm.h | 23 ++
virt/kvm/kvm_main.c| 110 ++---
2 files changed, 97 insertions(+), 36 deletions(-)
diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm
From: Wanpeng Li
Change halt_poll_ns into a per-VCPU variable, seeded from the module parameter,
to allow greater flexibility.
Signed-off-by: Wanpeng Li
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/linu
On Thu, Aug 27, 2015 at 2:59 AM, Wanpeng Li wrote:
> Hi David,
> On 8/26/15 1:19 AM, David Matlack wrote:
>>
>> Thanks for writing v2, Wanpeng.
>>
>> On Mon, Aug 24, 2015 at 11:35 PM, Wanpeng Li
>> wrote:
>>>
>>> There is a downside of hal
Thanks for writing v2, Wanpeng.
On Mon, Aug 24, 2015 at 11:35 PM, Wanpeng Li wrote:
> There is a downside of halt_poll_ns, since polling still happens for idle
> VCPUs, which can waste CPU time. This patch adds the ability to adjust
> halt_poll_ns dynamically.
What testing have you done with these
On Mon, Aug 24, 2015 at 5:53 AM, Wanpeng Li wrote:
> There are two new kernel parameters for changing the halt_poll_ns:
> halt_poll_ns_grow and halt_poll_ns_shrink. halt_poll_ns_grow affects
> halt_poll_ns when an interrupt arrives and halt_poll_ns_shrink
> does it when idle VCPU is detected.
>
>
On Mon, Aug 24, 2015 at 5:53 AM, Wanpeng Li wrote:
> Change halt_poll_ns into a per-VCPU variable, seeded from the module parameter,
> to allow greater flexibility.
You should also change kvm_vcpu_block to read halt_poll_ns from
the vcpu instead of the module parameter.
>
> Signed-off-by: Wanpeng Li
Hi Vikul, welcome! See my comment below...
On Fri, Jun 26, 2015 at 12:57 PM, Vikul Gupta wrote:
> I am a high school student trying to become familiar with the opensource
> process and linux kernel. This is my first submission to the mailing list.
>
> I fixed the slicoss sub-system. The TODO file
On Sat, May 30, 2015 at 3:59 AM, Xiao Guangrong
wrote:
> It walks all MTRRs and gets all the memory cache type setting for the
> specified range also it checks if the range is fully covered by MTRRs
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/mtrr.c | 183
> ++
On Sat, May 30, 2015 at 3:59 AM, Xiao Guangrong
wrote:
> It gets the range for the specified variable MTRR
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/mtrr.c | 19 +--
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kv
1 - 100 of 169 matches