On Tue, Sep 17, 2019 at 1:16 AM Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> Reported by syzkaller:
>
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 403c01067 P4D 403c01067 PUD 0
> Oops: 0002 [#1] SMP PTI
> CP
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> The control bits in IA32_XSS MSR are being used for new features,
> but current CPUID(0xd,i) enumeration code doesn't support them, so
> fix existing code first.
>
> The supervisor states in IA32_XSS haven't been used in public
> KVM code, s
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> CET (Control-flow Enforcement Technology) is an upcoming Intel(R)
> processor feature that blocks Return/Jump-Oriented Programming (ROP/JOP)
> attacks. It provides the following capabilities to defend
> against ROP/JOP style control-flow subversio
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> CET MSRs are passed through to the Guest directly to enhance performance.
> CET runtime control settings are stored in MSR_IA32_{U,S}_CET,
> Shadow Stack Pointers (SSP) are stored in MSR_IA32_PL{0,1,2,3}_SSP,
> SSP table base address is stored in MSR_IA32_
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> "Load Guest CET state" bit controls whether Guest CET states
> will be loaded at Guest entry. Before doing that, KVM needs
> to check whether the CPU CET feature is enabled on the host and
> available to the Guest.
>
> Note: SHSTK and IBT features share one
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> CR4.CET (bit 23) is the master enable bit for the CET feature.
> Previously, KVM did not support setting any bits in XSS,
> so it was hardcoded to check and inject a #GP if the Guest
> attempted to write a non-zero value to XSS; now it supports
> CET relate
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> From: Sean Christopherson
>
> A handful of CET MSRs are not context switched through "traditional"
> methods, e.g. VMCS or manual switching, but rather are passed through
> to the guest and are saved and restored by XSAVES/XRSTORS, i.e. the
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> There are two different places storing Guest CET states: the states
> managed with XSAVES/XRSTORS, as restored/saved
> in the previous patch, can be read/written directly from/to the MSRs.
> For those stored in VMCS fields, they're accessed via vmcs_
On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang wrote:
>
> Control-flow Enforcement Technology (CET) provides protection against
> Return/Jump-Oriented Programming (ROP/JOP) attacks. It includes two
> sub-features: Shadow Stack (SHSTK) and Indirect Branch Tracking (IBT).
>
> KVM modification is requi
On Thu, Oct 3, 2019 at 5:59 AM Yang Weijiang wrote:
>
> On Wed, Oct 02, 2019 at 03:40:20PM -0700, Jim Mattson wrote:
> > On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang
> > wrote:
> > >
> > > Control-flow Enforcement Technology (CET) provides protectio
*within* the reserved range.
> filtering of msrs_to_save array and would be rejected by KVM_GET/SET_MSR.
> To avoid this, cut the list to whatever CPUID reports for the host's
> architectural PMU.
>
> Reported-by: Vitaly Kuznetsov
> Suggested-by: Vitaly Kuznetsov
> Cc: Ji
On Thu, Oct 3, 2019 at 10:38 AM Paolo Bonzini wrote:
>
> On 03/10/19 19:20, Jim Mattson wrote:
> > You've truncated the list I originally provided, so I think this need
> > only go to MSR_ARCH_PERFMON_PERFCTR0 + 17. Otherwise, we could lose
> > some valuable MSRs.
>
ed
> VM-Enter is a simpler overall implementation.
>
> Cc: sta...@vger.kernel.org
> Reported-and-tested-by: Reto Buerki
> Tested-by: Vitaly Kuznetsov
> Reviewed-by: Liran Alon
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
test.
>
> [*] https://patchwork.kernel.org/patch/11124749/
>
> Reported-by: Nadav Amit
> Cc: sta...@vger.kernel.org
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Mon, Oct 7, 2019 at 11:57 PM Paolo Bonzini wrote:
>
> On 07/10/19 21:56, Sean Christopherson wrote:
> > On Mon, Oct 07, 2019 at 07:12:37PM +0200, Paolo Bonzini wrote:
> >> On 04/10/19 23:56, Sean Christopherson wrote:
> >>> diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
>
On Tue, Oct 8, 2019 at 11:08 AM Vitaly Kuznetsov wrote:
>
> Commit 204c91eff798a ("KVM: selftests: do not blindly clobber registers in
> guest asm") was intended to make the test more gcc-proof; however, the result
> is exactly the opposite: on newer gccs (e.g. 8.2.1) the test breaks with
>
> Te
On Tue, Oct 8, 2019 at 11:41 PM Yang Weijiang wrote:
>
> On Wed, Oct 02, 2019 at 11:54:26AM -0700, Jim Mattson wrote:
> > On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang
> > wrote:
> > > + if (cet_on)
> > > +
ASID value. Use a mutex to
> serialize access to the SEV ASID bitmap.
>
> Fixes: 1654efcbc431 ("KVM: SVM: Add KVM_SEV_INIT command")
> Tested-by: David Rientjes
> Signed-off-by: Tom Lendacky
Reviewed-by: Jim Mattson
On Tue, Sep 17, 2019 at 1:52 AM Yang Weijiang wrote:
>
> Check the SPP capability in MSR_IA32_VMX_PROCBASED_CTLS2; its bit 23
> indicates SPP capability. Enable the SPP feature bit in the CPU capabilities
> bitmap if it's supported.
>
> Co-developed-by: He Chen
> Signed-off-by: He Chen
> Co-developed-by: Zh
opherson
Reviewed-by: Jim Mattson
including the case
> where IA32_FEATURE_CONTROL isn't supported.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
CPU's 64-bit mode is determined
> directly from EFER_LMA and the VMCS checks are based on that, which
> matches section 26.2.4 of the SDM.
>
> Cc: Sean Christopherson
> Cc: Jim Mattson
> Cc: Krish Sadhukhan
> Fixes: 5845038c111db27902bc220a4f70070fe945871c
> Signed-of
On Wed, Sep 25, 2019 at 2:37 PM Sebastian Andrzej Siewior
wrote:
>
> I was surprised to see that the guest reported `fxsave_leak' while the
> host did not. After digging deeper I noticed that the bits are simply
> masked out during enumeration.
> The XSAVEERPTR feature is actually a bug fix on AMD
On Wed, Sep 25, 2019 at 2:37 PM Sebastian Andrzej Siewior
wrote:
>
> In commit
>55412b2eda2b7 ("kvm: x86: Add kvm_x86_ops hook that enables XSAVES for
> guest")
>
> XSAVES was enabled on VMX with a few additional tweaks and was always
> disabled on SVM. Before Zen, XSAVES was not available, so
On Thu, Sep 26, 2019 at 2:43 PM Sean Christopherson
wrote:
>
> Write the desired L2 CR3 into vmcs02.GUEST_CR3 during nested VM-Enter
> instead of deferring the VMWRITE until vmx_set_cr3(). If the VMWRITE
> is deferred, then KVM can consume a stale vmcs02.GUEST_CR3 when it
> refreshes vmcs12->gues
's not obvious that '194' here is the failed MSR index and that
> it's printed in hex. Change that.
>
> Suggested-by: Sean Christopherson
> Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Jim Mattson
On Mon, Aug 19, 2019 at 8:18 AM Paolo Bonzini wrote:
>
> On 16/08/19 23:45, Jim Mattson wrote:
> > On Thu, Aug 15, 2019 at 12:41 AM Paolo Bonzini wrote:
> >>
> >> The AMD_* bits have to be set from the vendor-independent
> >> feature and bug flags, becaus
e index from the provided ECX
value), so I'd suggest s/rdpmc_idx/rdpmc_ecx/g.
Reviewed-by: Jim Mattson
On Tue, Oct 8, 2019 at 11:13 PM Yang Weijiang wrote:
>
> On Wed, Oct 02, 2019 at 11:18:32AM -0700, Jim Mattson wrote:
> > On Thu, Sep 26, 2019 at 7:17 PM Yang Weijiang
> > wrote:
> > >
> > > CET MSRs pass through Guest directly to enhance performance.
>
On Tue, Sep 17, 2019 at 1:52 AM Yang Weijiang wrote:
>
> EPT-Based Sub-Page write Protection (SPP) is a HW capability which allows
> Virtual Machine Monitor (VMM) to specify write-permission for guest
> physical memory at a sub-page (128-byte) granularity. When this
> capability is enabled, the CPU en
On Wed, Oct 9, 2019 at 6:28 PM Yang Weijiang wrote:
>
> On Wed, Oct 09, 2019 at 04:08:50PM -0700, Jim Mattson wrote:
> > On Tue, Oct 8, 2019 at 11:41 PM Yang Weijiang
> > wrote:
> > >
> > > On Wed, Oct 02, 2019 at 11:54:26AM -0700, Jim Mattson wrote:
> >
On Fri, Oct 11, 2019 at 12:48 AM Yang Weijiang wrote:
>
> On Thu, Oct 10, 2019 at 02:42:51PM -0700, Jim Mattson wrote:
> > On Tue, Sep 17, 2019 at 1:52 AM Yang Weijiang
> > wrote:
> > >
> > > EPT-Based Sub-Page write Protection (SPP) is a HW capability which
On Tue, Sep 17, 2019 at 1:52 AM Yang Weijiang wrote:
>
> Co-developed-by: yi.z.zh...@linux.intel.com
> Signed-off-by: yi.z.zh...@linux.intel.com
> Signed-off-by: Yang Weijiang
> ---
> Documentation/virtual/kvm/spp_kvm.txt | 178 ++
> 1 file changed, 178 insertions(+)
> c
a local
> variable.
>
> Opportunistically rename the variables in load_vmcs12_host_state() and
> vmx_set_nested_state() to call out that they're ignored, set exit_reason
> on demand on nested VM-Enter failure, and add a comment in
> nested_vmx_load_msr() to call out that returning 'i + 1' can't wrap.
>
> No functional change intended.
>
> Reported-by: Vitaly Kuznetsov
> Cc: Jim Mattson
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
MI should take priority over VM-Exit on NMI/INTR, and NMI that is
> injected into L2 should take priority over VM-Exit INTR. This will also
> be addressed in a future patch.
>
> Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
> Reported-by: Jim Mattso
mption timer left pending. Because
> no window opened, L2 is free to run uninterrupted.
>
> Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
> Reported-by: Jim Mattson
> Cc: Oliver Upton
> Cc: Peter Shier
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
;KVM: nVMX: Fully emulate preemption timer")
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
wrote:
>
> Expose nested_exit_on_nmi() for use by vmx_nmi_allowed() in a future
> patch.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
wrote:
>
> Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
> Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
> in this case.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
"KVM: nVMX: Rework interception of IRQs and NMIs")
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
wrote:
>
> Check for an unblocked SMI in vmx_check_nested_events() so that pending
> SMIs are correctly prioritized over IRQs and NMIs when the latter events
> will trigger VM-Exit. This also fixes an issue where an SMI that was
> marked pending
t; the conflict. Bailing early isn't problematic (quite the opposite), but
> suppressing the WARN is undesirable as it could mask a bug elsewhere in
> KVM.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
in the VM-Exit path.
>
> Hoist the WARN in handle_invalid_guest_state() up to vmx_handle_exit()
> to enforce the above assumption for the !enable_vnmi case, and to detect
> any other potential bugs with nested VM-Enter.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
> - false);
> - kvm_x86_ops.set_irq(vcpu);
> - }
> + } else if (kvm_cpu_has_injectable_intr(vcpu) &&
> + kvm_x86_ops.interrupt_injection_allowed(vcpu)) {
> + kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
> + kvm_x86_ops.set_irq(vcpu);
> }
So, that's what this mess was all about! Well, this certainly looks better.
Reviewed-by: Jim Mattson
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Tue, Apr 28, 2020 at 3:59 PM Sean Christopherson
wrote:
>
> On Tue, Apr 28, 2020 at 03:04:02PM -0700, Jim Mattson wrote:
> > On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
> > wrote:
> > >
> > > Check for an unblocked SMI in vmx_check_nested_eve
On Tue, Apr 28, 2020 at 3:59 PM Sean Christopherson
wrote:
>
> On Tue, Apr 28, 2020 at 03:04:02PM -0700, Jim Mattson wrote:
> > On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
> > wrote:
> > >
> > > Check for an unblocked SMI in vmx_check_nested_eve
On Tue, Apr 28, 2020 at 4:10 PM Sean Christopherson
wrote:
>
> Patch 1 is a "fix" for handling SYSENTER_EIP/ESP in L2 on a 32-bit vCPU.
> The primary motivation is to provide consistent behavior after patch 2.
>
> Patch 2 is essentially a re-submission of a nested VMX optimization to
> avoid redun
On Thu, Aug 3, 2017 at 6:23 PM Wanpeng Li wrote:
> Thanks Radim. :) In addition, I will think more about it and figure
> out a final solution.
Have you had any thoughts on a final solution? We're seeing incorrect
behavior with an L1 hypervisor running under qemu with "-machine
q35,kernel-irqchi
y: 0x6,
> model: 0xf, stepping: 0x6") don't have it. Add the missing check.
>
> Reported-by: Zdenek Kaspar
> Tested-by: Zdenek Kaspar
> Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Jim Mattson
On Wed, Jan 23, 2019 at 6:06 AM Yang Weijiang wrote:
> Note: Although these VMCS fields are 64-bit, they don't have high fields.
This statement directly contradicts the SDM, volume 3, appendix B.2:
"A value of 1 in bits 14:13 of an encoding indicates a 64-bit field.
There are 64-bit fields only
On Tue, Jan 29, 2019 at 9:47 AM Jim Mattson wrote:
>
> On Wed, Jan 23, 2019 at 6:06 AM Yang Weijiang wrote:
> > Note: Although these VMCS fields are 64-bit, they don't have high fields.
>
> This statement directly contradicts the SDM, volume 3, appendix B.2:
>
> "
proposes changing to
> kzalloc for defense in depth.
>
> Tested: rebuilt but not tested, since this is an RFC
>
> Reported-by: syzbot+ded1696f6b50b615b...@syzkaller.appspotmail.com
> Signed-off-by: Tom Roeder
Reviewed-by: Jim Mattson
On Mon, Dec 4, 2017 at 5:07 PM Brijesh Singh wrote:
>
> On AMD platforms, under certain conditions insn_len may be zero on #NPF.
> This can happen if a guest gets a page-fault on data access but the HW
> table walker is not able to read the instruction page (e.g instruction
> page is not present i
at soon. However, he has
> > not been very much involved in upstream KVM development for some time,
> > and in the immediate future he is still going to help maintain kvm/queue
> > while I am on vacation. Since not much is going to change, I will let
> > him decide whether he
On Wed, Jul 31, 2019 at 9:37 AM Paolo Bonzini wrote:
>
> On 31/07/19 15:50, Vitaly Kuznetsov wrote:
> > Jim Mattson writes:
> >
> >> On Thu, Jun 20, 2019 at 4:02 AM Vitaly Kuznetsov
> >> wrote:
> >>>
> >>> Regardless of the way
On Wed, Jul 31, 2019 at 4:37 PM Sean Christopherson
wrote:
> At a glance, the full emulator models behavior correctly, e.g. see
> toggle_interruptibility() and setters of ctxt->interruptibility.
>
> I'm pretty sure that leaves the EPT misconfig MMIO and APIC access EOI
> fast paths as the only (V
On Wed, Jul 31, 2019 at 5:13 PM Paolo Bonzini wrote:
>
> On 01/08/19 01:56, Sean Christopherson wrote:
> > On Wed, Jul 31, 2019 at 04:45:21PM -0700, Jim Mattson wrote:
> >> On Wed, Jul 31, 2019 at 4:37 PM Sean Christopherson
> >> wrote:
> >>
> >>
On Mon, Jan 7, 2019 at 11:48 PM Wei Wang wrote:
>
> On 01/08/2019 02:48 AM, Jim Mattson wrote:
> > On Mon, Jan 7, 2019 at 10:20 AM Andi Kleen wrote:
> >>> The issue is compatibility. Prior to your change, reading this MSR
> >>> from a VM would raise #GP. Aft
On Tue, Feb 12, 2019 at 6:16 AM Paolo Bonzini wrote:
>
> On 07/02/19 22:17, Jim Mattson wrote:
> >> SDM says MSR_IA32_VMX_PROCBASED_CTLS2 is only available "If
> >> (CPUID.01H:ECX.[5] && IA32_VMX_PROCBASED_CTLS[63])". It was found that
> >>
On Mon, Mar 25, 2019 at 10:17 AM Borislav Petkov wrote:
>
> From: Borislav Petkov
>
> This is an AMD-specific MSR. Put it where it belongs.
>
> Signed-off-by: Borislav Petkov
> Tested-by: Yazen Ghannam
> ---
> arch/x86/kvm/svm.c | 14 ++
> arch/x86/kvm/x86.c | 12
> 2
On Thu, Sep 13, 2018 at 10:05 AM, Vitaly Kuznetsov wrote:
> It is perfectly valid for a guest to do VMXON and not do VMPTRLD. This
> state needs to be preserved on migration.
>
> Signed-off-by: Vitaly Kuznetsov
> ---
> arch/x86/kvm/vmx.c | 15 ---
> 1 file changed, 8 insertions(+), 7
On Fri, Sep 14, 2018 at 12:49 AM, Vitaly Kuznetsov wrote:
> Jim Mattson writes:
>
>> On Thu, Sep 13, 2018 at 10:05 AM, Vitaly Kuznetsov
>> wrote:
>>> It is perfectly valid for a guest to do VMXON and not do VMPTRLD. This
>>> state needs to be preserved
On Fri, Apr 13, 2018 at 4:23 AM, Paolo Bonzini wrote:
> From: KarimAllah Ahmed
>
> Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
> captures the TSC_OFFSET of the running guest whether it is the L1 or L2
> guest.
>
> Cc: Jim Mattson
&
On Mon, Apr 16, 2018 at 4:04 AM, Paolo Bonzini wrote:
> On 14/04/2018 05:10, KarimAllah Ahmed wrote:
>> Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
>> captures the TSC_OFFSET of the running guest whether it is the L1 or L2
>> gues
On Thu, Apr 12, 2018 at 8:12 AM, KarimAllah Ahmed wrote:
> v2 -> v3:
> - Remove the forced VMExit from L2 after reading the kvm_state. The actual
> problem is solved.
> - Rebase again!
> - Set nested_run_pending during restore (not sure if it makes sense yet or
> not).
This doesn't actually
32'.
>>
>> That gives us a bit more room again for arch-specific requests as we
>> already ran out of space for x86 due to the hard-coded check.
>>
>> Cc: Paolo Bonzini
>> Cc: Radim Krčmář
>> Cc: k...@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: KarimAllah Ahmed
I like it!
Reviewed-by: Jim Mattson
Moreover, if the VMLAUNCH/VMRESUME is in 32-bit PAE mode, then the
PDPTRs should not be reloaded.
On Mon, Feb 5, 2018 at 10:44 AM, Jim Mattson wrote:
> I realize now that this fix isn't quite right, since it loads
> vmcs12->host_cr3 rather than reverting to the CR3 that was loaded
Should the subject read: "KVM: x86: restore CS after all far jump failures"?
On Tue, Nov 22, 2016 at 11:21 AM, Radim Krčmář wrote:
> em_jmp_far and em_ret_far assumed that setting IP can only fail in 64-bit
> mode, but syzkaller proved otherwise (and the SDM agrees).
> Code segment was restored upon
Is cpu_has_vmx_invvpid() sufficient? This indicates support for the
INVVPID instruction, but not necessarily any of the desired INVVPID
types. KVM's vpid_sync_context() assumes that at least one of
{VMX_VPID_EXTENT_SINGLE_CONTEXT, VMX_VPID_EXTENT_ALL_CONTEXT} is
supported.
On Wed, Mar 22, 2017 at
>
> So we should check both VPID enable bit in vmx exec control and INVVPID
> support bit
> in vmx capability MSRs to enable VPID. This patch adds the guarantee to not
> enable
> VPID if either INVVPID or single-context/all-context invalidation is not
> exposed in
> vmx capabi
I believe this behavior would be documented in the chipset data sheet
rather than the SDM, since the chipset returns all 1s for an unclaimed
read.
Reviewed-by: Jim Mattson
On Tue, Mar 7, 2017 at 8:51 AM, Radim Krčmář wrote:
> Before trying to do nested_get_page() in nested_vmx_merge_msr_bit
On Thu, Mar 9, 2017 at 2:29 PM, Michael S. Tsirkin wrote:
> Some guests call mwait without checking the cpu flags. We currently
"Some guests"? What guests other than Mac OS X are so ill-behaved?
> emulate that as a NOP but on VMX we can do better: let guest stop the
> CPU until timer or IPI. C
I would expect that any reasonable CPU will support "RDRAND exiting"
iff it supports RDRAND (i.e. CPUID.01H:ECX.RDRAND[bit 30]).
Similarly, any reasonable CPU will support "RDSEED exiting" iff it
supports RDSEED (i.e. CPUID.(EAX=07H, ECX=0H):EBX.RDSEED[bit 18]).
Shouldn't there be some code in vmx
, u32 exit_reason,
>> */
>> static void vmx_leave_nested(struct kvm_vcpu *vcpu)
>> {
>> - if (is_guest_mode(vcpu))
>> + if (is_guest_mode(vcpu)) {
>> + to_vmx(vcpu)->nested.nested_run_pending = 0;
>> nested_vmx_vmexit(vcpu, -1, 0, 0);
>> + }
>> free_nested(to_vmx(vcpu));
>> }
>>
>>
>
> Reviewed-by: David Hildenbrand
>
> --
> Thanks,
>
> David
This seems reasonable to me, and it should fix the issue exposed by
syzkaller--though I was never able to reproduce it.
Reviewed-by: Jim Mattson
I believe this happens when the VMCS12 MSR bitmap address is valid,
but no device is configured to respond to the bus request. I agree
that the warning should be removed. However, in this case, the VMCS12
MSR bitmap should read as all 1s. The same is true everywhere that
nested_get_page returns NUL
till there and uses it to compute 'EAX' for 'cpuid'.
That smells like VMware's hypercall madness!
> smm_test can't fully use standard ucall() framework as we need to
> write a very simple SMI handler there. Fix the immediate issue by
> making RAX input/output o
On Thu, Jun 11, 2020 at 2:48 PM Babu Moger wrote:
>
> The following intercept is added for INVPCID instruction:
> Code    Name              Cause
> A2h     VMEXIT_INVPCID    INVPCID instruction
>
> The following bit is added to the VMCB layout control area
> to control intercept of INVPCID:
> Byte Off
On Thu, Jun 11, 2020 at 2:48 PM Babu Moger wrote:
>
> INVPCID instruction handling is mostly the same across both VMX and
> SVM. So, move the code to common x86.c.
>
> Signed-off-by: Babu Moger
> ---
> arch/x86/kvm/vmx/vmx.c | 78 +-
> arch/x86/kvm/x86.c
On Fri, Jun 12, 2020 at 12:35 PM Babu Moger wrote:
>
>
>
> > -----Original Message-----
> > From: Jim Mattson
> > Sent: Thursday, June 11, 2020 6:51 PM
> > To: Moger, Babu
> > Cc: Wanpeng Li ; Joerg Roedel ;
> > the arch/x86 maintainers ; Sean Ch
On Fri, Jun 12, 2020 at 2:47 PM Babu Moger wrote:
>
>
>
> On 6/12/20 3:10 PM, Jim Mattson wrote:
> > On Fri, Jun 12, 2020 at 12:35 PM Babu Moger wrote:
> >>
> >>
> >>
> >>> -----Original Message-----
> >>> From: Jim Matts
; what is now known as 'flags'.
>
> Suggested-by: Sean Christopherson
> Fixes: 850448f35aaf ("KVM: nVMX: Fix VMX preemption timer migration")
> Fixes: 83d31e5271ac ("KVM: nVMX: fixes for preemption timer migration")
> Signed-off-by: Vitaly Kuznetsov
Oops!
Reviewed-by: Jim Mattson
On Wed, Aug 12, 2020 at 10:51 AM Sean Christopherson
wrote:
>
> On successful nested VM-Enter, check for pending interrupts and convert
> the highest priority interrupt to a pending posted interrupt if it
> matches L2's notification vector. If the vCPU receives a notification
> interrupt before n
: "respectively"
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
On Wed, Aug 26, 2020 at 12:14 PM Babu Moger wrote:
>
> Change intercept_cr to generic intercepts in vmcb_control_area.
> Use the new vmcb_set_intercept, vmcb_clr_intercept and vmcb_is_intercept
> where applicable.
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
On Wed, Aug 26, 2020 at 12:14 PM Babu Moger wrote:
>
> Modify intercept_dr to generic intercepts in vmcb_control_area. Use
> the generic vmcb_set_intercept, vmcb_clr_intercept and vmcb_is_intercept
> to set/clear/test the intercept_dr bits.
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
> Signed-off-by: Babu Moger
> Reviewed-by: Jim Mattson
> ---
> @@ -835,7 +832,7 @@ static bool nested_exit_on_exception(struct vcpu_svm *svm)
> {
> unsigned int nr = svm->vcpu.arch.exception.nr;
>
> - return (svm->nested.ctl.intercept_exceptions & (1 <
On Wed, Aug 26, 2020 at 12:15 PM Babu Moger wrote:
>
> Handling of kvm_read/write_guest_virt*() errors can be moved to common
> code. The same code can be used by both VMX and SVM.
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
On Thu, Sep 10, 2020 at 2:51 AM Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> According to SDM 27.2.4, Event delivery causes an APIC-access VM exit.
> Don't report an internal error and freeze the guest when event delivery
> causes an APIC-access exit; it is handleable and the event will be re-injected
> d
On Fri, Jul 10, 2020 at 8:48 AM Mohammed Gamal wrote:
>
> Check the guest physical address against its maximum physical memory. If
> the guest's physical address exceeds the maximum (i.e. has reserved bits
> set), inject a guest page fault with PFERR_RSVD_MASK set.
>
> This has to be done both in the
on Lewis
> Signed-off-by: Alexander Graf
Reviewed-by: Jim Mattson
On Fri, May 8, 2020 at 2:10 PM Babu Moger wrote:
>
> PKU feature is supported by both VMX and SVM. So we can
> safely move pkru state save/restore to common code.
> Also move all the pkru data structure to kvm_vcpu_arch.
>
> Signed-off-by: Babu Moger
> ---
> arch/x86/include/asm/kvm_host.h |
On Tue, May 5, 2020 at 2:18 AM Emanuele Giuseppe Esposito
wrote:
>
>
>
> On 5/4/20 11:37 PM, David Rientjes wrote:
> > Since this is becoming a generic API (good!!), maybe we can discuss
> > possible ways to optimize gathering of stats in mass?
>
> Sure, the idea of a binary format was considered
On Thu, Jun 4, 2020 at 9:20 AM Paolo Bonzini wrote:
>
> On 04/06/20 17:16, Sean Christopherson wrote:
> > On Thu, Jun 04, 2020 at 09:37:59AM +0800, Xu, Like wrote:
> >> On 2020/6/4 4:33, Sean Christopherson wrote:
> >>> Unconditionally return true when querying the validity of
> >>> MSR_IA32_PERF_
On Thu, Jun 4, 2020 at 9:43 AM Vitaly Kuznetsov wrote:
>
> Sean Christopherson writes:
>
> > On Thu, Jun 04, 2020 at 05:33:25PM +0200, Vitaly Kuznetsov wrote:
> >> Sean Christopherson writes:
> >>
> >> > On Thu, Jun 04, 2020 at 04:40:52PM +0200, Paolo Bonzini wrote:
> >> >> On 04/06/20 16:31, Vi
On Thu, Jun 4, 2020 at 12:09 PM Nakajima, Jun wrote:
> We (Intel virtualization team) are also working on a similar thing,
> prototyping to meet such requirements, i.e. "some level of confidentiality to
> guests". Linux/KVM is the host, and Kirill's patches are helpful when
> removing the
On Thu, Jun 4, 2020 at 11:35 PM Paolo Bonzini wrote:
>
> On 05/06/20 07:00, Xiaoyao Li wrote:
> > you could do
> >
> > bool guest_cpuid_aperfmperf = false;
> > if (best)
> > guest_cpuid_aperfmperf = !!(best->ecx & BIT(0));
> >
> > if (guest_cpuid_aperfmerf != guest_has_aperfmpe