On Mon, Jul 13, 2020 at 11:23 AM Jim Mattson wrote:
>
> On Mon, Jul 13, 2020 at 9:22 AM Vitaly Kuznetsov wrote:
> >
> > Before commit 850448f35aaf ("KVM: nVMX: Fix VMX preemption timer
> > migration") struct kvm_vmx_nested_state_hdr looked like:
>
On Fri, Aug 28, 2020 at 1:54 AM Chenyi Qiang wrote:
>
> KVM supports the nested VM_{EXIT, ENTRY}_LOAD_IA32_PERF_GLOBAL_CTRL and
> VM_{ENTRY_LOAD, EXIT_CLEAR}_BNDCFGS, but they are not exposed during
> the setup of the nested VMX controls MSRs.
>
Aren't these features added conditionally in
nested_vmx_e
On Fri, Aug 28, 2020 at 1:54 AM Chenyi Qiang wrote:
>
> When setting the nested VMX MSRs, verify them against the values in
> vmcs_config.nested_vmx_msrs, which reflects the global capability of
> VMX controls MSRs.
>
> Signed-off-by: Chenyi Qiang
You seem to have entirely missed the point of this co
by: Chenyi Qiang
> Reviewed-by: Xiaoyao Li
Reviewed-by: Jim Mattson
On Fri, Aug 28, 2020 at 1:54 AM Chenyi Qiang wrote:
>
> Update the fields (i.e. VM_{ENTRY_LOAD, EXIT_CLEAR}_BNDCFGS and
> VM_{ENTRY, EXIT}_LOAD_IA32_PERF_GLOBAL_CTRL) in
> nested MSR_IA32_VMX_TRUE_{ENTRY, EXIT}_CTLS according to guest CPUID
> when user space initializes the features MSRs. Regardle
On Wed, Sep 2, 2020 at 11:16 AM Sean Christopherson
wrote:
>
> On Fri, Aug 28, 2020 at 01:39:39PM -0700, Jim Mattson wrote:
> > On Fri, Aug 28, 2020 at 1:54 AM Chenyi Qiang wrote:
> > >
> > > Update the fields (i.e. VM_{ENTRY_LOAD, EXIT_CLEAR}_BND
On Thu, Sep 3, 2020 at 7:12 AM Mohammed Gamal wrote:
>
> This patch exposes allow_smaller_maxphyaddr to the user as a module parameter.
>
> Since smaller physical address spaces are only supported on VMX, the parameter
> is only exposed in the kvm_intel module.
> Modifications to VMX page fault an
On Thu, Sep 3, 2020 at 11:03 AM Paolo Bonzini wrote:
>
> On 03/09/20 19:57, Jim Mattson wrote:
> > On Thu, Sep 3, 2020 at 7:12 AM Mohammed Gamal wrote:
> >> This patch exposes allow_smaller_maxphyaddr to the user as a module
> >> parameter.
> >>
On Thu, Sep 3, 2020 at 1:02 PM Paolo Bonzini wrote:
>
> On 03/09/20 20:32, Jim Mattson wrote:
> >> [Checking writes to CR3] would be way too slow. Even the current
> >> trapping of present #PF can introduce some slowdown depending on the
> >> workload.
On Fri, Aug 21, 2020 at 8:40 PM Sean Christopherson
wrote:
>
> On Thu, Aug 20, 2020 at 01:08:22PM -0700, Jim Mattson wrote:
> > On Wed, Apr 1, 2020 at 1:13 AM Vitaly Kuznetsov wrote:
> > > ---
> > > arch/x86/kvm/vmx/vmx.c | 12 +++-
> > > 1 fil
On Mon, Aug 24, 2020 at 11:57 AM Jim Mattson wrote:
>
> On Fri, Aug 21, 2020 at 8:40 PM Sean Christopherson
> wrote:
> >
> > On Thu, Aug 20, 2020 at 01:08:22PM -0700, Jim Mattson wrote:
> > > On Wed, Apr 1, 2020 at 1:13 AM Vitaly Kuznetsov
> > > wrote
On Fri, Aug 28, 2020 at 7:51 PM Xiaoyao Li wrote:
>
> On 8/29/2020 9:49 AM, Chenyi Qiang wrote:
> >
> >
> > On 8/29/2020 1:43 AM, Jim Mattson wrote:
> >> On Fri, Aug 28, 2020 at 1:54 AM Chenyi Qiang
> >> wrote:
> >>>
> >>> KVM s
On Tue, Aug 4, 2020 at 11:41 AM Sean Christopherson
wrote:
> Ping. This really needs to be in the initial pull for 5.9, as is kvm/queue
> has a 100% fatality rate for me.
I agree completely, but I am curious what guest you have that toggles
CD/NW in 64-bit mode.
On Wed, Aug 12, 2020 at 10:42 PM Chenyi Qiang wrote:
>
>
>
> On 8/13/2020 5:21 AM, Jim Mattson wrote:
> > On Fri, Aug 7, 2020 at 1:46 AM Chenyi Qiang wrote:
> >>
> >> Protection Keys for Supervisor Pages (PKS) uses IA32_PKRS MSR (PKRS) at
> >> index
On Wed, Aug 12, 2020 at 9:54 PM Chenyi Qiang wrote:
>
>
>
> On 8/11/2020 8:05 AM, Jim Mattson wrote:
> > On Fri, Aug 7, 2020 at 1:47 AM Chenyi Qiang wrote:
> >>
> >> PKS MSR passes through guest directly. Configure the MSR to match the
> >> L0/L1
On Fri, Aug 7, 2020 at 1:47 AM Chenyi Qiang wrote:
>
> Existence of PKS is enumerated via CPUID.(EAX=7H,ECX=0):ECX[31]. It is
> enabled by setting CR4.PKS when long mode is active. PKS is only
> implemented when EPT is enabled and requires the support of VM_{ENTRY,
> EXIT}_LOAD_IA32_PKRS currently
On Fri, Aug 14, 2020 at 3:09 AM Chenyi Qiang wrote:
>
>
>
> On 8/14/2020 1:52 AM, Jim Mattson wrote:
> > On Wed, Aug 12, 2020 at 9:54 PM Chenyi Qiang wrote:
> >>
> >>
> >>
> >> On 8/11/2020 8:05 AM, Jim Mattson wrote:
> >
On Fri, Oct 9, 2020 at 9:17 AM Jim Mattson wrote:
>
> On Fri, Jul 10, 2020 at 8:48 AM Mohammed Gamal wrote:
> >
> > Check guest physical address against its maximum physical memory. If
> > the guest's physical address exceeds the maximum (i.e. has reserved
> > hardware and kvm in its default configuration.
> >
> > A well-behaved userspace should not set the bit if it is not supported.
> >
> > Suggested-by: Jim Mattson
> > Signed-off-by: Wanpeng Li
>
> It's common for userspace to copy all supported CPUID bits to
On Thu, Oct 22, 2020 at 9:37 AM Paolo Bonzini wrote:
>
> On 22/10/20 18:35, Jim Mattson wrote:
> > On Thu, Oct 22, 2020 at 6:02 AM Paolo Bonzini wrote:
> >>
> >> On 22/10/20 03:34, Wanpeng Li wrote:
> >>> From: Wanpeng Li
> >>>
On Fri, Oct 23, 2020 at 2:22 AM Paolo Bonzini wrote:
>
> On 23/10/20 05:14, Sean Christopherson wrote:
> +
> + /*
> +* Check that the GPA doesn't exceed physical memory limits, as
> that is
> +* a guest page fault. We have to emulate the instruction
On Fri, Oct 23, 2020 at 2:07 AM Paolo Bonzini wrote:
>
> On 22/10/20 19:13, Jim Mattson wrote:
> > We don't actually use KVM_GET_SUPPORTED_CPUID at all today. If it's
> > commonly being misinterpreted as you say, perhaps we should add a
> > KVM_GET_TRUE_SUPPORT
On Fri, Oct 23, 2020 at 10:16 AM Paolo Bonzini wrote:
>
> On 23/10/20 18:59, Jim Mattson wrote:
> >> The problem is that page fault error code bits cannot be reconstructed
> >> from bits 0..2 of the EPT violation exit qualification, if bit 8 is
> >> clear in th
On Wed, Jul 8, 2020 at 4:04 AM Paolo Bonzini wrote:
>
> CR4.VMXE is reserved unless the VMX CPUID bit is set. On Intel,
> it is also tested by vmx_set_cr4, but AMD relies on kvm_valid_cr4,
> so fix it.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
On Thu, Jul 9, 2020 at 2:55 AM Paolo Bonzini wrote:
>
> AMD doesn't specify (unlike Intel) that EFER.LME, CR0.PG and
> EFER.LMA must be consistent, and for SMM state restore they say that
> "The EFER.LMA register bit is set to the value obtained by logically
> ANDing the SMRAM values of EFER.LME,
; vmx->nested.preemption_timer_deadline =
> kvm_state->hdr.vmx.preemption_timer_deadline;
> - }
> + } else
> + vmx->nested.has_preemption_timer_deadline = false;
Doesn't the coding standard require braces around the else clause?
Reviewed-by: Jim Mattson
On Thu, Jul 9, 2020 at 10:25 AM Paolo Bonzini wrote:
>
> On 09/07/20 19:12, Jim Mattson wrote:
> >> +
> >> + /* The processor ignores EFER.LMA, but svm_set_efer needs it. */
> >> + efer &= ~EFER_LMA;
> >> +
On Thu, Jul 9, 2020 at 11:31 AM Paolo Bonzini wrote:
>
> On 09/07/20 20:28, Jim Mattson wrote:
> >> That said, the VMCB here is guest memory and it can change under our
> >> feet between nested_vmcb_checks and nested_prepare_vmcb_save. Copying
> >> the wh
t; change; without this patch, instead, CR4 would be checked against the
> previous value for L2 on vmentry, and against the previous value for
> L1 on vmexit, and CPUID would not be updated.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
On Mon, Nov 9, 2020 at 2:09 PM Luck, Tony wrote:
>
> What does KVM do with model specific MSRs?
"Model specific model-specific registers?" :-)
KVM only implements a small subset of MSRs. By default, any access to
the rest raises #GP.
> Looks like you let the guest believe it was running on one
On Mon, Nov 9, 2020 at 2:57 PM Luck, Tony wrote:
>
> > I thought Linux had long ago gone the route of turning rdmsr/wrmsr
> > into rdmsr_safe/wrmsr_safe, so that the guest would ignore the #GPs on
> > writes and return zero to the caller for #GPs on reads.
>
> Linux just switched that around for t
On Sun, Nov 1, 2020 at 10:14 PM Tao Xu wrote:
>
> There are some cases that malicious virtual machines can cause CPU stuck
> (event windows don't open up), e.g., infinite loop in microcode when
> nested #AC (CVE-2015-5307). No event window obviously means no events,
> e.g. NMIs, SMIs, and IRQs wil
; already enabled. Trying to shave a few cycles to make the PDPTR path an
> "else if" case is a mess.
>
> Fixes: d42e3fae6faed ("kvm: x86: Read PDPTEs on CR0.CD and CR0.NW changes")
> Cc: Jim Mattson
> Cc: Oliver Upton
> Cc: Peter Shier
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Tue, Jul 14, 2020 at 11:59 AM Sean Christopherson
wrote:
>
> On Tue, Jul 14, 2020 at 11:55:45AM -0700, Jim Mattson wrote:
> > On Mon, Jul 13, 2020 at 6:57 PM Sean Christopherson
> > wrote:
> > >
> > > Don't attempt to load PDPTRs if EFER.LME=1, i.e. i
On Tue, Jul 28, 2020 at 5:41 AM Alexander Graf wrote:
>
>
>
> On 28.07.20 10:15, Vitaly Kuznetsov wrote:
> >
> > Alexander Graf writes:
> >
> >> MSRs are weird. Some of them are normal control registers, such as EFER.
> >> Some however are registers that really are model specific, not very
> >> i
ed-off-by: Babu Moger
Sean will probably complain about introducing unused functions, but...
Reviewed-by: Jim Mattson
On Tue, Jul 28, 2020 at 4:38 PM Babu Moger wrote:
>
> Change intercept_cr to generic intercepts in vmcb_control_area.
> Use the new __set_intercept, __clr_intercept and __is_intercept
> where applicable.
>
> Signed-off-by: Babu Moger
> ---
> arch/x86/include/asm/svm.h | 42
On Tue, Jul 28, 2020 at 4:38 PM Babu Moger wrote:
>
> Modify intercept_dr to generic intercepts in vmcb_control_area.
> Use generic __set_intercept, __clr_intercept and __is_intercept
> to set/clear/test the intercept_dr bits.
>
> Signed-off-by: Babu Moger
> ---
> arch/x86/include/asm/svm.h |
On Tue, Jul 28, 2020 at 4:38 PM Babu Moger wrote:
>
> Remove set_cr_intercept, clr_cr_intercept and is_cr_intercept. Instead
> call generic set_intercept, clr_intercept and is_intercept for all
> cr intercepts.
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
On Tue, Jul 28, 2020 at 4:38 PM Babu Moger wrote:
>
> host_intercept_exceptions is not used anywhere. Clean it up.
>
> Signed-off-by: Babu Moger
Reviewed-by: Jim Mattson
On Tue, Jul 28, 2020 at 4:37 PM Babu Moger wrote:
>
> The following series adds the support for PCID/INVPCID on AMD guests.
> While doing it re-structured the vmcb_control_area data structure to
> combine all the intercept vectors into one 32 bit array. Makes it easy
> for future additions.
>
> IN
unction.
>
> While at it, remove an unnecessary assignment in the SVM version,
> which is already done in the caller (kvm_arch_vcpu_ioctl_set_guest_debug)
> and has nothing to do with the exception bitmap.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
On Fri, Jul 10, 2020 at 8:48 AM Mohammed Gamal wrote:
>
> From: Paolo Bonzini
>
> Allow vendor code to observe changes to MAXPHYADDR and start/stop
> intercepting page faults.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
On Fri, Jul 10, 2020 at 8:48 AM Mohammed Gamal wrote:
>
> When EPT is enabled, KVM does not really look at guest physical
> address size. Address bits above maximum physical memory size are reserved.
> Because KVM does not look at these guest physical addresses, it currently
> effectively supports
On Fri, Jul 10, 2020 at 10:06 AM Paolo Bonzini wrote:
>
> On 10/07/20 18:30, Jim Mattson wrote:
> >>
> >> This can be problem when having a mixed setup of machines with 5-level page
> >> tables and machines with 4-level page tables, as live migration can chang
On Fri, Jul 10, 2020 at 10:16 AM Paolo Bonzini wrote:
>
> On 10/07/20 19:13, Jim Mattson wrote:
> > On Fri, Jul 10, 2020 at 10:06 AM Paolo Bonzini wrote:
> >>
> >> On 10/07/20 18:30, Jim Mattson wrote:
> >>>>
> >>>> This can b
On Thu, Sep 24, 2020 at 11:42 AM Tom Lendacky wrote:
>
> From: Tom Lendacky
>
> This series updates the INVD intercept support for both SVM and VMX to
> skip the instruction rather than emulating it, since emulation of this
> instruction is just a NOP.
Isn't INVD a serializing instruction, where
s that triggers PDPTR loads also being used to trigger MMU context
> resets.
>
> Fixes: 427890aff855 ("kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE
> mode")
> Fixes: cb957adb4ea4 ("kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE
> mode")
>
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Wed, Sep 23, 2020 at 9:51 AM Sean Christopherson
wrote:
>
> If PCID is not exposed to the guest, clear INVPCID in the guest's CPUID
> even if the VMCS INVPCID enable is not supported. This will allow
> consolidating the secondary execution control adjustment code without
> having to special ca
ad of binning it into vmx_complete_atomic_exit().
> Doing so allows vmx_vcpu_run() to handle VMX_EXIT_REASONS_FAILED_VMENTRY
> in a sane fashion and also simplifies vmx_complete_atomic_exit() since
> VMCS.VM_EXIT_INTR_INFO is guaranteed to be fresh.
>
> Fixes: b060ca3b2e9e7 ("kvm: vmx: Handle V
pteron G3s had it already) and the change should have zero effect.
>
> Remove manual svm->next_rip setting with hard-coded instruction lengths. The
> only case where we now use svm->next_rip is EXIT_IOIO: the instruction
> length is provided to us by hardware.
>
> Repor
MOV-to-SS/STI. Is that enforced anywhere?
Reviewed-by: Jim Mattson
in the same behavior.
>
> As we're not supposed to see these messages under normal conditions, switch
> to pr_err_once().
>
> Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Jim Mattson
On Thu, Jun 20, 2019 at 4:02 AM Vitaly Kuznetsov wrote:
>
> svm->next_rip is only used by skip_emulated_instruction() and in case
> kvm_set_msr() fails we rightfully don't do that. Move svm->next_rip
> advancement to 'else' branch to avoid creating false impression that
> it's always advanced.
>
>
On Wed, Jun 17, 2020 at 4:38 AM Vitaly Kuznetsov wrote:
> Side note: MSR_IA32_PERF_CAPABILITIES can be returned by both
> KVM_GET_MSR_INDEX_LIST and KVM_GET_MSR_FEATURE_INDEX_LIST as we have it
> both as an emulated MSR filtered by kvm_x86_ops.has_emulated_msr() and
> a feature msr filtered by kv
kru;
> > -
> > unsigned long host_debugctlmsr;
> >
> > /*
>
> (Is there a better [automated] way to figure out whether the particular
> field is being used or not than just dropping it and trying to compile
> the whole thing? Leaving #define-s, configs,... aside ...)
>
> Reviewed-by: Vitaly Kuznetsov
Reviewed-by: Jim Mattson
On Wed, Jun 17, 2020 at 11:11 AM Babu Moger wrote:
>
> Jim,
>
> > -Original Message-
> > From: kvm-ow...@vger.kernel.org On Behalf
> > Of Babu Moger
> > Sent: Wednesday, June 17, 2020 9:31 AM
> > To: Jim Mattson
> > Cc: Wanpeng Li ; Joerg
te struct")
> Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Mon, May 11, 2020 at 4:33 PM Babu Moger wrote:
>
> Both Intel and AMD support (MPK) Memory Protection Key feature.
> Move the feature detection from VMX to the common code. It should
> work for both the platforms now.
>
> Signed-off-by: Babu Moger
> ---
> arch/x86/kvm/cpuid.c |4 +++-
>
On Mon, May 11, 2020 at 4:33 PM Babu Moger wrote:
>
> MPK feature is supported by both VMX and SVM. So we can
> safely move pkru state save/restore to common code. Also
> move all the pkru data structure to kvm_vcpu_arch.
>
> Also fixes the problem Jim Mattson pointed a
On Tue, May 12, 2020 at 8:12 AM Babu Moger wrote:
>
>
>
> On 5/11/20 6:51 PM, Jim Mattson wrote:
> > On Mon, May 11, 2020 at 4:33 PM Babu Moger wrote:
> >>
> >> Both Intel and AMD support (MPK) Memory Protection Key feature.
> >> Move the feature det
On Tue, Jun 16, 2020 at 9:14 AM Vitaly Kuznetsov wrote:
>
> state_test/smm_test selftests are failing on AMD with:
> "Unexpected result from KVM_GET_MSRS, r: 51 (failed MSR was 0x345)"
>
> MSR_IA32_PERF_CAPABILITIES is an emulated MSR indeed but only on Intel,
> make svm_has_emulated_msr() skip it
out the hard way, this breaks ignore_msrs.
>
> Reviewed-by: Sean Christopherson
Excellent find!
Reviewed-by: Jim Mattson
On Tue, Jun 16, 2020 at 9:45 AM Vitaly Kuznetsov wrote:
>
> Jim Mattson writes:
>
> > On Tue, Jun 16, 2020 at 9:14 AM Vitaly Kuznetsov
> > wrote:
> >>
> >> state_test/smm_test selftests are failing on AMD with:
> >> "Unexpected
On Tue, Jun 16, 2020 at 3:03 PM Babu Moger wrote:
>
> The new intercept bits have been added in vmcb control
> area to support the interception of INVPCID instruction.
>
> The following bit is added to the VMCB layout control area
> to control intercept of INVPCID:
>
> Byte Offset Bit(s)
eb &= ~(1u << PF_VECTOR);
>
> /* When we are running a nested L2 guest and L1 specified for it a
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 8a83b5edc820..5e2da15fe94f 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -552,6 +552,11 @@ static inline bool vmx_has_waitpkg(struct vcpu_vmx *vmx)
> SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
> }
>
> +static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
> +{
> + return !enable_ept;
> +}
> +
> void dump_vmcs(void);
>
> #endif /* __KVM_X86_VMX_H */
> --
> 2.26.2
>
Reviewed-by: Jim Mattson
ff-by: Sean Christopherson
Reviewed-by: Jim Mattson
On Tue, Jun 23, 2020 at 4:58 AM Xiaoyao Li wrote:
>
> It needs to invalidate CPUID configruations if usersapce provides
Nits: configurations, userspace
> illegal input.
>
> Signed-off-by: Xiaoyao Li
> ---
> arch/x86/kvm/cpuid.c | 4
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/
On Tue, Jun 23, 2020 at 11:29 AM Sean Christopherson
wrote:
>
> On Tue, Jun 23, 2020 at 02:35:30PM +0800, Like Xu wrote:
> > The aperf/mperf are used to report current CPU frequency after 7d5905dc14a
> > "x86 / CPU: Always show current CPU frequency in /proc/cpuinfo". But guest
> > kernel always r
On Tue, Jun 23, 2020 at 12:05 PM Sean Christopherson
wrote:
>
> On Tue, Jun 23, 2020 at 11:39:16AM -0700, Jim Mattson wrote:
> > On Tue, Jun 23, 2020 at 11:29 AM Sean Christopherson
> > wrote:
> > >
> > > On Tue, Jun 23, 2020 at 02:35:30PM +0800, Like Xu wro
On Tue, Mar 28, 2017 at 7:28 AM, Radim Krčmář wrote:
> 2017-03-27 15:34+0200, Alexander Graf:
>> On 15/03/2017 22:22, Michael S. Tsirkin wrote:
>>> Guests running Mac OS 5, 6, and 7 (Leopard through Lion) have a problem:
>>> unless explicitly provided with kernel command line argument
>>> "idlehal
This might be more useful if it could be dynamically toggled on and
off, depending on system load.
On Tue, Apr 11, 2017 at 4:45 AM, Alexander Graf wrote:
> From: "Michael S. Tsirkin"
>
> Guests that are heavy on futexes end up IPI'ing each other a lot. That
> can lead to significant slowdowns an
gned-off-by: Paolo Bonzini
The check can never be true because the SDM says so explicitly: Bit 8
is "Reserved if bit 7 is 0 (cleared to 0)."
Reviewed-by: Jim Mattson
> ---
> arch/x86/kvm/vmx.c | 14 --
> 1 file changed, 14 deletions(-)
>
> diff --git a/arch/x86/
On Thu, Mar 30, 2017 at 2:55 AM, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
> ---
> arch/x86/include/asm/vmx.h | 2 ++
> arch/x86/kvm/vmx.c | 5 +
> 2 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/include/asm/vmx.h b
1, 2017 at 11:23 AM, Alexander Graf wrote:
>
>
>> Am 11.04.2017 um 19:10 schrieb Jim Mattson :
>>
>> This might be more useful if it could be dynamically toggled on and
>> off, depending on system load.
>
> What would trapping mwait (currently) buy you?
>
>
On Wed, Apr 12, 2017 at 7:54 AM, Alexander Graf wrote:
>
>
> On 12.04.17 16:34, Jim Mattson wrote:
>>
>> Actually, we have rejected commit 87c00572ba05aa8c ("kvm: x86: emulate
>> monitor and mwait instructions as nop"), so when we intercept
>> MONITOR/M
Isn't McAfee DeepSAFE defunct? Are there any other consumers of EPTP switching?
On Thu, Jun 29, 2017 at 4:29 PM, Bandan Das wrote:
> These patches expose eptp switching/vmfunc to the nested hypervisor. Testing
> with
> kvm-unit-tests seems to work ok.
>
> If the guest hypervisor enables vmfunc/e
On Tue, Jul 11, 2017 at 10:58 AM, Bandan Das wrote:
> David Hildenbrand writes:
>
>> On 10.07.2017 22:49, Bandan Das wrote:
>>> When L2 uses vmfunc, L0 utilizes the associated vmexit to
>>> emulate a switching of the ept pointer by reloading the
>>> guest MMU.
>>>
>>> Signed-off-by: Paolo Bonzini
Why do we expect the VM_EXIT_INTR_INFO and EXIT_QUALIFICATION fields
of the VMCS to have the correct values for the injected exception?
On Mon, Jun 5, 2017 at 5:19 AM, Wanpeng Li wrote:
> 2017-06-05 20:07 GMT+08:00 Paolo Bonzini :
>>
>>
>> On 03/06/2017 05:21, Wanpeng Li wrote:
>>> Commit 0b6ac34
On Wed, Jul 19, 2017 at 7:31 PM, Wanpeng Li wrote:
> Hi Jim,
> 2017-07-19 2:47 GMT+08:00 Jim Mattson :
>> Why do we expect the VM_EXIT_INTR_INFO and EXIT_QUALIFICATION fields
>> of the VMCS to have the correct values for the injected exception?
>
> Good point, I th
On Fri, Jul 21, 2017 at 1:39 AM, Wanpeng Li wrote:
> Hi Jim,
> 2017-07-21 3:16 GMT+08:00 Jim Mattson :
>> On Wed, Jul 19, 2017 at 7:31 PM, Wanpeng Li wrote:
>>> Hi Jim,
>>> 2017-07-19 2:47 GMT+08:00 Jim Mattson :
>>>> Why do we expect the VM_EXIT_INTR_
I think the ancillary data for #DB and #PF should be added to
kvm_queued_exception and plumbed through to where it's needed. Vector
number and error code are not sufficient to describe a #DB or #PF.
On Sat, Jul 22, 2017 at 5:29 PM, Wanpeng Li wrote:
> 2017-07-22 22:25 GMT+08:00 Jim
try path too.
>
> Fixes: d28b387fb74da95d69d2615732f50cceb38e9a4d
> Cc: x...@kernel.org
> Cc: Radim Krčmář
> Cc: KarimAllah Ahmed
> Cc: David Woodhouse
> Cc: Jim Mattson
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: sta...@vger.kernel.org
> Signed-off-by: P
72f6af9506
> Cc: x...@kernel.org
> Cc: Radim Krčmář
> Cc: KarimAllah Ahmed
> Cc: David Woodhouse
> Cc: Jim Mattson
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: sta...@vger.kernel.org
> Signed-off-by: Paolo Bonzini
Wasn't this already fixed by 206587a9fb76 ("X86/nVMX: Properly set
spec_ctrl and pred_cmd before merging MSRs")?
ycles slower.
>
> Cc: x...@kernel.org
> Cc: Radim Krčmář
> Cc: KarimAllah Ahmed
> Cc: David Woodhouse
> Cc: Jim Mattson
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: sta...@vger.kernel.org
> Signed-off-by: Paolo Bonzini
Reviewed-by: Jim Mattson
On Wed, Feb 14, 2018 at 3:29 PM, David Woodhouse wrote:
> +#define alternative_msr_write(_msr, _val, _feature)\
> + asm volatile(ALTERNATIVE("",\
> +"movl %[msr], %%ecx\n\t" \
> +"m
On Thu, Feb 1, 2018 at 1:59 PM, KarimAllah Ahmed wrote:
> @@ -3684,6 +3696,22 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct
> msr_data *msr)
> case MSR_IA32_TSC:
> kvm_write_tsc(vcpu, msr);
> break;
> + case MSR_IA32_PRED_CMD:
> +
On Tue, Feb 6, 2018 at 9:29 AM, David Woodhouse wrote:
> @@ -8828,6 +8890,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu
> *vcpu)
>
> vmx_arm_hv_timer(vcpu);
>
> + /*
> +* If this vCPU has touched SPEC_CTRL, restore the guest's value if
> +* it's non-zero.
On Tue, Feb 6, 2018 at 9:29 AM, David Woodhouse wrote:
> @@ -8946,6 +9017,27 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu
> *vcpu)
> #endif
> );
>
> + /*
> +* We do not use IBRS in the kernel. If this vCPU has used the
> +* SPEC_CTRL MSR it may have
to the vmcs12 host
fields that actually match the current host values. There is nothing
in the architecture that would require this behavior.
On Wed, Feb 7, 2018 at 10:22 PM, Wanpeng Li wrote:
> 2018-02-08 0:57 GMT+08:00 Jim Mattson :
>> vmcs12->host_cr[34] does not contain the up-to-d
On Thu, Feb 8, 2018 at 7:29 AM, Jim Mattson wrote:
> Similarly, the correct L1 CR4 value should be in vmcs01's CR4
> read shadow field.
Sorry; that's wrong. L1's CR4 value has to be reconstructed from the
vmcs01 guest CR4 field and CR4 shadow field using the cr4 guest/host
On Fri, Feb 9, 2018 at 4:15 AM, KarimAllah Ahmed wrote:
> Can you elaborate a bit? I do not really understand what is the concern.
I can't find the posting that gave me this impression. The only thing
I can find is the following in Documentation/vm/highmem.txt:
(*) kmap(). This permits a shor
just invert the result of msr_write_intercepted_l01 to implement the
> correct semantics.
>
> Fixes: 086e7d4118cc ("KVM: VMX: Allow direct access to MSR_IA32_SPEC_CTRL")
> Signed-off-by: KarimAllah Ahmed
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: k...@vger.kernel.
On Thu, Feb 8, 2018 at 2:53 PM, KarimAllah Ahmed wrote:
> ... otherwise we will just be running with the L1 MSR BITMAP!
>
> It does not seem that we ever update the MSR_BITMAP when the nested guest
> is running. The only place where we update the MSR_BITMAP field in VMCS is
> for the L1 guest!
>
>
On Fri, Feb 9, 2018 at 3:41 PM, KarimAllah Ahmed wrote:
> I assume you are referring to this:
>
> https://patchwork.kernel.org/patch/10194819/
>
> .. which is now:
>
> commit 904e14fb7cb9 ("KVM: VMX: make MSR bitmaps per-VCPU")
>
> right?
>
> If this is the case, then I do not see where the MSR_B
Tim Chen
>> Cc: Linus Torvalds
>> Cc: Andrea Arcangeli
>> Cc: Andi Kleen
>> Cc: Thomas Gleixner
>> Cc: Dan Williams
>> Cc: Jun Nakajima
>> Cc: Andy Lutomirski
>> Cc: Greg KH
>> Cc: Paolo Bonzini
>> Cc: Ashok Raj
>> Reviewed
inux-kernel@vger.kernel.org
> Reviewed-by: Paolo Bonzini
> Reviewed-by: Konrad Rzeszutek Wilk
> Signed-off-by: KarimAllah Ahmed
> Signed-off-by: David Woodhouse
Reviewed-by: Jim Mattson
di Kleen
> Cc: Andrea Arcangeli
> Cc: Linus Torvalds
> Cc: Tim Chen
> Cc: Thomas Gleixner
> Cc: Dan Williams
> Cc: Jun Nakajima
> Cc: Paolo Bonzini
> Cc: David Woodhouse
> Cc: Greg KH
> Cc: Andy Lutomirski
> Cc: Ashok Raj
> Signed-off-by: KarimAllah Ahmed
> Signed-off-by: David Woodhouse
Reviewed-by: Jim Mattson
See the pseudocode for VMLAUNCH/VMRESUME in volume 3 of the SDM.
On Wed, Nov 8, 2017 at 1:47 PM Jim Mattson wrote:
> I realize now that there are actually many other problems with
> deferring some control field checks to the hardware VM-entry of
> vmcs02. When there is an invalid contro