On Tue, 2025-03-25 at 11:01 -0700, James Houghton wrote:
> On Mon, Mar 24, 2025 at 6:57 PM Maxim Levitsky wrote:
> > Add an option to skip the sanity check of the number of still-idle
> > pages, and skip it by default when a hypervisor or NUMA balancing
> > is detected.
explaining the second call to fscanf().
Signed-off-by: Sean Christopherson
Signed-off-by: Maxim Levitsky
---
tools/testing/selftests/kvm/lib/test_util.c | 35 ++---
1 file changed, 24 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/test_util.c
b
automatically.
V2: adopted Sean's suggestions.
Best regards,
Maxim Levitsky
Maxim Levitsky (1):
KVM: selftests: access_tracking_perf_test: add option to skip the
sanity check
Sean Christopherson (1):
KVM: selftests: Extract guts of THP accessor to standalone sysfs
he
Add an option to skip the sanity check of the number of still-idle
pages, and skip it by default when a hypervisor or NUMA balancing
is detected.
Signed-off-by: Maxim Levitsky
---
.../selftests/kvm/access_tracking_perf_test.c | 33 ---
.../testing/selftests/kvm/include
Add an option to skip the sanity check of the number of still-idle
pages, and force it on when a hypervisor or NUMA balancing
is detected.
Signed-off-by: Maxim Levitsky
---
.../selftests/kvm/access_tracking_perf_test.c | 23 +--
.../testing/selftests/kvm/include/test_util.h | 1
ted to memory.
The only exception to this rule is when the guest hits a not-present EPT
entry, in which case KVM first reads (backward) the PML log, dumps it to
the dirty ring, and *then* sets up the SPTE with A/D bits set, and logs
this to the dirty ring, thus making the entry the last one
Rename PML_ENTITY_NUM to PML_LOG_NR_ENTRIES.
Add PML_HEAD_INDEX to specify the first entry that the CPU writes.
No functional change intended.
Suggested-by: Sean Christopherson
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/main.c | 2 +-
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx
Reverse the order in which
the PML log is read to align more closely with the hardware. It should
not affect regular users of dirty logging, but it fixes a unit-test-specific
assumption in the dirty_log_test dirty-ring mode.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: VMX
On Wed, 2024-11-27 at 17:34 -0800, Sean Christopherson wrote:
> Refactor the kvm_cpu_cap_init() macro magic to collect supported features
> in a local variable instead of passing them to the macro as a "mask". As
> pointed out by Maxim, relying on macros to "retur
_SUPPORTED_CPUID
> (and the emulated version) at the beginning and end of the series, on AMD
> and Intel hosts that should support almost every feature known to KVM.
>
> Maxim, I did my best to incorporate all of your feedback, and when we
> disagreed, I tried to find an approach tha
On Thu, 2024-12-12 at 22:19 -0800, Sean Christopherson wrote:
> On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> > On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > > But, I can't help but wonder why KVM bothers emulating PML. I can
> > > appreciate
On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> On Wed, Dec 11, 2024, Maxim Levitsky wrote:
> > X86 spec specifies that the CPU writes to the PML log 'backwards'
>
> SDM, because this is Intel specific.
True.
>
> > or in other words, it first wr
, which will lead to a test failure because, once the write is
finally committed, it may carry a very outdated iteration value.
Detect and avoid this case.
Signed-off-by: Maxim Levitsky
---
tools/testing/selftests/kvm/dirty_log_test.c | 52 +++-
1 file changed, 50 insertions(+), 2
s390-specific workaround causes the dirty-log mode of the test to dirty all
the guest memory on the first iteration, which is very slow when
run nested.
Limit this workaround to s390x.
Signed-off-by: Maxim Levitsky
---
tools/testing/selftests/kvm/dirty_log_test.c | 2 ++
1 file changed, 2
the dirty ring.
Signed-off-by: Maxim Levitsky
---
tools/testing/selftests/kvm/dirty_log_test.c | 25 +---
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c
b/tools/testing/selftests/kvm/dirty_log_test.c
index f60e2aceeae0
and logs
this to the dirty ring, thus making it the last entry in the
dirty ring.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 32 +---
arch/x86/kvm/vmx/vmx.h | 1 +
2 files changed, 22 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kv
Or, even better, it's possible to manually patch the test to not wait at all
(effectively setting the iteration time to 0), in which case it fails pretty fast.
Best regards,
Maxim Levitsky
Maxim Levitsky (4):
KVM: VMX: read the PML log in the same order as it was written
KVM: selftests: dirty_log
When memslot_perf_test is run nested, the first iteration of the
test_memslot_rw_loop testcase sometimes takes more than 2 seconds due to
the building of shadow page tables.
Subsequent iterations are fast.
To be on the safe side, bump the timeout to 10 seconds.
Signed-off-by: Maxim Levitsky
---
tools
On Fri, 2021-04-02 at 19:38 +0200, Paolo Bonzini wrote:
> On 01/04/21 15:54, Maxim Levitsky wrote:
> > Hi!
> >
> > I would like to publish two debug features which were needed for other stuff
> > I work on.
> >
> > One is the reworked lx-symbols scri
On Fri, 2021-04-02 at 17:27 +, Sean Christopherson wrote:
> On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> > Similar to the rest of guest page accesses after migration,
> > this should be delayed to KVM_REQ_GET_NESTED_STATE_PAGES
> > request.
>
> FWIW, I still object
On Mon, 2021-04-05 at 17:01 +, Sean Christopherson wrote:
> On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> > if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
> > part of the migration state and thus are loaded
> > by those ioctls.
> >
> > Signed-off-by: Maxi
s well.
Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 40 +--
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 8523f60adb92..ac5e3e17bda4 100
Small refactoring that will be used in the next patch.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/kvm_cache_regs.h | 7 +++
arch/x86/kvm/svm/svm.c| 6 ++
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm
if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
part of the migration state and thus are loaded
by those ioctls.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch
if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
part of the migration state and thus are loaded
by those ioctls.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86
later test currently fails on Intel (regardless of my patches).
Finally, patch 2 in this patch series fixes a rare L0 kernel oops,
which I can trigger by migrating a Hyper-V machine.
Best regards,
Maxim Levitsky
Maxim Levitsky (6):
KVM: nVMX: delay loading of PDPT
VENTS)
A new capability, KVM_CAP_SREGS2, is added to advertise
this ioctl to userspace.
Currently only implemented on x86.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 43 ++
arch/x86/include/asm/kvm_host.h | 7 ++
arch/x86/include/uapi/asm/kvm.h | 13 +++
arch/x8
Similar to the rest of the guest page accesses after migration,
this should be delayed to the KVM_REQ_GET_NESTED_STATE_PAGES
request.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c
Split the check for having a vmexit handler into
svm_check_exit_valid, and make svm_handle_invalid_exit
handle only a vmexit that is already known to be invalid.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a
Store the supported bits in the KVM_GUESTDBG_VALID_MASK
macro, similar to how arm does this.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 9 +
arch/x86/kvm/x86.c | 2 ++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b
ceptions
can still happen, but at least this eliminates the common
case.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 1 +
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/x86.c | 4
4 files changed, 8 inser
Move KVM_GUESTDBG_VALID_MASK to kvm_host.h
and use it to return the value of this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/arm64/include/asm/kvm_host.h | 4
arch/arm64/kvm/arm.c | 2 ++
arch/arm64/kvm/guest.c| 5 -
3 files changed
Currently #TS interception is only done once.
Also exception interception is not enabled for SEV guests.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm/svm.c | 70 +
arch/x86/kvm/svm/svm.h | 6
o the guest memory
to avoid confusing errors.
(new in V2)
Signed-off-by: Maxim Levitsky
---
kernel/module.c | 8 +-
scripts/gdb/linux/symbols.py | 203 +++
2 files changed, 143 insertions(+), 68 deletions(-)
diff --git a/kernel/module.c b/kernel/m
.gd22...@pd.tnic/
CC: Borislav Petkov
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/x86.c | 3 +++
arch/x86/kvm/x86.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3627ce8fe5bb..1a51031d64d8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm
Define KVM_GUESTDBG_VALID_MASK and use it to implement this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/s390/include/asm/kvm_host.h | 4
arch/s390/kvm/kvm-s390.c | 3 +++
2 files changed, 7 insertions(+)
diff --git a/arch/s390/include/asm/kvm_host.h b
This capability will allow the user to know which KVM_GUESTDBG_* bits
are supported.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 3 +++
include/uapi/linux/kvm.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt
well.
Best regards,
Maxim Levitsky
Maxim Levitsky (9):
scripts/gdb: rework lx-symbols gdb script
KVM: introduce KVM_CAP_SET_GUEST_DEBUG2
KVM: x86: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: aarch64: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: s390x: implement KVM_CAP_SET_GUEST_DEBUG2
On Thu, 2021-04-01 at 14:16 +0300, Maxim Levitsky wrote:
> This is a result of a deep rabbit hole dive in regard to why
> currently the nested migration of 32 bit guests
> is totally broken on AMD.
Please ignore this patch series; I didn't update the patch version.
Best regards,
rare edge case), then
virtual vmload/save is force disabled.
V2: incorporated review feedback from Paolo.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: x86: add guest_cpuid_is_intel
KVM: nSVM: improve SYSENTER emulation on AMD
arch/x86/kvm/cpuid.h | 8
arch/x86/kvm
This is similar to the existing 'guest_cpuid_is_amd_or_hygon'.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/cpuid.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 2a0c5064497f..ded84d244f19 100644
--- a/arch/x86/kvm/cpuid.h
s of SYSENTER msrs were stored in
the migration stream if L1 changed these msrs with
vmload prior to L2 entry.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 99 +++---
arch/x86/kvm/svm/svm.h | 6 +--
2 files changed, 68 insertions(+), 37 deletions(-)
On Thu, 2021-04-01 at 19:05 +0200, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > Injected interrupts/nmi should not block a pending exception,
> > but rather be either lost if nested hypervisor doesn't
> > intercept the pending exception (as in s
s of SYSENTER msrs were stored in
the migration stream if L1 changed these msrs with
vmload prior to L2 entry.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 99 +++---
arch/x86/kvm/svm/svm.h | 6 +--
2 files changed, 68 insertions(+), 37 deletions(-)
clone of "kernel-starship-5.12.unstable"
Maxim Levitsky (4):
KVM: x86: pending exceptions must not be blocked by an injected event
KVM: x86: separate pending and injected exception
KVM: x86: correctly merge pending and injected exception
KVM: x86: remove tweaking of inject_
On Thu, 2021-04-01 at 16:44 +0200, Paolo Bonzini wrote:
> Just a quick review on the API:
>
> On 01/04/21 16:18, Maxim Levitsky wrote:
> > +struct kvm_sregs2 {
> > + /* out (KVM_GET_SREGS2) / in (KVM_SET_SREGS2) */
> > + struct kvm_segment cs, ds, es, fs, gs, ss;
This is similar to the existing 'guest_cpuid_is_amd_or_hygon'.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/cpuid.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 2a0c5064497f..ded84d244f19 100644
--- a/arch/x86/kvm/cpuid.h
rare edge case), then
virtual vmload/save is force disabled.
V2: incorporated review feedback from Paolo.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: x86: add guest_cpuid_is_intel
KVM: nSVM: improve SYSENTER emulation on AMD
arch/x86/kvm/cpuid.h | 8
arch/x86/kvm
done by vendor code using the new nested callback
'deliver_exception_as_vmexit'.
kvm_deliver_pending_exception is called after each VM exit,
and prior to VM entry, which ensures that during userspace VM exits,
only an injected exception can be in a raised state.
Signed-off-by: Maxim Levit
This is no longer needed since page faults can now be
injected as regular exceptions in all cases.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 20
arch/x86/kvm/vmx/nested.c | 23 ---
2 files changed, 43 deletions(-)
diff --git a
The only reason for an exception to be blocked is when a nested run
is pending (and that can't really happen currently,
but it is still worth checking for).
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 8 +++-
arch/x86/kvm/vmx/nested.c | 10 --
2 files changed, 15 insert
Use 'pending_exception' and 'injected_exception' fields
to store the pending and the injected exceptions.
After this patch, still only one is active at a time, but
in the next patch both can coexist in some cases.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h |
On Thu, 2021-04-01 at 15:03 +0200, Vitaly Kuznetsov wrote:
> Maxim Levitsky writes:
>
> > Currently to support Intel->AMD migration, if CPU vendor is GenuineIntel,
> > we emulate the full 64 value for MSR_IA32_SYSENTER_{EIP|ESP}
> > msrs, and we also emulate the syse
On 2021-03-22 16:21, Lv Yunlong wrote:
My static analyzer tool reported a potential uaf in
mlx5e_ktls_del_rx. In this function, if the condition
cancel_work_sync(&resync->work) is true, then
priv_rx could be freed. But priv_rx is used later.
I'm unfamiliar with how this function works. Maybe
On Thu, 2021-03-18 at 16:35 +, Sean Christopherson wrote:
> On Thu, Mar 18, 2021, Joerg Roedel wrote:
> > On Thu, Mar 18, 2021 at 11:24:25AM +0200, Maxim Levitsky wrote:
> > > But again this is a debug feature, and it is intended to allow the user
> > > t
On Thu, 2021-03-18 at 10:19 +0100, Joerg Roedel wrote:
> On Tue, Mar 16, 2021 at 12:51:20PM +0200, Maxim Levitsky wrote:
> > I agree but what is wrong with that?
> > This is a debug feature, and it only can be enabled by the root,
> > and so someone might actually wan
On Tue, 2021-03-16 at 18:01 +0100, Jan Kiszka wrote:
> On 16.03.21 17:50, Sean Christopherson wrote:
> > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > > On Tue, 2021-03-16 at 16:31 +0100, Jan Kiszka wrote:
> > > > Back then, when I was hacking on the gdb-stub and KV
On Tue, 2021-03-16 at 14:46 +0100, Jan Kiszka wrote:
> On 16.03.21 13:34, Maxim Levitsky wrote:
> > On Tue, 2021-03-16 at 12:27 +0100, Jan Kiszka wrote:
> > > On 16.03.21 11:59, Maxim Levitsky wrote:
> > > > On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
>
On Tue, 2021-03-16 at 14:38 +0100, Jan Kiszka wrote:
> On 15.03.21 23:10, Maxim Levitsky wrote:
> > Fix several issues that are present in lx-symbols script:
> >
> > * Track module unloads by placing another software breakpoint at
> > 'free_module'
> >
On Tue, 2021-03-16 at 12:27 +0100, Jan Kiszka wrote:
> On 16.03.21 11:59, Maxim Levitsky wrote:
> > On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
> > > On 16.03.21 00:37, Sean Christopherson wrote:
> > > > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> >
On Tue, 2021-03-16 at 10:16 +0100, Jan Kiszka wrote:
> On 16.03.21 00:37, Sean Christopherson wrote:
> > On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > > This change greatly helps with two issues:
> > >
> > > * Resuming from a breakpoint is much more re
On Mon, 2021-03-15 at 16:37 -0700, Sean Christopherson wrote:
> On Tue, Mar 16, 2021, Maxim Levitsky wrote:
> > This change greatly helps with two issues:
> >
> > * Resuming from a breakpoint is much more reliable.
> >
> > When resuming execution from a br
On Tue, 2021-03-16 at 09:32 +0100, Joerg Roedel wrote:
> Hi Maxim,
>
> On Tue, Mar 16, 2021 at 12:10:20AM +0200, Maxim Levitsky wrote:
> > -static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
> > +static int (*svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
On Tue, 2021-03-16 at 09:16 +0100, Paolo Bonzini wrote:
> On 15/03/21 19:19, Maxim Levitsky wrote:
> > On Mon, 2021-03-15 at 18:56 +0100, Paolo Bonzini wrote:
> > > On 15/03/21 18:43, Maxim Levitsky wrote:
> > > > +
s is based on an idea first shown here:
https://patchwork.kernel.org/project/kvm/patch/20160301192822.gd22...@pd.tnic/
CC: Borislav Petkov
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm/svm.c | 77 -
arch/x86/kvm
ge is only active when the guest is being debugged, so it won't affect
KVM running normal 'production' VMs.
Signed-off-by: Maxim Levitsky
Tested-by: Stefano Garzarella
---
arch/x86/kvm/x86.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/
feature on Intel as well.
Best regards,
Maxim Levitsky
Maxim Levitsky (3):
scripts/gdb: rework lx-symbols gdb script
KVM: x86: guest debug: don't inject interrupts while single stepping
KVM: SVM: allow to intercept all exceptions for debug
arch/x86/include/asm/kvm_host.h | 2 +
st kernel panic as soon as it skips over the 'int3'
instruction and executes the garbage tail of the opcode on which
the breakpoint was placed.
Signed-off-by: Maxim Levitsky
---
kernel/module.c | 8 ++-
scripts/gdb/linux/symbols.py | 106 +
On Mon, 2021-03-15 at 18:56 +0100, Paolo Bonzini wrote:
> On 15/03/21 18:43, Maxim Levitsky wrote:
> > + if (!guest_cpuid_is_intel(vcpu)) {
> > + /*
> > +* If hardware supports Virtual VMLOAD VMSAVE then enable it
> > +* in VMCB an
ual vmload/save is
force disabled.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: x86: add guest_cpuid_is_intel
KVM: nSVM: improve SYSENTER emulation on AMD
arch/x86/kvm/cpuid.h | 8
arch/x86/kvm/svm/svm.c | 97 --
arch/x86/kvm
This is similar to the existing 'guest_cpuid_is_amd_or_hygon'.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/cpuid.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 2a0c5064497f3..ded84d244f19f 100644
--- a/arch/x86/kvm/cpu
s nested migration of 32-bit nested guests, which was broken due
to incorrect cached values of these MSRs being read if L1 changed these
MSRs with vmload prior to L2 entry.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 97 --
arch/x86/kvm/svm/sv
On 2021-03-10 19:03, Eric Dumazet wrote:
On 3/10/21 3:54 PM, Maxim Mikityanskiy wrote:
On 2021-03-09 17:20, Eric Dumazet wrote:
On 3/9/21 4:13 PM, syzbot wrote:
Hello,
syzbot found the following issue on:
HEAD commit: 38b5133a octeontx2-pf: Fix otx2_get_fecparam()
git tree: net
: Maxim Mikityanskiy
Date: Tue Jan 19 12:08:13 2021 +
sch_htb: Hierarchical QoS hardware offload
bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=13ab12ecd0
final oops: https://syzkaller.appspot.com/x/report.txt?x=106b12ecd0
console output: https
On Tue, 2021-03-09 at 14:12 +0100, Paolo Bonzini wrote:
> On 09/03/21 11:09, Maxim Levitsky wrote:
> > What happens if mmio generation overflows (e.g if userspace keeps on
> > updating the memslots)?
> > In theory if we have a SPTE with a stale generation, it can become valid
st the comment above as well if you change these */
> -static_assert(MMIO_SPTE_GEN_LOW_BITS == 9 && MMIO_SPTE_GEN_HIGH_BITS == 11);
> +static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
>
> #define MMIO_SPTE_GEN_LOW_SHIFT (MMIO_SPTE_GEN_
On Mon, 2021-03-08 at 09:18 -0800, Sean Christopherson wrote:
> On Mon, Mar 08, 2021, Maxim Levitsky wrote:
> > On Thu, 2021-03-04 at 18:16 -0800, Sean Christopherson wrote:
> > > Directly connect the 'npt' param to the 'npt_enabled' variable so that
> &
if (set_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) {
	if (write_fault)
		ret = RET_PF_EMULATE;
It is a hack, since it only happens to work because we eventually
unprotect the guest mmu pages when we detect write flooding to them.
Still, performance-wise, my win98 gu
On Thu, 2021-02-25 at 17:05 +0100, Paolo Bonzini wrote:
> On 25/02/21 16:41, Maxim Levitsky wrote:
> > Injected events should not block a pending exception, but rather,
> > should either be lost or be delivered to the nested hypervisor as part of
> > exitintinfo/IDT_VECTORIN
On Thu, 2021-02-25 at 17:41 +0200, Maxim Levitsky wrote:
> clone of "kernel-starship-5.11"
>
> Maxim Levitsky (4):
> KVM: x86: determine if an exception has an error code only when
> injecting it.
> KVM: x86: mmu: initialize fault.async_page_fault in wa
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 23 +-
arch/x86/include/uapi/asm/kvm.h | 14 +-
arch/x86/kvm/svm/nested.c | 62 +++---
arch/x86/kvm/svm/svm.c | 8 +-
arch/x86/kvm/vmx/nested.c | 114 +-
arch/x86/kvm/vmx/vmx.c | 14
Injected events should not block a pending exception but rather
should either be lost or be delivered to the nested hypervisor as part of
exitintinfo/IDT_VECTORING_INFO
(if the nested hypervisor intercepts the pending exception).
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 7
This field was left uninitialized by mistake.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/mmu/paging_tmpl.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index d9f66cc459e84..3dc9a25772bd8 100644
--- a/arch/x86/kvm/mmu
A page fault can be queued while the vCPU is in real paged mode on AMD, and
the AMD manual asks the user to always intercept it
(otherwise the result is undefined).
The resulting VM exit does have an error code.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/x86.c | 13 +
1 file changed, 9
clone of "kernel-starship-5.11"
Maxim Levitsky (4):
KVM: x86: determine if an exception has an error code only when
injecting it.
KVM: x86: mmu: initialize fault.async_page_fault in walk_addr_generic
KVM: x86: pending exception must be injected even with an injected
e
> [Answer: the mode_switch testcase fails, but I haven't
> checked why].
I agree with all of this. I'll see why this code is needed (it is needed,
since I once removed it accidentally on VMX, and it broke nesting with ept=0
in exactly the same way as it was broken on AMD).
I'll debug this a bit to see if I can make it work as you suggest.
Best regards,
Maxim Levitsky
>
>
> Paolo
>
On Wed, 2021-02-17 at 09:29 -0800, Sean Christopherson wrote:
> On Wed, Feb 17, 2021, Maxim Levitsky wrote:
> > This fixes a (mostly theoretical) bug which can happen if ept=0
> > on host and we run a nested guest which triggers a mmu context
> > reset while running nested.
On Wed, 2021-02-17 at 17:06 +0100, Paolo Bonzini wrote:
> On 17/02/21 15:57, Maxim Levitsky wrote:
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index b3e36dc3f164..e428d69e21c0 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx
On Wed, 2021-02-17 at 16:57 +0200, Maxim Levitsky wrote:
> In case of npt=0 on host,
> nSVM needs the same .inject_page_fault tweak as VMX has,
> to make sure that shadow mmu faults are injected as vmexits.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/
Just like all other nested memory accesses, after a migration, loading
the PDPTRs should be delayed to the first VM entry to ensure
that guest memory is fully initialized.
Just move the call to nested_vmx_load_cr3 to nested_get_vmcs12_pages
to implement this.
Signed-off-by: Maxim Levitsky
---
arch/x86
-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 40 +--
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 53b9037259b5..ebc7dfaa9f13 100644
--- a/arch/x8
When npt=0 on the host,
nSVM needs the same .inject_page_fault tweak as VMX has,
to make sure that shadow mmu faults are injected as vmexits.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 18 ++
arch/x86/kvm/svm/svm.c| 5 -
arch/x86/kvm/svm/svm.h
This way the trace will capture all nested-mode entries
(including entries after migration, and from SMM).
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 26 ++
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch
This fixes a (mostly theoretical) bug which can happen if ept=0
on the host and we run a nested guest which triggers an mmu context
reset while running nested.
In this case the .inject_page_fault callback will be lost.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/nested.c | 8 +---
arch
This callback will be used to tweak the mmu context
in arch-specific code after it is reset.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h| 2 ++
arch/x86/kvm/mmu/mmu.c | 2 ++
arch/x86/kvm/svm/svm.c | 6
eventually crashed, but I strongly suspect a bug in the shadow mmu,
which I am tracking separately.
(see below for full explanation).
This patch series is based on kvm/queue branch.
Best regards,
Maxim Levitsky
PS: The shadow mmu bug which I spent most of this week on:
In my testing I am not able to
trace_kvm_exit prints this value (using vmx_get_exit_info)
so it makes sense to read it before the trace point.
Fixes: dcf068da7eb2 ("KVM: VMX: Introduce generic fastpath handler")
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/vmx/vmx.c | 4 +++-
1 file changed, 3 insertions(+),
ell.
Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/nested.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 519fe84f2100..c209f1232928 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x8
lantronics: Update to map volume up/down controls")
Signed-off-by: Maxim Mikityanskiy
---
People from Plantronics, maybe you could advise on a better fix than
filtering duplicate events on driver level? Do you happen to know why
they occur in the first place? Are any other headsets affected?
_free
with snd_card_free_when_closed, which doesn't wait until all references
are released, allowing suspend to progress.
Fixes: 63ddf68de52e ("[media] usbtv: add audio support")
Signed-off-by: Maxim Mikityanskiy
---
drivers/media/usb/usbtv/usbtv-audio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion