Use guard(mutex) to clean up irqbypass's error handling.
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 38 ++
1 file changed, 10 insertions(+), 28 deletions(-)
diff --git a/vir
Use the paired consumer/producer information to disconnect IRQ bypass
producers/consumers in O(1) time (ignoring the cost of __disconnect()).
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 48
soon as a connection is possible.
Signed-off-by: Sean Christopherson
---
drivers/vfio/pci/vfio_pci_intrs.c | 3 +--
drivers/vhost/vdpa.c | 4 ++--
include/linux/irqbypass.h | 2 +-
virt/lib/irqbypass.c | 5 -
4 files changed, 8 insertions(+), 6 deletions(
-by: Yong He
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217379
Link: https://lore.kernel.org/all/20230801115646.33990-1-lik...@tencent.com
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
include/linux/irqbypass.h | 4 ---
virt/lib/irqbypass.c
eventfd_ctx pointer (for all intents and purposes) unnecessarily
obfuscates the code and makes it more brittle.
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/x86.c| 4 +--
drivers/vfio/pci/vfio_pci_intrs.c | 9
Explicitly track IRQ bypass producer:consumer bindings. This will allow
making removal an O(1) operation; searching through the list to find
information that is trivially tracked (and useful for debug) is wasteful.
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean
Drop superfluous might_sleep() annotations from irqbypass; mutex_lock()
provides all of the necessary tracking.
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 8
1 file changed, 8 deletions(-)
diff --git a/virt/lib
. E.g.
if try_module_get() fails because irqbypass.ko is being unloaded, then the
kernel has already hit a use-after-free by virtue of executing code whose
lifecycle is tied to irqbypass.ko.
Reviewed-by: Kevin Tian
Acked-by: Michael S. Tsirkin
Signed-off-by: Sean Christopherson
---
virt/lib/irqbyp
tmp == consumer" check from patch 3. [Kevin]
- Require producers to pass in the line IRQ number.
v1: https://lore.kernel.org/all/20250404211449.1443336-1-sea...@google.com
[1] https://lore.kernel.org/all/20230801115646.33990-1-lik...@tencent.com
[2] https://lore.kernel.org/all/202504011
gt;eventsel_hw &=
> > ~ARCH_PERFMON_EVENTSEL_ENABLE;
> > + } else {
> > + int idx = pmc->idx - KVM_FIXED_PMC_BASE_IDX;
> > +
> > + if (allowed)
> > + pmu->fixed_ctr_ctrl_hw = pmu->fixed_
On Fri, May 16, 2025, Sean Christopherson wrote:
> On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> > Reject PMU MSRs interception explicitly in
> > vmx_get_passthrough_msr_slot() since interception of PMU MSRs is
> > specially handled in intel_passthrough_pmu_msrs().
> >
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Sandipan Das
>
> Add all PMU-related MSRs (including legacy K7 MSRs) to the list of
> possible direct access MSRs. Most of them will not be intercepted when
> using passthrough PMU.
>
> Signed-off-by: Sandipan Das
> Signed-off-by: Mingwei Zhan
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> Reject PMU MSRs interception explicitly in
> vmx_get_passthrough_msr_slot() since interception of PMU MSRs is
> specially handled in intel_passthrough_pmu_msrs().
>
> Signed-off-by: Mingwei Zhang
> Co-developed-by: Dapeng Mi
> Signed-off-by: Dapeng M
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> +static void amd_pmu_update_msr_intercepts(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> + struct vcpu_svm *svm = to_svm(vcpu);
> + int msr_clear = !!(kvm_mediated_pmu_enabled(vcpu));
> + int i;
> +
> + fo
This shortlog is unnecessarily confusing. It reads as if support for running
L2 in a vCPU with a mediated PMU is somehow lacking.
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> Add nested virtualization support for mediated PMU by combining the MSR
> interception bitmaps of vmcs01 and vmcs12.
Do
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> + /*
> + * Clear hardware selector MSR content and its counter to avoid
> + * leakage and also avoid this guest GP counter get accidentally
> + * enabled during host running when host enable global ctrl.
> + */
> + for (i = 0;
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
> {
> - pmc->emulated_counter++;
> - kvm_pmu_request_counter_reprogram(pmc);
> + struct kvm_vcpu *vcpu = pmc->vcpu;
> +
> + /*
> + * For perf-based PMUs, accumulate software-emu
On Thu, May 15, 2025, Kan Liang wrote:
> On 2025-05-14 7:19 p.m., Sean Christopherson wrote:
> >> This naming is confusing on purpose? Pick either guest/host and stick
> >> with it.
> >
> > +1. I also think the inner perf_host_{enter,exit}() helpers are superflou
On Thu, May 15, 2025, Dapeng Mi wrote:
> On 5/15/2025 8:37 AM, Sean Christopherson wrote:
> >> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> >> index 153972e944eb..eba086ef5eca 100644
> >> --- a/arch/x86/kvm/svm/pmu.c
> >> +++ b/arch/x8
On Thu, May 15, 2025, Dapeng Mi wrote:
> On 5/15/2025 8:41 AM, Sean Christopherson wrote:
> >> + if (kvm_mediated_pmu_enabled(vcpu) && kvm_pmu_has_perf_global_ctrl(pmu)
> >> &&
> > Just require the guest to have PERF_GLOBAL_CTRL, I don't see
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> index 9159bf1a4730..35f27366c277 100644
> --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> @@ -22,6 +22,8 @@ KV
andler
I ran out of time today and didn't get emails sent for all patches. I'm
planning on getting that done tomorrow.
I already have most of the proposed changes implemented:
https://github.com/sean-jc/linux.git x86/mediated_pmu
It compiles and doesn't explode, but it's not
of these guest not owned
> counters and it just needs simply to read/write from/to pmc->counter.
>
> Suggested-by: Sean Christopherson
> Signed-off-by: Dapeng Mi
> Co-developed-by: Mingwei Zhang
> Signed-off-by: Mingwei Zhang
> ---
> arch/x86/kvm/pmu.c | 27 +++
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> - pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
> + pmu->fixed_ctr_ctrl = pmu->fixed_ctr_ctrl_hw = 0;
> + pmu->global_ctrl = pmu->global_status = 0;
VMCS needs to be updated.
Again, use more precise language. "Configure interceptions" is akin to "do
work".
It gives readers a vague idea of what's going on, but this
KVM: x86/pmu: Disable interception of select PMU MSRs for mediated vPMUs
is just as concise, and more descriptive.
> + /*
> + * In mediated vP
This is not an optimization in any sane interpretation of that word.
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Dapeng Mi
>
> Currently pmu->global_ctrl is initialized in the common kvm_pmu_refresh()
> helper since both Intel and AMD CPUs set enable bits for all GP counters
> for PERF_GL
On Wed, Mar 26, 2025, Mingwei Zhang wrote:
> On Wed, Mar 26, 2025 at 9:51 AM Chen, Zide wrote:
> > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > index 6ad71752be4b..4e8cefcce7ab 100644
> > > --- a/arch/x86/kvm/pmu.c
> > > +++ b/arch/x86/kvm/pmu.c
> > > @@ -646,6 +646,30 @@ void kvm_
The shortlog is wildly inaccurate. KVM is not simply checking; KVM is actively
disabling RDPMC interception. *That* needs to be the focus of the shortlog and
changelog.
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 92c742ead663..6ad71752be4b 100644
> --- a/arch/x86/kvm/pmu.c
> +
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Dapeng Mi
>
> Add perf_capabilities in kvm_host_values{} structure to record host perf
> capabilities. KVM needs to know if host supports some PMU capabilities
> and then decide if passthrough or intercept some PMU MSRs or instruction
> like rdpm
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Dapeng Mi
>
> Check user space's PMU cpuid configuration and filter the invalid
> configuration.
>
> Either legacy perf-based vPMU or mediated vPMU needs kernel to support
> local APIC, otherwise PMI has no way to be injected into guest. If
> ke
introduce a
> pmu_ops variable MIN_MEDIATED_PMU_VERSION to indicate the minimum host
> PMU version which mediated vPMU needs.
>
> Currently enable_mediated_pmu is not exposed to user space as a module
> parameter until all mediated vPMU code is in place.
>
> Suggested-by: Sean Christopherson
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> If a guest PMI is delivered after VM-exit, the KVM maskable interrupt will
> be held pending until EFLAGS.IF is set. In the meantime, if the logical
> processor receives an NMI for any reason at all, perf_event_nmi_handler()
> will be invoked. If there i
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> From: Kan Liang
>
> When entering/exiting a guest, some contexts for a guest have to be
> switched. For examples, there is a dedicated interrupt vector for
> guests on Intel platforms.
>
> When PMI switch into a new guest vector, guest_lvtpc value nee
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
> index ad5c68f0509d..b0cb3220e1bb 100644
> --- a/arch/x86/include/asm/idtentry.h
> +++ b/arch/x86/include/asm/idtentry.h
> @@ -745,6 +745,7 @@ DECLARE_IDTENTRY_SYSVEC(IRQ_WOR
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
> index 385e3a5fc304..18cd418fe106 100644
> --- a/arch/x86/kernel/irq.c
> +++ b/arch/x86/kernel/irq.c
> @@ -312,16 +312,22 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
> static void dummy_
On Fri, Apr 25, 2025, Peter Zijlstra wrote:
> On Mon, Mar 24, 2025 at 05:30:45PM +, Mingwei Zhang wrote:
>
> > @@ -6040,6 +6041,71 @@ void perf_put_mediated_pmu(void)
> > }
> > EXPORT_SYMBOL_GPL(perf_put_mediated_pmu);
> >
> > +static inline void perf_host_exit(struct perf_cpu_context *cpu
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> +{
> + if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest)) {
My vote is for s/perf_in_guest/guest_ctx_loaded, because "perf in guest" doesn't
accurately describe just the mediated PMU case. E.g. perf itself is running in
KVM guests whe
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> +/*
> + * Currently invoked at VM creation to
> + * - Check whether there are existing !exclude_guest events of PMU with
> + * PERF_PMU_CAP_MEDIATED_VPMU
> + * - Set nr_mediated_pmu_vms to prevent !exclude_guest event creation on
> + * PMUs with PERF
On Wed, May 07, 2025, Shuah Khan wrote:
> The issues Peter is seeing regarding KHDR_INCLUDES in the following
> tests can be easily fixed by simply changing the test Makefile. These
> aren't framework related.
>
> kvm/Makefile.kvm:-I ../rseq -I.. $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
...
> You ca
On Tue, May 06, 2025, Dapeng Mi wrote:
> Hi Sean,
>
> Not sure if you have bandwidth to review this mediated vPMU v4 patchset?
I'm getting there. I wanted to get through all the stuff I thought would likely
be ready for 6.16 as-is before moving on to the larger series.
On Mon, May 05, 2025, Pratik R. Sampat wrote:
> On 5/5/2025 6:15 PM, Sean Christopherson wrote:
> > On Mon, May 05, 2025, Pratik R. Sampat wrote:
> > Argh, now I remember the issue. But _sev_platform_init_locked() returns
> > '0' if
> > psp_init_on_probe i
On Mon, May 05, 2025, Ashish Kalra wrote:
> On 5/5/2025 6:15 PM, Sean Christopherson wrote:
> > @@ -3067,12 +3075,6 @@ void __init sev_hardware_setup(void)
> >
> > if (!sev_enabled)
> > return;
> > -
> > - /*
> > -
On Mon, May 05, 2025, Pratik R. Sampat wrote:
> Hi Sean,
>
> On 5/2/25 4:50 PM, Sean Christopherson wrote:
> > On Wed, 05 Mar 2025 16:59:50 -0600, Pratik R. Sampat wrote:
> >> This patch series extends the sev_init2 and the sev_smoke test to
> >> exerci
On Wed, 05 Mar 2025 16:59:50 -0600, Pratik R. Sampat wrote:
> This patch series extends the sev_init2 and the sev_smoke test to
> exercise the SEV-SNP VM launch workflow.
>
> Primarily, it introduces the architectural defines, its support in the
> SEV library and extends the tests to interact with
On Thu, May 01, 2025, Peter Zijlstra wrote:
> On Thu, May 01, 2025 at 01:42:35PM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 16, 2024 at 07:14:34PM -0700, John Hubbard wrote:
> > > Peter Zijlstra's "NAK NAK NAK" response [1] last year was the most
> > > colorful, so I'll helpfully cite it here. :)
On Mon, Apr 21, 2025, Bibo Mao wrote:
> Add KVM selftests header files for LoongArch, including processor.h
> and kvm_util_base.h.
Nit, kvm_util_arch.h, not kvm_util_base.h. I only noticed because I still have
nightmares about kvm_util_base.h. :-)
On Fri, Apr 25, 2025, Dave Hansen wrote:
> On 4/25/25 14:04, Sean Christopherson wrote:
> > Userspace is going to be waiting on ->release() no matter what.
>
> Unless it isn't even involved and it happens automatically.
With my Google hat on: no thanks.
Customer: Hey
On Fri, Apr 25, 2025, Dave Hansen wrote:
> On 4/25/25 12:29, Sean Christopherson wrote:
> > --- a/arch/x86/kernel/cpu/sgx/virt.c
> > +++ b/arch/x86/kernel/cpu/sgx/virt.c
> > @@ -255,6 +255,7 @@ static int sgx_vepc_release(struct inode *inode, struct
> > file *fil
On Fri, Apr 25, 2025, Dave Hansen wrote:
> On 4/25/25 10:40, Sean Christopherson wrote:
> > So then why on earth is the kernel implementing automatic updates?
>
> Because it's literally the least amount of code
It's literally not.
This series:
4 files changed, 104 in
On Fri, Apr 25, 2025, Elena Reshetova wrote:
> > On Thu, Apr 24, 2025, Elena Reshetova wrote:
> > Userspace generally won't care about a 10us delay when destroying a
> > process, but a 10us delay to launch an enclave could be quite problematic,
> > e.g. in the TDX use case where enclaves may be lau
On Thu, Apr 24, 2025, Elena Reshetova wrote:
> > On Thu, Apr 24, 2025, Elena Reshetova wrote:
> > +void sgx_dec_usage_count(void)
> > +{
> > + if (atomic_dec_return(&sgx_usage_count))
> > + return;
> > +
> > + guard(mutex)(&sgx_svn_lock);
> > +
> > + if (atomic_read(&sgx_usage_count
On Thu, Apr 24, 2025, Elena Reshetova wrote:
> > On Tue, Apr 22, 2025, Kai Huang wrote:
> > > On Fri, 2025-04-18 at 07:55 -0700, Sean Christopherson wrote:
> > > > On Tue, Apr 15, 2025, Elena Reshetova wrote:
> > > > That said, handling this deep in
On Tue, Apr 22, 2025, Kai Huang wrote:
> On Fri, 2025-04-18 at 07:55 -0700, Sean Christopherson wrote:
> > On Tue, Apr 15, 2025, Elena Reshetova wrote:
> > That said, handling this deep in the bowels of EPC page allocation seems
> > unnecessary. The only way for there to be n
On Tue, Apr 15, 2025, Elena Reshetova wrote:
> +/* This lock is held to prevent new EPC pages from being created
> + * during the execution of ENCLS[EUPDATESVN].
> + */
> +static DEFINE_SPINLOCK(sgx_epc_eupdatesvn_lock);
> +
> static atomic_long_t sgx_nr_used_pages = ATOMIC_LONG_INIT(0);
> static
On Thu, Apr 17, 2025, Kai Huang wrote:
> I think the sgx_updatesvn() should just return true when EUPDATESVN returns 0
> or
> SGX_NO_UPDATE, and return false for all other error codes. And it should
> ENCLS_WARN() for all other error codes, except SGX_INSUFFICIENT_ENTROPY
> because
> it can stil
On Thu, Apr 10, 2025, Alex Williamson wrote:
> On Fri, 4 Apr 2025 14:14:45 -0700
> Sean Christopherson wrote:
> > diff --git a/include/linux/irqbypass.h b/include/linux/irqbypass.h
> > index 9bdb2a781841..379725b9a003 100644
> > --- a/include/linux/irqbypass.h
> > +
On Thu, Apr 10, 2025, Kevin Tian wrote:
> > From: Sean Christopherson
> > Sent: Saturday, April 5, 2025 5:15 AM
> >
> > Track IRQ bypass producers and consumers using an xarray to avoid the
> > O(2n)
> > insertion time associated with walking a list to che
On Thu, Apr 10, 2025, Kevin Tian wrote:
> > From: Sean Christopherson
> > +int irq_bypass_register_consumer(struct irq_bypass_consumer *consumer,
> > +struct eventfd_ctx *eventfd)
> > {
> > struct irq_bypass_consumer *tmp;
>
ful*
behavior for environments that want/need irqbypass to always work. But
that's a future problem.
[1] https://lore.kernel.org/all/20230801115646.33990-1-lik...@tencent.com
[2] https://lore.kernel.org/all/20250401161804.842968-1-sea...@google.com
Sean Christopherson (7):
irqbypass: Drop
Use the paired consumer/producer information to disconnect IRQ bypass
producers/consumers in O(1) time (ignoring the cost of __disconnect()).
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 50 +++-
1 file changed, 8 insertions(+), 42
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217379
Link: https://lore.kernel.org/all/20230801115646.33990-1-lik...@tencent.com
Signed-off-by: Sean Christopherson
---
include/linux/irqbypass.h | 2 --
virt/lib/irqbypass.c | 68 +++
2 files changed, 34
Use guard(mutex) to clean up irqbypass's error handling.
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 38 ++
1 file changed, 10 insertions(+), 28 deletions(-)
diff --git a/virt/lib/irqbypass.c b/virt/lib/irqbypass.c
index 6d68a0f
Explicitly track IRQ bypass producer:consumer bindings. This will allow
making removal an O(1) operation; searching through the list to find
information that is trivially tracked (and useful for debug) is wasteful.
Signed-off-by: Sean Christopherson
---
include/linux/irqbypass.h | 5
eventfd_ctx pointer (for all intents and purposes) unnecessarily
obfuscates the code and makes it more brittle.
Signed-off-by: Sean Christopherson
---
drivers/vfio/pci/vfio_pci_intrs.c | 5 +
drivers/vhost/vdpa.c | 4 +---
include/linux/irqbypass.h | 31
Drop superfluous might_sleep() annotations from irqbypass; mutex_lock()
provides all of the necessary tracking.
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 8
1 file changed, 8 deletions(-)
diff --git a/virt/lib/irqbypass.c b/virt/lib/irqbypass.c
index 080c706f3b01
. E.g.
if try_module_get() fails because irqbypass.ko is being unloaded, then the
kernel has already hit a use-after-free by virtue of executing code whose
lifecycle is tied to irqbypass.ko.
Signed-off-by: Sean Christopherson
---
virt/lib/irqbypass.c | 20
1 file changed, 20 deletion
On Wed, Mar 26, 2025, James Houghton wrote:
> On Wed, Mar 26, 2025 at 11:41 AM Sean Christopherson
> wrote:
> > Then the auto resolving works as below, and as James pointed out, the assert
> > becomes
> >
> > TEST_ASSERT(!warn_only, );
>
&
On Tue, Mar 25, 2025, James Houghton wrote:
> On Mon, Mar 24, 2025 at 6:57 PM Maxim Levitsky wrote:
> >
> > Add an option to skip sanity check of number of still idle pages,
> > and set it by default to skip, in case hypervisor or NUMA balancing
> > is detected.
> >
> > Signed-off-by: Maxim Levits
On Thu, 27 Feb 2025 22:08:19 +, Colin Ian King wrote:
> There is a spelling mistake in a PER_PAGE_DEBUG debug message. Fix it.
Applied to kvm-x86 selftests, thanks!
[1/1] KVM: selftests: Fix spelling mistake "UFFDIO_CONINUE" -> "UFFDIO_CONTINUE"
https://github.com/kvm-x86/linux/commit/7
(ret)
> + return false;
> +
> + f = fopen("/proc/sys/kernel/numa_balancing", "r");
Pretty sure this needs to assert on f being valid.
> + ret = fscanf(f, "%d", &val);
The file needs to be closed.
Actually, rather than fix these things, extract
On Tue, Feb 25, 2025, Keith Busch wrote:
> From: Keith Busch
>
> A VMM may send a signal to its threads while they've entered KVM_RUN. If
> that thread happens to be trying to make the huge page recovery vhost
> task, then it fails with -ERESTARTNOINTR. We need to retry if that
> happens, so we c
On Fri, Feb 14, 2025, Colin Ian King wrote:
> There is a spelling mistake in a TEST_FAIL message. Fix it.
Gah, as usual, your spell checker is superior to mine. Squashed the fix into
the offending commit.
Thanks!
d a way to make the warning go away.
I think you can use a "nested" lock to avoid this. See e.g. commit
86a41ea9fd79 ("l2tp: fix lockdep splat") for an example.
--Sean
[1]
https://www.kernel.org/doc/html/latest/locking/lockdep-design.html#exception-nested-data-dependencies-leading-to-nested-locking
On Mon, Jan 20, 2025, Colton Lewis wrote:
> > > +static void test_core_counters(void)
> > > +{
> > > + uint8_t nr_counters = nr_core_counters();
> > > + bool core_ext = kvm_cpu_has(X86_FEATURE_PERF_CTR_EXT_CORE);
> > > + bool perfmon_v2 = kvm_cpu_has(X86_FEATURE_PERFMON_V2);
> > > + struct kvm_vcpu
On Thu, 19 Dec 2024 17:10:32 -0500, Maxim Levitsky wrote:
> Reverse the order in which
> the PML log is read to align more closely to the hardware. It should
> not affect regular users of the dirty logging but it fixes a unit test
> specific assumption in the dirty_log_test dirty-ring mode.
>
> Be
On Fri, 13 Dec 2024 14:30:00 -0800, Reinette Chatre wrote:
> Annotate the KVM selftests' _no_printf() with the printf format attribute
> so that the compiler can help check parameters provided to pr_debug() and
> pr_info() irrespective of DEBUG and QUIET being defined.
>
> [reinette: move attribut
On Tue, 26 Nov 2024 15:37:44 +0800, Chen Ni wrote:
> Remove unnecessary semicolons reported by Coccinelle/coccicheck and the
> semantic patch at scripts/coccinelle/misc/semicolon.cocci.
Applied to kvm-x86 selftests, thanks!
[1/1] KVM: selftests: Remove unneeded semicolon
https://github.com/
On Wed, 18 Sep 2024 20:53:13 +, Colton Lewis wrote:
> Extend pmu_counters_test to AMD CPUs.
>
> As the AMD PMU is quite different from Intel with different events and
> feature sets, this series introduces a new code path to test it,
> specifically focusing on the core counters including the
>
On Wed, Sep 18, 2024, Colton Lewis wrote:
> Test PerfMonV2, which defines global registers to enable multiple
> performance counters with a single MSR write, in its own function.
>
> If the feature is available, ensure the global control register has
> the ability to start and stop the performance
On Wed, Sep 18, 2024, Colton Lewis wrote:
> Test events on core counters by iterating through every combination of
> events in amd_pmu_zen_events with every core counter.
>
> For each combination, calculate the appropriate register addresses for
> the event selection/control register and the count
On Wed, Sep 18, 2024, Colton Lewis wrote:
> Run a basic test to ensure we can write an arbitrary value to the core
> counters and read it back.
>
> Signed-off-by: Colton Lewis
> ---
> .../selftests/kvm/x86_64/pmu_counters_test.c | 54 +++
> 1 file changed, 54 insertions(+)
>
>
On Wed, Sep 18, 2024, Colton Lewis wrote:
> Branch in main() depending on if the CPU is Intel or AMD. They are
> subject to vastly different requirements because the AMD PMU lacks
> many properties defined by the Intel PMU including the entire CPUID
> 0xa function where Intel stores all the PMU pro
mes (when the kernel also defines a feature), and adjust the property names to
follow suit.
If there are no objections, I'll apply this as:
--
Author: Colton Lewis
AuthorDate: Wed Sep 18 20:53:15 2024 +
Commit: Sean Christopherson
CommitDate: Wed Jan 8 09:55:57 2025 -0800
KVM
On Wed, 27 Nov 2024 17:33:27 -0800, Sean Christopherson wrote:
> The super short TL;DR: snapshot all X86_FEATURE_* flags that KVM cares
> about so that all queries against guest capabilities are "fast", e.g. don't
> require manual enabling or judgment calls as to where a fe
On Tue, 17 Dec 2024 18:14:51 +, Ivan Orlov wrote:
> Currently, the unhandleable vectoring (e.g. when guest accesses MMIO
> during vectoring) is handled differently on VMX and SVM: on VMX KVM
> returns internal error, when SVM goes into infinite loop trying to
> deliver an event again and again.
On Tue, Dec 17, 2024, Ivan Orlov wrote:
> Move unhandleable vmexit during vectoring error detection
> into check_emulate_instruction. Implement the function which prohibits
> the emulation if EMULTYPE_PF is set when vectoring, otherwise such a
> situation may occur:
I definitely think it's worth e
On Tue, Dec 17, 2024, Ivan Orlov wrote:
> Currently, the unhandleable vectoring (e.g. when guest accesses MMIO
> during vectoring) is handled differently on VMX and SVM: on VMX KVM
> returns internal error, when SVM goes into infinite loop trying to
> deliver an event again and again.
>
> This pat
KVM: selftests:
On Tue, Dec 17, 2024, Ivan Orlov wrote:
> Extend the 'set_memory_region_test' with a test case which covers the
> MMIO during vectoring error handling. The test case
Probably a good idea to explicitly state this is x86-only (hard to see that
from the diff alone).
>
> 1) Sets an
KVM: selftests: is the preferred scope.
On Tue, Dec 17, 2024, Ivan Orlov wrote:
> Detect unhandleable vectoring in check_emulate_instruction to prevent
> infinite loop on SVM and eliminate the difference in how intercepted #PF
> during vectoring is handled on SVM and VMX.
>
> Signed-off-by: Ivan Orlov
> ---
> V1 -> V2:
> - Detect the u
On Tue, Dec 17, 2024, Ivan Orlov wrote:
> Add emulation status for unhandleable vectoring, i.e. when KVM can't
> emulate an instruction during vectoring. Such a situation can occur
> if guest sets the IDT descriptor base to point to MMIO region, and
> triggers an exception after that.
>
> Exit to
On Fri, Dec 13, 2024, Chao Gao wrote:
> On Wed, Nov 27, 2024 at 05:34:17PM -0800, Sean Christopherson wrote:
> >diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> >index d3c3e1327ca1..8d088a888a0d 100644
> >--- a/arch/x86/kvm/cpuid.c
> >+++ b/arch/x86/kvm/
On Fri, Dec 13, 2024, Maxim Levitsky wrote:
> On Thu, 2024-12-12 at 22:19 -0800, Sean Christopherson wrote:
> > On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> > > On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > > > But, I can't help but wonder wh
On Fri, Dec 13, 2024, Ivan Orlov wrote:
> On Thu, Dec 12, 2024 at 11:42:37AM -0800, Sean Christopherson wrote:
> > Unprotect and re-execute is fine, what I'm worried about is *successfully*
> > emulating the instruction. E.g.
> >
> > 1. CPU executes instruction
On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > But, I can't help but wonder why KVM bothers emulating PML. I can
> > appreciate
> > that avoiding exits to L1 would be beneficial, but what use case act
On Thu, Dec 12, 2024, Ivan Orlov wrote:
> On Wed, Dec 11, 2024 at 05:01:07PM -0800, Sean Christopherson wrote:
> > > Hm, by the way, what is the desired behaviour if EMULTYPE_ALLOW_RETRY_PF
> > > is
> > > set? Is it correct that we return an internal error if it is
On Wed, Dec 11, 2024, Ivan Orlov wrote:
> On 12/11/24 18:15, Sean Christopherson wrote:
> > Hmm, this should probably be "pf_mmio", not just "mmio". E.g. if KVM is
> > emulating
> > large swaths of guest code because unrestricted guest is disabled, t
On Wed, Dec 11, 2024, Maxim Levitsky wrote:
> X86 spec specifies that the CPU writes to the PML log 'backwards'
SDM, because this is Intel specific.
> or in other words, it first writes entry 511, then entry 510 and so on,
> until it writes entry 0, after which the 'PML log full' VM exit happens.
On Mon, Nov 11, 2024, Ivan Orlov wrote:
> Currently, the situation when guest accesses MMIO during vectoring is
> handled differently on VMX and SVM: on VMX KVM returns internal error,
> when SVM goes into infinite loop trying to deliver an event again and
> again.
>
> This patch series eliminates
On Mon, Nov 11, 2024, Ivan Orlov wrote:
> Extend the 'set_memory_region_test' with a test case which covers the
> MMIO during vectoring error handling. The test case
>
> 1) Sets an IDT descriptor base to point to an MMIO address
> 2) Generates a #GP in the guest
> 3) Verifies that we got a correct