On 2021/4/20 16:30, Liuxiangdong wrote:
On 2021/4/15 11:20, Like Xu wrote:
Bit 12 represents "Processor Event Based Sampling Unavailable (RO)":
1 = PEBS is not supported.
0 = PEBS is supported.
A guest write to this read-only PEBS_UNAVL bit will raise #GP(0) when guest
PEBS is enabled.
On 2021/4/19 16:11, Liuxiangdong wrote:
On 2021/4/15 11:20, Like Xu wrote:
When a guest counter is configured as a PEBS counter through
IA32_PEBS_ENABLE, a guest PEBS event will be reprogrammed by
configuring a non-zero precision level in the perf_event_attr.
The guest PEBS overflow PMI bit w
ndly PEBS" capability and
some PEBS records will be lost when used by guests.
Thanks!
On 2021/4/6 13:14, Xu, Like wrote:
Hi Xiangdong,
On 2021/4/6 11:24, Liuxiangdong (Aven, Cloud Infrastructure Service
Product Dept.) wrote:
Hi, Like.
Some questions about this new PEBS patch set:
http
Hi Paolo,
Do we have a chance to get Arch LBR into the mainline in the upcoming
merge window?
https://lore.kernel.org/kvm/20210314155225.206661-1-like...@linux.intel.com/
Thanks,
Like Xu
On 2021/2/8 18:31, Paolo Bonzini wrote:
Ok, this makes sense. I'll review the patches more carefully, l
On 2021/4/9 15:59, Peter Zijlstra wrote:
On Fri, Apr 09, 2021 at 03:07:38PM +0800, Xu, Like wrote:
Hi Peter,
On 2021/4/8 15:52, Peter Zijlstra wrote:
This is because in the early part of this function, we have operations:
if (x86_pmu.flags & PMU_FL_PEBS_ALL)
arr[0].guest &= ~cpuc->pebs_enabled;
Hi Peter,
On 2021/4/8 15:52, Peter Zijlstra wrote:
This is because in the early part of this function, we have operations:
if (x86_pmu.flags & PMU_FL_PEBS_ALL)
arr[0].guest &= ~cpuc->pebs_enabled;
else
arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
and i
On 2021/4/8 15:52, Peter Zijlstra wrote:
On Thu, Apr 08, 2021 at 01:39:49PM +0800, Xu, Like wrote:
Hi Peter,
Thanks for your detailed comments.
If you have more comments for other patches, please let me know.
On 2021/4/7 23:39, Peter Zijlstra wrote:
On Mon, Mar 29, 2021 at 01:41:29PM +0800
Hi Peter,
Thanks for your detailed comments.
If you have more comments for other patches, please let me know.
On 2021/4/7 23:39, Peter Zijlstra wrote:
On Mon, Mar 29, 2021 at 01:41:29PM +0800, Like Xu wrote:
@@ -3869,10 +3876,12 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(in
On 2021/4/7 0:22, Peter Zijlstra wrote:
On Mon, Mar 29, 2021 at 01:41:23PM +0800, Like Xu wrote:
With PEBS virtualization, the guest PEBS records get delivered to the
guest DS, and the host pmi handler uses perf_guest_cbs->is_in_guest()
to distinguish whether the PMI comes from the guest code li
Hi Xiangdong,
On 2021/4/6 11:24, Liuxiangdong (Aven, Cloud Infrastructure Service Product
Dept.) wrote:
Hi, Like.
Some questions about this new PEBS patch set:
https://lore.kernel.org/kvm/20210329054137.120994-2-like...@linux.intel.com/
The new hardware facility supporting guest PEBS is only
Hi all, do we have any comments on this patch set?
On 2021/3/14 23:52, Like Xu wrote:
Hi geniuses,
Please help review the new version of Arch LBR enabling patch set.
The Architectural Last Branch Records (LBRs) feature is published
in the 319433-040 release of Intel Architecture Instruction
Set Extensi
Hi all, do we have any comments on this patch set?
On 2021/3/29 13:41, Like Xu wrote:
The guest Precise Event Based Sampling (PEBS) feature can provide
an architectural state of the instruction executed after the guest
instruction that exactly caused the event. It needs new hardware
facility onl
Hi, do we have any comments on this patch set?
On 2021/3/14 23:52, Like Xu wrote:
Hi geniuses,
Please help review the new version of Arch LBR enabling patch set.
The Architectural Last Branch Records (LBRs) feature is published
in the 319433-040 release of Intel Architecture Instruction
Set Extensions
On 2021/3/8 16:53, Peter Zijlstra wrote:
Still, it calling atomic_switch_perf_msrs() and
intel_pmu_lbr_is_enabled() when there isn't a PMU at all is, of course, a
complete waste of cycles.
This suggestion is reminiscent of a sad regression from optimizing it:
https://lore.kernel.org/kvm/202006190
On 2021/3/6 6:33, Sean Christopherson wrote:
Handle a NULL x86_pmu.guest_get_msrs at invocation instead of patching
in perf_guest_get_msrs_nop() during setup. If there is no PMU, setup
"If there is no PMU" ...
How do we set up this kind of environment,
and what changes are needed in .config or b
On 2021/3/5 1:23, Sean Christopherson wrote:
On Thu, Mar 04, 2021, Xu, Like wrote:
On 2021/3/4 1:26, Sean Christopherson wrote:
On Wed, Mar 03, 2021, Like Xu wrote:
New VMX controls bits for Arch LBR are added. When bit 21 in vmentry_ctrl
is set, VM entry will write the value from the "
On 2021/3/5 0:31, Sean Christopherson wrote:
Paolo, any thoughts on how to keep supported_xss aligned with supported_xcr0,
without spreading the logic around too much?
From 58be4152ced441395dfc439f446c5ad53bd48576 Mon Sep 17 00:00:00 2001
From: Like Xu
Date: Thu, 4 Mar 2021 13:21:38 +0800
Subjec
On 2021/3/5 0:12, Sean Christopherson wrote:
On Thu, Mar 04, 2021, Xu, Like wrote:
Hi Sean,
Thanks for your detailed review on the patch set.
On 2021/3/4 0:58, Sean Christopherson wrote:
On Wed, Mar 03, 2021, Like Xu wrote:
@@ -348,10 +352,26 @@ static bool intel_pmu_handle_lbr_msrs_access
On 2021/3/4 1:26, Sean Christopherson wrote:
On Wed, Mar 03, 2021, Like Xu wrote:
New VMX controls bits for Arch LBR are added. When bit 21 in vmentry_ctrl
is set, VM entry will write the value from the "Guest IA32_LBR_CTL" guest
state field to IA32_LBR_CTL. When bit 26 in vmexit_ctrl is set, VM
On 2021/3/4 1:19, Sean Christopherson wrote:
On Wed, Mar 03, 2021, Like Xu wrote:
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 25d620685ae7..d14a14eb712d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -19,6 +19,7 @@
#include "p
Hi Sean,
Thanks for your detailed review on the patch set.
On 2021/3/4 0:58, Sean Christopherson wrote:
On Wed, Mar 03, 2021, Like Xu wrote:
@@ -348,10 +352,26 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
return true;
}
+/*
+ * Check if the requested de
On 2021/2/24 1:15, Sean Christopherson wrote:
On Tue, Feb 23, 2021, Like Xu wrote:
If lbr_desc->event is successfully created, the
intel_pmu_create_guest_lbr_event() will return 0, otherwise it will return -ENOENT,
and then jump to LBR msrs dummy handling.
Fixes: 1b5ac3226a1a ("KVM: vmx/pmu: P
On 2021/2/5 19:00, Paolo Bonzini wrote:
On 05/02/21 09:16, Xu, Like wrote:
Hi Paolo,
I am wondering if it is acceptable for you to
review the minor Architecture LBR patch set without XSAVES for v5.12?
As far as I know, the guest Arch LBR can still work without XSAVES
support.
I dopn
Hi Paolo,
I am wondering if it is acceptable for you to
review the minor Architecture LBR patch set without XSAVES for v5.12?
As far as I know, the guest Arch LBR can still work without XSAVES support.
---
thx, likexu
On 2021/2/4 8:59, Xu, Like wrote:
On 2021/2/3 22:37, Paolo Bonzini wrote
On 2021/2/3 22:37, Paolo Bonzini wrote:
On 03/02/21 14:57, Like Xu wrote:
If CPUID.(EAX=07H, ECX=0):EDX[19] is exposed as 1, KVM supports Arch
LBRs and CPUID leaf 01CH indicates details of the Arch LBR capabilities.
As the first step, KVM only exposes the current LBR depth on the host for
g
On 2021/1/25 19:47, Peter Zijlstra wrote:
On Mon, Jan 25, 2021 at 04:26:22PM +0800, Like Xu wrote:
In the host and guest PEBS both enabled case,
we'll get a crazy dmesg *bombing* about spurious PMI warning
if we pass the host PEBS PMI "harmlessly" to the guest:
[11261.502536] Uhhuh. NMI receiv
On 2021/1/29 10:52, Liuxiangdong (Aven, Cloud Infrastructure Service
Product Dept.) wrote:
On 2021/1/26 15:08, Xu, Like wrote:
On 2021/1/25 22:47, Liuxiangdong (Aven, Cloud Infrastructure Service
Product Dept.) wrote:
Thanks for replying,
On 2021/1/25 10:41, Like Xu wrote:
+ k
On 2021/1/26 17:30, Paolo Bonzini wrote:
On 08/01/21 02:37, Like Xu wrote:
Userspace can enable the guest LBR feature when the exact supported
LBR format value is initialized in MSR_IA32_PERF_CAPABILITIES
and the LBR is also compatible with the vPMU version and host CPU model.
Signed-off-by: Li
On 2021/1/25 22:47, Liuxiangdong (Aven, Cloud Infrastructure Service
Product Dept.) wrote:
Thanks for replying,
On 2021/1/25 10:41, Like Xu wrote:
+ k...@vger.kernel.org
Hi Liuxiangdong,
On 2021/1/22 18:02, Liuxiangdong (Aven, Cloud Infrastructure Service
Product Dept.) wrote:
Hi Like,
Som
On 2021/1/26 17:51, Paolo Bonzini wrote:
On 11/11/20 03:42, Xu, Like wrote:
Hi Peter,
On 2020/11/11 4:52, Stephane Eranian wrote:
On Tue, Nov 10, 2020 at 7:37 AM Peter Zijlstra
wrote:
On Tue, Nov 10, 2020 at 04:12:57PM +0100, Peter Zijlstra wrote:
On Mon, Nov 09, 2020 at 10:12:37AM +0800
On 2021/1/25 20:18, Peter Zijlstra wrote:
On Mon, Jan 25, 2021 at 08:07:06PM +0800, Xu, Like wrote:
So under the premise that counter cross-mapping is allowed,
how can hypercall help fix it ?
Hypercall or otherwise exposing the mapping, will let the guest fix it
up when it already touches the
On 2021/1/25 19:13, Peter Zijlstra wrote:
On Mon, Jan 25, 2021 at 04:08:22PM +0800, Like Xu wrote:
Hi Peter,
On 2021/1/22 17:56, Peter Zijlstra wrote:
On Fri, Jan 15, 2021 at 10:51:38AM -0800, Sean Christopherson wrote:
On Fri, Jan 15, 2021, Andi Kleen wrote:
I'm asking about ucode/hardare.
On 2021/1/16 1:30, Sean Christopherson wrote:
On Fri, Jan 15, 2021, Like Xu wrote:
Ping ?
On 2020/12/30 16:19, Like Xu wrote:
The HW_REF_CPU_CYCLES event on the fixed counter 2 is pseudo-encoded as
0x0300 in the intel_perfmon_event_map[]. Correct its usage.
Fixes: 62079d8a4312 ("KVM: PMU: add
On 2021/1/15 22:46, Peter Zijlstra wrote:
On Mon, Jan 04, 2021 at 09:15:31PM +0800, Like Xu wrote:
+ if (cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask) {
+ arr[1].msr = MSR_IA32_PEBS_ENABLE;
+ arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
On 2021/1/15 22:44, Peter Zijlstra wrote:
On Fri, Jan 15, 2021 at 10:30:13PM +0800, Xu, Like wrote:
Are you sure? Spurious NMI/PMIs are known to happen anyway. We have far
too much code to deal with them.
https://lore.kernel.org/lkml/20170628130748.GI5981@leverpostej/T/
In the rr workload
On 2021/1/15 20:01, Peter Zijlstra wrote:
On Thu, Jan 14, 2021 at 11:39:00AM +0800, Xu, Like wrote:
Why do we need to? Can't we simply always forward the PMI if the guest
has bits set in MSR_IA32_PEBS_ENABLE ? Surely we can access the guest
MSRs at a reasonable rate..
Sure, it'l
On 2021/1/15 19:33, Peter Zijlstra wrote:
On Mon, Jan 04, 2021 at 09:15:30PM +0800, Like Xu wrote:
When a guest counter is configured as a PEBS counter through
IA32_PEBS_ENABLE, a guest PEBS event will be reprogrammed by
configuring a non-zero precision level in the perf_event_attr.
The guest P
Hi Alex,
Thank you for trying this guest feature on multiple Intel platforms!
If you have more specific comments or any concerns, just let me know.
---
thx, likexu
On 2021/1/15 16:19, Alex Shi wrote:
On 2021/1/8 9:36 AM, Like Xu wrote:
Because saving/restoring tens of LBR MSRs (e.g. 32 LBR stack
On 2021/1/15 2:55, Sean Christopherson wrote:
On Mon, Jan 04, 2021, Like Xu wrote:
---
arch/x86/events/intel/ds.c | 62 ++
1 file changed, 62 insertions(+)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index b47cc4226934..c499bdb5837
Hi Sean,
Thanks for your comments !
On 2021/1/15 3:10, Sean Christopherson wrote:
On Mon, Jan 04, 2021, Like Xu wrote:
2) Slow path (part 3, patch 0012-0017)
This is when the host assigned physical PMC has a different index
from the virtual PMC (e.g. using physical PMC1 to emulate virtual PMC
On 2021/1/14 2:22, Peter Zijlstra wrote:
On Mon, Jan 04, 2021 at 09:15:29PM +0800, Like Xu wrote:
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index b47cc4226934..c499bdb58373 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1721,6 +1721,65 @@
On 2021/1/14 2:06, Peter Zijlstra wrote:
On Mon, Jan 04, 2021 at 09:15:28PM +0800, Like Xu wrote:
@@ -327,6 +328,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
pmu->version = 0;
pmu->reserved_bits = 0x0020ull
Hi Sean,
On 2021/1/6 5:16, Sean Christopherson wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 6453b8a6834a..ccddda455bec 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3690,6 +3690,7 @@ static struct perf_guest_switch_msr
Hi Sean,
On 2021/1/6 5:11, Sean Christopherson wrote:
On Mon, Jan 04, 2021, Like Xu wrote:
If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, the
IA32_PEBS_ENABLE MSR exists and all architecturally enumerated fixed
and general purpose counters have corresponding bits in IA32_PEBS_ENABLE
t
Hi Peter,
On 2020/11/30 18:49, Peter Zijlstra wrote:
On Fri, Nov 27, 2020 at 10:14:49AM +0800, Xu, Like wrote:
OK, but the code here wanted to inspect the guest DS from the host. It
states this is somehow complicated/expensive. But surely we can at the
very least map the first guest DS page
Hi Peter,
On 2020/11/19 2:07, Peter Zijlstra wrote:
On Thu, Nov 19, 2020 at 12:15:09AM +0800, Like Xu wrote:
ISTR there was lots of fail trying to virtualize it earlier. What's
changed? There's 0 clues here.
Ah, now we have EPT-friendly PEBS facilities supported since Ice Lake
which makes gue
On 2020/11/19 2:07, Peter Zijlstra wrote:
On Thu, Nov 19, 2020 at 12:15:09AM +0800, Like Xu wrote:
ISTR there was lots of fail trying to virtualize it earlier. What's
changed? There's 0 clues here.
Ah, now we have EPT-friendly PEBS facilities supported since Ice Lake
which makes guest PEBS fea
Hi Peter,
On 2020/11/11 4:52, Stephane Eranian wrote:
On Tue, Nov 10, 2020 at 7:37 AM Peter Zijlstra wrote:
On Tue, Nov 10, 2020 at 04:12:57PM +0100, Peter Zijlstra wrote:
On Mon, Nov 09, 2020 at 10:12:37AM +0800, Like Xu wrote:
The Precise Event Based Sampling(PEBS) supported on Intel Ice L
Hi Paolo,
As you may know, we have got host perf support in Linus' tree
which provides a clear path for enabling guest LBR,
will we merge the remaining LBR KVM patch set?
---
[PATCH RESEND v13 00/10] Guest Last Branch Recording Enabling
https://lore.kernel.org/kvm/20201030035220.102403-1-like.
Are there volunteers or maintainers to help review this patch set?
Just a friendly ping.
Please let me know if you need a re-based version.
Thanks,
Like Xu
On 2020/8/14 16:48, Xu, Like wrote:
Are there no interested reviewers or users?
Just a friendly ping.
On 2020/7/26 23:32, Like Xu wrote
Hi Sean,
On 2020/9/29 11:13, Sean Christopherson wrote:
On Sun, Jul 26, 2020 at 11:32:21PM +0800, Like Xu wrote:
It's reasonable to call vmx_set_intercept_for_msr() in other vmx-specific
files (e.g. pmu_intel.c), so expose it, hopefully without semantic changes.
I suppose it's reasonable, but y
Hi Eduardo,
On 2020/9/28 23:41, Eduardo Habkost wrote:
On Mon, Sep 28, 2020 at 10:51:03PM +0800, Xu, Like wrote:
Hi Eduardo,
Thanks for your detailed review.
On 2020/9/25 6:05, Eduardo Habkost wrote:
I've just noticed this on my review queue (apologies for the long
delay). Comments
Hi Paolo,
Do you have time or plans to review this patch series in this kernel cycle,
since the perf patches that make it possible have been merged upstream?
Thanks,
Like Xu
On 2020/8/14 16:48, Xu, Like wrote:
Are there no interested reviewers or users?
Just a friendly ping.
On 2020/7/26 23:32
Are there no interested reviewers or users?
Just a friendly ping.
On 2020/7/26 23:32, Like Xu wrote:
Hi Paolo,
Please review this new version for the kernel 5.9 release;
Sean may not review it, as he said in the previous email
https://lore.kernel.org/kvm/20200710162819.gf1...@linux.intel.co
On 2020/8/12 21:04, Paolo Bonzini wrote:
On 12/08/20 14:56, Xu, Like wrote:
My proposal is to define:
the "hypervisor privilege levels" events in the KVM/x86 context as
all the host kernel events plus /dev/kvm user space events.
What are "/dev/kvm user space events"? In
On 2020/8/12 19:32, Paolo Bonzini wrote:
On 12/08/20 13:11, pet...@infradead.org wrote:
x86 does not have a hypervisor privilege level, so it never uses
Arguably it does when Xen, but I don't think we support that, so *phew*.
Yeah, I suppose you could imagine having paravirtualized perf counte
On 2020/7/8 21:36, Andi Kleen wrote:
+ /*
+* As a first step, a guest could only enable LBR feature if its cpu
+* model is the same as the host because the LBR registers would
+* be pass-through to the guest and they're model specific.
+*/
+ if (boot_cp
On 2020/7/8 19:09, Paolo Bonzini wrote:
On 08/07/20 09:51, Xu, Like wrote:
Kindly ping.
I think we may need this patch, as we limit the maximum vPMU version to 2:
eax.split.version_id = min(cap.version, 2);
I don't think this is a problem. Are you planning to add support for
the f
Kindly ping.
I think we may need this patch, as we limit the maximum vPMU version to 2:
eax.split.version_id = min(cap.version, 2);
Thanks,
Like Xu
On 2020/6/24 9:59, Like Xu wrote:
Some new Intel platforms (such as TGL) already have the
fourth fixed counter TOPDOWN.SLOTS, but it has not b
Hi Sean,
First of all, are you going to queue the LBR patch series in your tree,
considering the host perf patches have already been queued in Peter's tree?
On 2020/7/8 4:21, Sean Christopherson wrote:
On Sat, Jun 13, 2020 at 05:42:50PM +0800, Xu, Like wrote:
On 2020/6/13 17:14, Xiaoyao Li
On 2020/7/3 15:56, Peter Zijlstra wrote:
On Thu, Jul 02, 2020 at 03:58:42PM +0200, Peter Zijlstra wrote:
On Thu, Jul 02, 2020 at 09:11:06AM -0400, Liang, Kan wrote:
On 7/2/2020 3:40 AM, Peter Zijlstra wrote:
On Sat, Jun 13, 2020 at 04:09:45PM +0800, Like Xu wrote:
Like Xu (10):
perf/x86/c
On 2020/6/19 17:40, Vitaly Kuznetsov wrote:
Guest crashes are observed on a Cascade Lake system when 'perf top' is
launched on the host, e.g.
Interesting, is it specific to Cascade Lake?
Would you mind sharing the output of
"cpuid -r -l 1 -1" and "cat /proc/cpuinfo | grep microcode | uniq" with
On 2020/6/13 17:14, Xiaoyao Li wrote:
On 6/13/2020 4:09 PM, Like Xu wrote:
When the LBR feature is reported by the vmx_get_perf_capabilities(),
the LBR fields in the [vmx|vcpu]_supported debugctl should be unmasked.
The debugctl msr is handled separately in vmx/svm and they're not
completely id
Hi RongQing,
On 2020/6/8 17:34, Li RongQing wrote:
The guest kernel reports a fixed CPU frequency in /proc/cpuinfo;
this is confusing to users when turbo is enabled, and aperf/mperf
can be used to show the current CPU frequency after 7d5905dc14a
"(x86 / CPU: Always show current CPU frequency in /proc/cpuin
Hi RongQing,
On 2020/6/5 9:44, Li RongQing wrote:
The guest kernel reports a fixed CPU frequency in /proc/cpuinfo;
this is confusing to users when turbo is enabled, and aperf/mperf
can be used to show the current CPU frequency after 7d5905dc14a
"(x86 / CPU: Always show current CPU frequency in /proc/cpuinf
On 2020/6/4 4:33, Sean Christopherson wrote:
Unconditionally return true when querying the validity of
MSR_IA32_PERF_CAPABILITIES so as to defer the validity check to
intel_pmu_{get,set}_msr(), which can properly give the MSR a pass when
the access is initiated from host userspace.
Regardless of
On 2020/5/29 16:47, Paolo Bonzini wrote:
On 29/05/20 09:43, Like Xu wrote:
Hi Paolo,
As you said, you will queue the v3 of KVM patch, but it looks like we
are missing that part at the top of the kvm/queue tree.
For your convenience, let me resend v4 so that we can upstream this
feature in the
Hi Paolo,
On 2020/5/14 16:30, Like Xu wrote:
Hi Peter,
Would you mind acking the host perf patches if they look good to you?
Hi Paolo,
Please help review the KVM proposal changes in this stable version.
Now, we can use upstream QEMU w/ '-cpu host' to test this feature, and
disable it by cleari
On 2020/5/19 22:57, Peter Zijlstra wrote:
On Tue, May 19, 2020 at 09:10:58PM +0800, Xu, Like wrote:
On 2020/5/19 19:15, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:53PM +0800, Like Xu wrote:
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index ea4faae56473
Hi Peter,
On 2020/5/19 18:45, Peter Zijlstra wrote:
On Tue, May 19, 2020 at 11:08:41AM +0800, Like Xu wrote:
Sure, I could reuse cpuc->intel_ctrl_guest_mask to rewrite this part:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index d788edb7c1f9..f1243e8211ca 100644
-
On 2020/5/19 19:15, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:53PM +0800, Like Xu wrote:
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index ea4faae56473..db185dca903d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -646,6 +64
On 2020/5/19 19:03, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
@@ -6698,6 +6698,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
if (vcpu_to_pmu(vcpu)->version)
atomic_switch_perf_msrs(vmx);
+
atomic_switch_umwait_contr
On 2020/5/19 19:01, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
+ struct perf_event_attr attr = {
+ .type = PERF_TYPE_RAW,
+ .size = sizeof(attr),
+ .pinned = true,
+ .exclude_host = true,
+
On 2020/5/19 19:00, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
+static inline bool event_is_oncpu(struct perf_event *event)
+{
+ return event && event->oncpu != -1;
+}
+/*
+ * It's safe to access LBR msrs from guest when they have not
+ * been passthr
On 2020/5/19 18:53, Peter Zijlstra wrote:
On Thu, May 14, 2020 at 04:30:50PM +0800, Like Xu wrote:
@@ -203,6 +206,12 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct
msr_data *msr_info)
case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
msr_info->data = pmu->global_ovf_ctr
On 2020/5/8 21:09, Peter Zijlstra wrote:
On Mon, Apr 27, 2020 at 11:16:40AM +0800, Like Xu wrote:
On 2020/4/24 20:16, Peter Zijlstra wrote:
And I suppose that is why you need that horrible:
needs_guest_lbr_without_counter() thing to begin with.
Do you suggest using an event->attr.config check to
Hi Paolo,
Thanks for your detailed comments.
On 2020/5/7 15:57, Paolo Bonzini wrote:
On 07/05/20 04:14, Like Xu wrote:
+static inline u64 vmx_get_perf_capabilities(void)
+{
+ u64 perf_cap = 0;
+
+ if (boot_cpu_has(X86_FEATURE_PDCM))
+ rdmsrl(MSR_IA32_PERF_CAPABILITIES
Hi Paolo,
Thanks for your comments!
On 2020/5/5 0:57, Paolo Bonzini wrote:
On 27/04/20 09:19, Like Xu wrote:
+ if (vmx_supported_perf_capabilities())
+ kvm_cpu_cap_check_and_set(X86_FEATURE_PDCM);
I think we can always set it, worst case it will be zero.
Sure, we could s