Hi Krzysztof,
On 25.04.2018 05:32, Krzysztof Kozlowski wrote:
The Exynos5440 is not actively developed, there are no development
boards available and probably there are no real products with it.
Remove wide-tree support for Exynos5440.
Signed-off-by: Krzysztof Kozlowski
Reviewed-by: Andi Shyti
Thanks,
Andi
On 12:43 19.05.20, Ricardo Neri wrote:
> > > Running the same executable on the exact same kernel (and userland) but
> > > on an Intel i7-8565U doesn't crash at this point. I am guessing the
> > > emulation is supposed to do something different on AMD CPUs?
>
> I am surprised you don't see it on th
On 11:56 19.05.20, Brendan Shanks wrote:
> The problem is that the kernel does not emulate/spoof the SLDT instruction,
> only SGDT, SIDT, and SMSW.
> SLDT and STR weren't thought to be commonly used, so emulation/spoofing
> wasn’t added.
> In the last few months I have seen reports of one or two
Hi Krzysztof,
On 18.04.2018 00:32, Krzysztof Kozlowski wrote:
On Fri, Jan 26, 2018 at 03:04:54PM +0900, Andi Shyti wrote:
The mms114 binding [1] specifies that the 'x' and 'y' should be
called respectively 'touchscreen-size-x' and 'touchscreen-size-y
On 10:54 09.06.20, Brendan Shanks wrote:
> Add emulation/spoofing of SLDT and STR for both 32- and 64-bit
> processes.
>
> Wine users have found a small number of Windows apps using SLDT that
> were crashing when run on UMIP-enabled systems.
>
> Reported-by: Andreas Rammhold
> Originally-by: Ric
terface of soft-offlining is open for userspace, so this bug can
> lead to a DoS attack and should be fixed immediately.
The interface is root-only and root can do everything anyway, so it's
not really a security issue.
-Andi
--
a...@linux.intel.com -- Speaking for myself only
--
To unsubscribe f
ERF_FORMAT_WEIGHT = 1U << 4,
>
> what's PERF_FORMAT_WEIGHT for?
It was used in an earlier iteration, but is now obsolete. I'll remove it.
-Andi
s not a bug.
If you're out of memory in user space the only thing you can do is to
exit.
>
> > + if (olen == 0)
> > + **s = 0;
> > + }
> > + strcat(*s, a);
> > +}
>
> Could this one be moved to util/string.c in some generic form?
's not implemented in hw yet, but in general it's allowed.
Fixed.
-Andi
on the same CPU as you get, right?
In this case I would rather use RCU. It's clearly unusable for anything
blocking (or without get_cpu). Normally RCU already handles the "ref
count for the short non-blocking case".
Is that really true for AIO? It seems dubious.
-Andi
On Thu, Nov 29, 2012 at 10:57:20AM -0800, Kent Overstreet wrote:
> On Thu, Nov 29, 2012 at 10:45:04AM -0800, Andi Kleen wrote:
> > Kent Overstreet writes:
> >
> > > This implements a refcount with similar semantics to
> > > atomic_get()/atomic_dec_and_test(), th
Dave Chinner writes:
>
> Comments, thoughts and flames all welcome.
Doing the reclaim per CPU sounds like a big change in the VM balance.
Doesn't this invalidate some zone reclaim mode settings?
How did you validate all this?
-Andi
er_nmi_handler() (patch
> 9c48f1c629ecfa114850c03f875c6691003214de), which doesn't call
> vmalloc_sync_all(). Is it ok to skip vmalloc_sync_all() in this path?
Yes it's ok for this case. vmalloc_sync_all is only needed when the
notifier is in freshly loaded module code.
-Andi
imited.
But you don't have any limit on getting out-of-sync.
You could make it 64-bit, but then wraps could happen.
-Andi
behaviour for
people using the NUMA affinity APIs explicitly. I don't think that's a
good idea; if someone set the affinity explicitly, the kernel had better
follow it.
If you want to change the behaviour for non-DEFAULT policies like this,
please use a new policy type.
-Andi
o assume registers do not get changed
between assembler statements or assembler statements do not get
reordered. Better always put such values into explicit variables or
merge them into a single asm statement.
asm volatile is also not enough to prevent reordering. If anything
you would ne
en 3.6.2 and 3.6.6. These are merely the kernels were I have seen the
> problem; it may well affect other kernels.
A common cause of this would be running out of memory.
While this should eventually resolve itself it may take a long time
and the system may appear frozen.
I would rerun with an
ht, we
> should be using size_t for anything userspace can manipulate.
The regular atomic_t is limited in ways that you are not.
See my original mail.
-Andi
ssymetric CPU case with your ref count no such limiter
exists.
-Andi
ed for the MMX/SSE
> implementations without any problems for 9 years now.
It's still wrong.
Lying to the compiler usually bites you at some point.
-Andi
for details.
v3: now even more bite-sized. Qualifier constraints merged earlier.
Add some Reviewed-bys.
v4: Rename some variables, add some comments and other minor changes in the
first patch.
-Andi
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Reviewed-by: Stephane Eranian
Signed-off-by
From: Andi Kleen
Recent Intel CPUs have a new alternative MSR range for perfctrs that allows
writing the full counter width. Enable this range if the hardware reports it
using a new capability bit. This lowers the overhead of perf stat slightly
because it has to do fewer interrupts to accumulate the
From: Andi Kleen
Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not the off-by-one instruction. So with pr
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events and two
new counter bits.
There are some new counter flags that need to be prevented
from being set on fixed counters, and allowed to be set
for generic counters.
Also we add support for the
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier non-P4 cores.
Signed-off-by: Andi
es? Were they already full-width? The SDM does not explain what
> happens to them with this extension. Could you clarify?
I tested and they always support the full reported width.
-Andi
v2: Print the feature at boot
Signed-off-by: Andi Kleen
diff --git a/arch/x86/include/uapi/asm/msr-index.h
b/arch/x86/include/uapi/asm/msr-index.h
index 433a59f..af41a77 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -163,6 +163,9 @
Hi,
> TC358765 is DSI-to-LVDS transmitter from Toshiba, used in
> OMAP44XX Blaze Tablet and Blaze Tablet2 boards.
I had a really fast look and I have a few comments
> +static int tc358765_read_register(struct omap_dss_device *dssdev,
> + u16 reg, u32 *val)
> +{
, add some comments and other minor changes.
v5: Address some minor review feedback. Port to latest perf/core
Add some Reviewed/Tested-bys.
-Andi
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier non-P4 cores.
Signed-off-by: Andi
From: Andi Kleen
Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not the off-by-one instruction. So with pr
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Reviewed-by: Stephane Eranian
Signed-off-by
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events and two
new counter bits.
There are some new counter flags that need to be prevented
from being set on fixed counters, and allowed to be set
for generic counters.
Also we add support for the
From: Andi Kleen
Recent Intel CPUs like Haswell and IvyBridge have a new alternative MSR
range for perfctrs that allows writing the full counter width. Enable this
range if the hardware reports it using a new capability bit.
This lowers the overhead of perf stat slightly because it has to do
From: Andi Kleen
Newer gcc enables the var-tracking pass with -g to keep track which
registers contain which variables. This is one of the slower passes in gcc.
With reduced debug info (aimed at objdump -S, but not using a full debugger)
we don't need this fine grained tracking. But i
ally know the internals of the omap2 driver
controller, but not being able to use smbus, definitely sucks!
> Will be fixed in next patchset
What about sending a patch V2? :)
Andi
ucture */
> + tc358765_i2c->client = client;
> +
> + /* store private data structure pointer on i2c_client structure */
> + i2c_set_clientdata(client, tc358765_i2c);
> +
> + /* init mutex */
> + mutex_init(&tc358765_i2c->xfer_lock);
> + de
hend workloads
is the number of cache lines touched in the submission path
(especially potentially modified ones). How does your new code fare on that?
-Andi
rely on the community.
Whitelisting is somewhat difficult because it affects the architectural
mode too.
I don't really expect problems from this change, we should probably
have always done it like this.
-Andi
6,378,623,674 cycles
>
> 2.000663750 S0-C2 2 6,264,127,589 cycles
>
> 2.000663750 S0-C3 2 6,305,346,613 cycles
>
-Andi
takes in printf are common.
Better to duplicate the sprintf.
The rest looks good to me.
Reviewed-by: Andi Kleen
-Andi
appen.
I think Cx could be added relatively easily as a software event,
but frequency doesn't fit very well into the perf counting model,
as it's really sampling.
-Andi
From: Andi Kleen
Add support for the Haswell extended (fmt2) PEBS format.
It has a superset of the nhm (fmt1) PEBS fields, but has a longer record so
we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier family 6 cores.
Tested on Haswell, IvyB
, add some comments and other minor changes.
Add some Reviewed/Tested-bys.
v5: Address some minor review feedback. Port to latest perf/core
v6: Add just some variable names, add comments, edit descriptions, some
more testing, rebased to latest perf/core
-Andi
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events and two
new counter bits.
There are some new counter flags that need to be prevented
from being set on fixed counters, and allowed to be set
for generic counters.
Also we add support for the
From: Andi Kleen
Recent Intel CPUs like Haswell and IvyBridge have a new alternative MSR
range for perfctrs that allows writing the full counter width. Enable this
range if the hardware reports it using a new capability bit.
This lowers the overhead of perf stat slightly because it has to do
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Reviewed-by: Stephane Eranian
Signed-off-by
((s64)from) << 3) >> 3);
> > + }
> > }
> >
> Wouldn't all that be more readable with a switch-case, especially given
> that lbr_format could be extended.
The current version works for me.
-Andi
return -EIO;
> > + }
> same comment about -EIO vs. EOPNOTSUPP. sample_period is u64
> so, it's always >= 0. Where does this 31-bit limit come from?
That's what perf stat uses when running in the KVM guest.
> Experimentation?
The code does > 0, not >
period set by default when you
> just count?
Originally I had just > 0, but then I found that perf stat from the
guest doesn't work anymore because it sets a very high overflow
to accumulate counters.
The 0x7fff is a somewhat arbitrary threshold to detect this case.
-Andi
e test
> for freq=1.
I'm aware that there are some configs that can slip through, but there's
no other choice if we still allow guest perf stat. The cutoff is
somewhat arbitrary.
It's not a correctness problem, just things can be unexpectedly slower
and you may see unexpected NM
On Thu, Jan 31, 2013 at 06:19:01PM +0100, Stephane Eranian wrote:
> Andi,
>
> Are you going to post a new version based on my feedback or do you stay
> with what you posted on 1/25?
I'm posting a new version today, already added all changes.
-Andi
From: Andi Kleen
Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not the off-by-one instruction. So with pr
From: Andi Kleen
Implement the TSX transaction and checkpointed transaction qualifiers for
Haswell. This allows, e.g., profiling the number of cycles in transactions.
The checkpointed qualifier requires forcing the event to
counter 2, implement this with a custom constraint for Haswell.
Also
From: Andi Kleen
Recent Intel CPUs have a new alternative MSR range for perfctrs that allows
writing the full counter width. Enable this range if the hardware reports it
using a new capability bit. This lowers the overhead of perf stat slightly
because it has to do fewer interrupts to accumulate the
details on TSX please see
http://halobates.de/adding-lock-elision-to-linux.pdf
or next week's LWN.
v2: Addressed Stephane's feedback. See individual patches for details.
-Andi
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Signed-off-by: Andi Kleen
---
arch/x86/kernel
From: Andi Kleen
Haswell has two additional LBR from flags for TSX: intx and abort, implemented
as a new v4 version of the LBR format.
Handle those and adjust the sign extension code to still extend correctly.
The flags are exported similarly in the LBR record to the existing misprediction
From: Andi Kleen
When the LBR format is unknown disable LBR recording. This prevents
crashes when the LBR address is misdecoded and mis-sign extended.
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event_intel_lbr.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff
From: Andi Kleen
Make perf record -j aware of the new in_tx,no_tx,abort_tx branch qualifiers.
v2: ABORT -> ABORTTX
v3: Add more _
Signed-off-by: Andi Kleen
---
tools/perf/Documentation/perf-record.txt |3 +++
tools/perf/builtin-record.c |3 +++
2 files changed
From: Andi Kleen
Add LBR filtering for branch in transaction, branch not in transaction
or transaction abort. This is exposed as new sample types.
v2: Rename ABORT to ABORTTX
v3: Use table instead of if
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event_intel_lbr.c | 58
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier non-P4 cores.
Signed-off-by: Andi
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events. Further
differences are handled in follow-on patches.
There are some new counter flags that need to be prevented
from being set on fixed counters.
Contains fixes from Stephane Eranian
v2: Folded
From: Andi Kleen
With checkpointed counters there can be a situation where the counter
is overflowing, aborts the transaction, is set back to a non overflowing
checkpoint, and causes an interrupt. The interrupt doesn't see the overflow
because it has been checkpointed. This is then a spuriou
From: Andi Kleen
Extend the perf branch sorting code to support sorting by intx
or abort qualifiers. Also print out those qualifiers.
This also fixes up some of the existing sort key documentation.
We do not support notx here, because it's simply not showing
the intx flag.
v2: Readd fla
e 10 straightforward patches (which have already been reviewed by
others anyway). There's nothing particularly complicated or risky in
any of them, and you're an experienced kernel code reviewer.
-Andi
.
For stat there's not really a compelling reason to integrate
it; the usual wrappers work as well. They have the advantage that
they can be written in real programming languages, instead of trying
to invent a new one.
Expressions integrated would be mainly useful for things like
"counting pe
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier non-P4 cores.
Signed-off-by: Andi
nd PEBS support
- Late unmasking of the PMI
- Support for wide counters
v2: Addressed Stephane's feedback. See individual patches for details.
v3: now even more bite-sized. Qualifier constraints merged earlier.
Add some Reviewed-bys.
-Andi
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events and two
new counter bits.
There are some new counter flags that need to be prevented
from being set on fixed counters, and allowed to be set
for generic counters.
Also we add support for the
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Reviewed-by: Stephane Eranian
Signed-off-by
From: Andi Kleen
Recent Intel CPUs have a new alternative MSR range for perfctrs that allows
writing the full counter width. Enable this range if the hardware reports it
using a new capability bit. This lowers the overhead of perf stat slightly
because it has to do fewer interrupts to accumulate the
From: Andi Kleen
Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not the off-by-one instruction. So with pr
From: Andi Kleen
Add basic Haswell PMU support.
Similar to SandyBridge, but has a few new events. Further
differences are handled in follow-on patches.
There are some new counter flags that need to be prevented
from being set on fixed counters.
Contains fixes from Stephane Eranian
v2: Folded
From: Andi Kleen
Haswell has two additional LBR from flags for TSX: intx and abort, implemented
as a new v4 version of the LBR format.
Handle those and adjust the sign extension code to still extend correctly.
The flags are exported similarly in the LBR record to the existing misprediction
From: Andi Kleen
Extend the perf branch sorting code to support sorting by intx
or abort qualifiers. Also print out those qualifiers.
This also fixes up some of the existing sort key documentation.
We do not support notx here, because it's simply not showing
the intx flag.
v2: Readd fla
from
git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc hsw/pmu4-basics
For more details on the Haswell PMU please see the SDM. For more details on TSX
please see http://halobates.de/adding-lock-elision-to-linux.pdf
-Andi
From: Andi Kleen
Recent Intel CPUs have a new alternative MSR range for perfctrs that allows
writing the full counter width. Enable this range if the hardware reports it
using a new capability bit. This lowers the overhead of perf stat slightly
because it has to do fewer interrupts to accumulate the
From: Andi Kleen
Make perf record -j aware of the new in_tx,no_tx,abort_tx branch qualifiers.
v2: ABORT -> ABORTTX
v3: Add more _
Signed-off-by: Andi Kleen
---
tools/perf/Documentation/perf-record.txt |3 +++
tools/perf/builtin-record.c |3 +++
2 files changed
From: Andi Kleen
This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. Shouldn't make any difference
for earlier non-P4 cores.
Signed-off-by: Andi
From: Andi Kleen
When the LBR format is unknown disable LBR recording. This prevents
crashes when the LBR address is misdecoded and mis-sign extended.
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event_intel_lbr.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff
From: Andi Kleen
With checkpointed counters there can be a situation where the counter
is overflowing, aborts the transaction, is set back to a non overflowing
checkpoint, and causes an interrupt. The interrupt doesn't see the overflow
because it has been checkpointed. This is then a spuriou
From: Andi Kleen
Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.
The main advantage is the new "EventingRip" support which directly
gives the instruction, not the off-by-one instruction. So with pr
From: Andi Kleen
Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.
v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event.h |2 ++
arch/x86
From: Andi Kleen
Implement the TSX transaction and checkpointed transaction qualifiers for
Haswell. This allows, e.g., profiling the number of cycles in transactions.
The checkpointed qualifier requires forcing the event to
counter 2, implement this with a custom constraint for Haswell.
Also
From: Andi Kleen
Add LBR filtering for branch in transaction, branch not in transaction
or transaction abort. This is exposed as new sample types.
v2: Rename ABORT to ABORTTX
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event_intel_lbr.c | 31 +--
include
From: Andi Kleen
For some events it's useful to weight sample with a hardware
provided number. This expresses how expensive the action the
sample represents was. This allows the profiler to scale
the samples to be more informative to the programmer.
There is already the period which is
From: Andi Kleen
Add infrastructure to generate event aliases in /sys/devices/cpu/events/
And use this to set up user friendly aliases for the common TSX events.
TSX tuning relies heavily on the PMU, so it's important to be user friendly.
This replaces the generic transaction events
From: Andi Kleen
Add a precise qualifier, like cpu/event=0x3c,precise=1/
This is needed so that the kernel can request enabling PEBS
for TSX events. The parser bails out on any sysfs parse errors,
so this is needed in any case to handle any event on the TSX
perf kernel.
v2: Allow 3 as value
From: Andi Kleen
Add an instructions-p event alias that uses the PDIR randomized instruction
retirement event. This is useful to avoid some systematic sampling shadow
problems. Normally PEBS sampling has a systematic shadow. With PDIR
enabled the hardware adds some randomization that
From: Andi Kleen
Add a way for the CPU initialization code to register additional events,
and merge them into the events attribute directory. Used in the next
patch.
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event.c | 29 +
arch/x86/kernel/cpu
From: Andi Kleen
When an event fails to parse and it's not in a new style format,
try to parse it again as a cpu event.
This allows using sysfs-exported events directly without //, so I can use
perf record -e tx-aborts ...
instead of
perf record -e cpu/tx-aborts/
v2: Handle multiple e
From: Andi Kleen
perf record has a new option -W that enables weighted sampling.
Add sorting support in top/report for the average weight per sample and the
total weight sum. This allows comparing both the relative cost per event
and the total cost over the measurement period.
Add the necessary
Make events_sysfs_show unstatic again to fix compilation]
Signed-off-by: Stephane Eranian
Signed-off-by: Andi Kleen
---
arch/x86/kernel/cpu/perf_event.c | 28 +---
arch/x86/kernel/cpu/perf_event.h | 26 ++
2 files changed, 39 insertions(+), 15 dele
From: Andi Kleen
Add histogram support for the transaction flags. Each flags instance becomes
a separate histogram. Support sorting and displaying the flags in report
and top.
The patch is fairly large, but it's really mostly just plumbing to pass the
flags around.
v2: Increase column