On Mon, Sep 23, 2013 at 07:11:21PM +0200, Stephane Eranian wrote:
> Ok, so what you are saying is that the ovfl_status is not maintained
> privately per counter but is shared among all PEBS counters by ucode. That's
> how you end up leaking between counters like that.
I only remember asking for clar
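[A minimal, self-contained sketch of the demux idea this observation points at. The struct, the MAX_PEBS_EVENTS value, and drain() are illustrative stand-ins, not the kernel's code: if ucode keeps one overflow status shared by all PEBS counters, a record can carry stale bits for counters that did not overflow, so each record's status must be masked against the counters actually PEBS-enabled before the record is attributed.]

#include <stdio.h>
#include <stdint.h>

#define MAX_PEBS_EVENTS 8          /* assumed bound, for illustration only */

/* Hypothetical, simplified PEBS record: just the status bitmask. */
struct pebs_record {
	uint64_t status;           /* bit i set => counter i flagged at capture */
};

static void drain(const struct pebs_record *base,
		  const struct pebs_record *top,
		  uint64_t pebs_enabled)
{
	const struct pebs_record *at;
	int bit;

	for (at = base; at < top; at++) {
		/* Mask off bits leaked from counters we never PEBS-enabled. */
		uint64_t status = at->status & pebs_enabled;

		for (bit = 0; bit < MAX_PEBS_EVENTS; bit++) {
			if (status & (1ULL << bit))
				printf("record %td -> counter %d\n",
				       at - base, bit);
		}
	}
}

int main(void)
{
	/* The second record carries a stale bit (counter 2). */
	struct pebs_record buf[] = { { 0x1 }, { 0x5 } };

	drain(buf, buf + 2, 0x1 /* only counter 0 PEBS-enabled */);
	return 0;
}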
On Mon, Sep 23, 2013 at 5:33 PM, Peter Zijlstra wrote:
> On Mon, Sep 23, 2013 at 05:25:19PM +0200, Stephane Eranian wrote:
>> > It's not just a broken threshold. When a PEBS event happens it can re-arm
>> > itself but only if you program a RESET value !0. We don't do that, so
>> > each counter shou
On Mon, Sep 23, 2013 at 05:25:19PM +0200, Stephane Eranian wrote:
> > It's not just a broken threshold. When a PEBS event happens it can re-arm
> > itself but only if you program a RESET value !0. We don't do that, so
> > each counter should only ever fire once.
> >
> > We must do this because PEBS
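[For context, a sketch of the DS save area fields being discussed, following the 64-bit layout in the SDM; the names mirror the kernel's struct debug_store, but the array size here is an assumption.]

#include <stdint.h>

struct debug_store {
	uint64_t bts_buffer_base;
	uint64_t bts_index;
	uint64_t bts_absolute_maximum;
	uint64_t bts_interrupt_threshold;
	uint64_t pebs_buffer_base;
	uint64_t pebs_index;             /* hardware append cursor */
	uint64_t pebs_absolute_maximum;
	uint64_t pebs_interrupt_threshold;
	uint64_t pebs_event_reset[8];    /* reload value applied after an assist */
};

/*
 * With pebs_event_reset[i] == 0 the counter is reloaded with 0 after the
 * assist and will not overflow again until the PMI reprograms it, so each
 * enabled counter should contribute at most one record per drain.  That is
 * the invariant the extra records are violating.
 */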
On Mon, Sep 16, 2013 at 6:29 PM, Peter Zijlstra wrote:
> On Mon, Sep 16, 2013 at 05:41:46PM +0200, Ingo Molnar wrote:
>>
>> * Stephane Eranian wrote:
>>
>> > Hi,
>> >
>> > Some updates on this problem.
>> > I have been running tests all weekend long on my HSW.
>> > I can reproduce the problem. W
* Peter Zijlstra wrote:
> On Mon, Sep 16, 2013 at 05:41:46PM +0200, Ingo Molnar wrote:
> >
> > * Stephane Eranian wrote:
> >
> > > Hi,
> > >
> > > Some updates on this problem.
> > > I have been running tests all weekend long on my HSW.
> > > I can reproduce the problem. What I know:
> > >
On Mon, Sep 16, 2013 at 05:41:46PM +0200, Ingo Molnar wrote:
>
> * Stephane Eranian wrote:
>
> > Hi,
> >
> > Some updates on this problem.
> > I have been running tests all weekend long on my HSW.
> > I can reproduce the problem. What I know:
> >
> > - It is not linked with callchain
> > - Th
* Stephane Eranian wrote:
> Hi,
>
> Some updates on this problem.
> I have been running tests all weekend long on my HSW.
> I can reproduce the problem. What I know:
>
> - It is not linked with callchain
> - The extra entries are valid
> - The reset values are still zeroes
> - The problem doe
Hi,
Some updates on this problem.
I have been running tests all weekend long on my HSW.
I can reproduce the problem. What I know:
- It is not linked with callchain
- The extra entries are valid
- The reset values are still zeroes
- The problem does not happen on SNB with the same test case
- The
* Stephane Eranian wrote:
> On Tue, Sep 10, 2013 at 7:29 AM, Ingo Molnar wrote:
> >
> > * Stephane Eranian wrote:
> >
> >> On Tue, Sep 10, 2013 at 6:38 AM, Ingo Molnar wrote:
> >> >
> >> > * Stephane Eranian wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> Ok, so I am able to reproduce the problem
On Tue, Sep 10, 2013 at 5:28 PM, Peter Zijlstra wrote:
> On Tue, Sep 10, 2013 at 07:15:19AM -0700, Stephane Eranian wrote:
>> The threshold is where to generate the interrupt. It does not mean
>> where to stop PEBS recording.
>
> It does, since we don't set a reset value. So once a PEBS assist
> h
On Tue, Sep 10, 2013 at 07:15:19AM -0700, Stephane Eranian wrote:
> The threshold is where to generate the interrupt. It does not mean
> where to stop PEBS recording.
It does, since we don't set a reset value. So once a PEBS assist
happens, that counter stops until we reprogram it in the PMI.
> S
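[To make the two readings concrete, a sketch, with a hypothetical pebs_ds type, of how the pointers get programmed: the threshold only selects the PMI point, the absolute maximum is where the hardware stops appending, and with reset values of zero each counter is one-shot anyway.]

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the PEBS fields of the DS area. */
struct pebs_ds {
	uint64_t pebs_buffer_base;
	uint64_t pebs_index;
	uint64_t pebs_absolute_maximum;
	uint64_t pebs_interrupt_threshold;
};

static void setup_pebs(struct pebs_ds *ds, uint64_t base,
		       size_t bufsz, size_t recsz)
{
	ds->pebs_buffer_base         = base;
	ds->pebs_index               = base;         /* hw append cursor     */
	ds->pebs_absolute_maximum    = base + bufsz; /* recording stops here */
	ds->pebs_interrupt_threshold = base + recsz; /* PMI after one record */
}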
On Tue, Sep 10, 2013 at 7:29 AM, Ingo Molnar wrote:
>
> * Stephane Eranian wrote:
>
>> On Tue, Sep 10, 2013 at 6:38 AM, Ingo Molnar wrote:
>> >
>> > * Stephane Eranian wrote:
>> >
>> >> Hi,
>> >>
>> >> Ok, so I am able to reproduce the problem using a simpler
>> >> test case with a simple multi
* Stephane Eranian wrote:
> On Tue, Sep 10, 2013 at 6:38 AM, Ingo Molnar wrote:
> >
> > * Stephane Eranian wrote:
> >
> >> Hi,
> >>
> >> Ok, so I am able to reproduce the problem using a simpler
> >> test case with a simple multithreaded program where
> >> #threads >> #CPUs.
> >
> > Does it go
On Tue, Sep 10, 2013 at 6:38 AM, Ingo Molnar wrote:
>
> * Stephane Eranian wrote:
>
>> Hi,
>>
>> Ok, so I am able to reproduce the problem using a simpler
>> test case with a simple multithreaded program where
>> #threads >> #CPUs.
>
> Does it go away if you use 'perf record --all-cpus'?
>
Haven'
* Stephane Eranian wrote:
> Hi,
>
> Ok, so I am able to reproduce the problem using a simpler
> test case with a simple multithreaded program where
> #threads >> #CPUs.
Does it go away if you use 'perf record --all-cpus'?
> [ 2229.021934] WARNING: CPU: 6 PID: 17496 at
> arch/x86/kernel/cpu/pe
* Stephane Eranian wrote:
> On Tue, Sep 10, 2013 at 5:51 AM, Ramkumar Ramachandra
> wrote:
> > Stephane Eranian wrote:
> >> a simple multithreaded program where
> >> #threads >> #CPUs
> >
> > To put it another way, does Intel's HT work for CPU-intensive and
> > IO-minimal tasks? I think HT assu
On Tue, Sep 10, 2013 at 5:51 AM, Ramkumar Ramachandra
wrote:
> Stephane Eranian wrote:
>> a simple multithreaded program where
>> #threads >> #CPUs
>
> To put it another way, does Intel's HT work for CPU-intensive and
> IO-minimal tasks? I think HT assumes some amount of inefficient IO
> coupled w
Stephane Eranian wrote:
> a simple multithreaded program where
> #threads >> #CPUs
To put it another way, does Intel's HT work for CPU-intensive and
IO-minimal tasks? I think HT assumes some amount of inefficient IO
coupled with pure CPU usage.
Stephane Eranian wrote:
> [ 2229.021966] Call Trace:
> [ 2229.021967] [] dump_stack+0x46/0x58
> [ 2229.021976] [] warn_slowpath_common+0x8c/0xc0
> [ 2229.021979] [] warn_slowpath_fmt+0x46/0x50
> [ 2229.021982] [] intel_pmu_drain_pebs_hsw+0xa8/0xc0
> [ 2229.021986] [] intel_pmu_handle_irq+0x2
Hi,
Ok, so I am able to reproduce the problem using a simpler
test case with a simple multithreaded program where
#threads >> #CPUs.
[ 2229.021934] WARNING: CPU: 6 PID: 17496 at
arch/x86/kernel/cpu/perf_event_intel_ds.c:1003
intel_pmu_drain_pebs_hsw+0xa8/0xc0()
[ 2229.021936] Unexpected number of
* Stephane Eranian wrote:
> Hi,
>
>
> And what was the perf record command line for this crash?
AFAICS it wasn't a crash but the WARN_ON() in intel_pmu_drain_pebs_hsw(),
at arch/x86/kernel/cpu/perf_event_intel_ds.c:1003.
at = (struct pebs_record_hsw *)(unsigned long)ds->pebs_buffer
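[A userspace paraphrase of the sanity check that fires here, under assumed values; the record size and the MAX_PEBS_EVENTS bound are placeholders, and the kernel actually compares against x86_pmu.max_pebs_events.]

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PEBS_EVENTS 8  /* assumed; the kernel uses x86_pmu.max_pebs_events */

/* Count whole records between the buffer base and the hardware index,
 * and warn when more arrived than one-shot counters can explain. */
static int drain_check(uint64_t buffer_base, uint64_t index, size_t recsz)
{
	int n = (int)((index - buffer_base) / recsz);

	if (n <= 0)
		return 0;
	if (n > MAX_PEBS_EVENTS)   /* the WARN_ON condition at line 1003 */
		fprintf(stderr, "Unexpected number of pebs records %d\n", n);
	return n;
}

int main(void)
{
	/* Ten records of a hypothetical 192-byte HSW-style record trip it. */
	drain_check(0x1000, 0x1000 + 10 * 192, 192);
	return 0;
}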
Hi,
And what was the perf record command line for this crash?
On Mon, Sep 9, 2013 at 12:05 PM, Peter Zijlstra wrote:
> On Sat, Sep 07, 2013 at 07:17:28PM -0700, Linus Torvalds wrote:
>> This is new for me, but I suspect it is more related to the new
>> Haswell CPU I have than necessarily the 3
On Tue, Sep 10, 2013 at 05:06:06PM +0900, Namhyung Kim wrote:
> Hi,
>
> On Thu, 5 Sep 2013 14:42:44 +0200, Frederic Weisbecker wrote:
> > On Thu, Sep 05, 2013 at 12:56:39PM +0200, Ingo Molnar wrote:
> >>
> >> (Cc:-ed Frederic and Namhyung as well, it's about bad overhead in
> >> tools/perf/util/
Hi,
On Thu, 5 Sep 2013 14:42:44 +0200, Frederic Weisbecker wrote:
> On Thu, Sep 05, 2013 at 12:56:39PM +0200, Ingo Molnar wrote:
>>
>> (Cc:-ed Frederic and Namhyung as well, it's about bad overhead in
>> tools/perf/util/hist.c.)
>>
>> * Linus Torvalds wrote:
>>
>> > On Tue, Sep 3, 2013 at 6:2
On Sat, Sep 07, 2013 at 07:17:28PM -0700, Linus Torvalds wrote:
> This is new for me, but I suspect it is more related to the new
> Haswell CPU I have than necessarily the 3.12 perf pull request.
>
> Regardless, nothing bad happened, but my dmesg has this in it:
>
>Unexpected number of pebs r
This is new for me, but I suspect it is more related to the new
Haswell CPU I have than necessarily the 3.12 perf pull request.
Regardless, nothing bad happened, but my dmesg has this in it:
Unexpected number of pebs records 10
when I was profiling some git workloads. Full trace appended.
* Ingo Molnar wrote:
>* 'perf report/top' enhancements:
>
> . Do annotation using /proc/kcore and /proc/kallsyms when
> available, removing the forced need for a vmlinux file for kernel
> assembly annotation. This also improves this use case because
> vm
On Thu, Sep 05, 2013 at 02:51:01PM +0200, Ingo Molnar wrote:
>
> * Frederic Weisbecker wrote:
>
> > > Btw., a side note, append_chain() is a rather confusing function in
> > > itself, with logic-inversion gems like:
> > >
> > > if (!found)
> > > found =
* Frederic Weisbecker wrote:
> > Btw., a side note, append_chain() is a rather confusing function in
> > itself, with logic-inversion gems like:
> >
> > if (!found)
> > found = true;
>
> The check is pointless yeah, I'll remove that.
Are you sure it ca
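[A reduced illustration of the pattern in question; matched_any() is hypothetical, not the real append_chain(). The guard buys nothing, because storing true into an already-true flag is a no-op: the check can go, while the assignment must stay.]

#include <stdbool.h>

static bool matched_any(const int *a, const int *b, int n)
{
	bool found = false;
	int i;

	for (i = 0; i < n && a[i] == b[i]; i++) {
		/* before: if (!found) found = true;  (the check is dead)  */
		found = true;  /* after: identical behaviour, one branch less */
	}
	return found;
}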
On Thu, Sep 05, 2013 at 12:56:39PM +0200, Ingo Molnar wrote:
>
> (Cc:-ed Frederic and Namhyung as well, it's about bad overhead in
> tools/perf/util/hist.c.)
>
> * Linus Torvalds wrote:
>
> > On Tue, Sep 3, 2013 at 6:29 AM, Ingo Molnar wrote:
> > >
> > > Please pull the latest perf-core-for-l
(Cc:-ed Frederic and Namhyung as well, it's about bad overhead in
tools/perf/util/hist.c.)
* Linus Torvalds wrote:
> On Tue, Sep 3, 2013 at 6:29 AM, Ingo Molnar wrote:
> >
> > Please pull the latest perf-core-for-linus git tree from:
>
> I don't think this is new at all, but I just tried to
On Tue, Sep 3, 2013 at 6:29 AM, Ingo Molnar wrote:
>
> Please pull the latest perf-core-for-linus git tree from:
I don't think this is new at all, but I just tried to do a perf
record/report of "make -j64 test" on git:
It's a big perf.data file (1.6G), but after it has done the
"processing time
On Tue, 3 Sep 2013, Ingo Molnar wrote:
>* New ABI details:
> . Make Power7 events available via sysfs, by Runzhen Wang.
So we're really going to add 100+ Power7 events to the stable sysfs ABI?
Are all the new events listed under the sysfs ABI documentation?
Are we going to add all of
* Arnaldo Carvalho de Melo wrote:
> Em Tue, Sep 03, 2013 at 03:29:33PM +0200, Ingo Molnar escreveu:
> > Linus,
> >
> > Please pull the latest perf-core-for-linus git tree from:
>
> There were some misattributions I found; from memory, clarifying here,
> FWIW:
>
> >* 'perf test' enhancemen
Em Tue, Sep 03, 2013 at 03:29:33PM +0200, Ingo Molnar escreveu:
> Linus,
>
> Please pull the latest perf-core-for-linus git tree from:
There were some misattributions I found; from memory, clarifying here,
FWIW:
>* 'perf test' enhancements:
>
> . Add various improvements and fixes t