On Tue, Jan 3, 2017 at 4:00 AM, Mark Rutland wrote:
> On Sun, Jan 01, 2017 at 01:18:27PM -0800, David Carrillo-Cisneros wrote:
On Sun, Jan 01, 2017 at 01:18:27PM -0800, David Carrillo-Cisneros wrote:
> On Sun, Jan 1, 2017 at 12:20 PM, David Carrillo-Cisneros wrote:
On Sun, Jan 1, 2017 at 12:20 PM, David Carrillo-Cisneros wrote:
> From: Mark Rutland
On Thu, Nov 10, 2016 at 05:26:32PM +0100, Peter Zijlstra wrote:
On Thu, Nov 10, 2016 at 02:10:37PM +0000, Mark Rutland wrote:
> > I don't think those need be tracked at all, they're immaterial for
> > actual scheduling. Once we ioctl() them back to life we can insert
> > them into the tree.
>
> Sure, that sounds fine for scheduling (including big.LITTLE).
>
> I might still be misunderstanding something, but I don't think that
> helps Kan's case: since INACTIVE events which will fail their filters
> (including the CPU check)
On Thu, Nov 10, 2016 at 01:58:04PM +0100, Peter Zijlstra wrote:
On Thu, Nov 10, 2016 at 12:26:18PM +0000, Mark Rutland wrote:
> On Thu, Nov 10, 2016 at 01:12:53PM +0100, Peter Zijlstra wrote:
> > Ah, so the tree would in fact only contain 'INACTIVE' events :-)
>
> Ah. :)
>
> That explains some of the magic, but...
>
> > That is, when no events are on the ha
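For illustration only, here is a minimal userspace model of the scheme sketched in the exchange above: only INACTIVE events sit in the tree, ACTIVE events are on the hardware and off the tree, and disabled events are not tracked at all until an enable ioctl re-inserts them. POSIX tsearch()/tdelete() stand in for the kernel rbtree, and every name and field below is invented; this is not the actual perf implementation.

/*
 * Toy model, not kernel code: the tree only ever holds INACTIVE events.
 */
#include <search.h>
#include <stdio.h>

enum toy_state { TOY_OFF, TOY_INACTIVE, TOY_ACTIVE };

struct toy_event {
	int id;
	int cpu;		/* CPU filter, -1 means "any CPU" */
	enum toy_state state;
};

static void *inactive_tree;	/* the tree of schedulable INACTIVE events */

static int toy_cmp(const void *a, const void *b)
{
	const struct toy_event *ea = a, *eb = b;

	if (ea->cpu != eb->cpu)
		return ea->cpu < eb->cpu ? -1 : 1;
	return ea->id - eb->id;
}

/* PERF_EVENT_IOC_ENABLE analogue: start tracking the event again */
static void toy_enable(struct toy_event *e)
{
	e->state = TOY_INACTIVE;
	tsearch(e, &inactive_tree, toy_cmp);
}

/* PERF_EVENT_IOC_DISABLE analogue: immaterial for scheduling, drop it */
static void toy_disable(struct toy_event *e)
{
	tdelete(e, &inactive_tree, toy_cmp);
	e->state = TOY_OFF;
}

/* put the event on the (pretend) PMU: ACTIVE events leave the tree */
static void toy_sched_in(struct toy_event *e)
{
	tdelete(e, &inactive_tree, toy_cmp);
	e->state = TOY_ACTIVE;
}

/* take it off the PMU: it becomes INACTIVE and schedulable again */
static void toy_sched_out(struct toy_event *e)
{
	e->state = TOY_INACTIVE;
	tsearch(e, &inactive_tree, toy_cmp);
}

int main(void)
{
	struct toy_event e = { .id = 1, .cpu = 3, .state = TOY_OFF };

	toy_enable(&e);
	toy_sched_in(&e);	/* tree is now empty: everything is on the PMU */
	toy_sched_out(&e);
	toy_disable(&e);
	printf("final state: %d (TOY_OFF)\n", e.state);
	return 0;
}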
On Thu, Nov 10, 2016 at 01:12:53PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 10, 2016 at 12:04:23PM +0000, Mark Rutland wrote:
> > On Thu, Nov 10, 2016 at 12:37:05PM +0100, Peter Zijlstra wrote:
>
> > > So the problem is finding which events are active when.
> > > If we stick all events in an RB
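Illustration only, not kernel code: the reason a tree (rather than a flat list) helps Kan's case and big.LITTLE is that it can be keyed by the event's CPU filter, so sched-in on a given CPU only has to consider the "any CPU" entries plus the entries for that CPU. The sketch below uses a sorted array and scans the relevant prefix instead of doing an rbtree range lookup; all names are made up.

#include <stdio.h>
#include <stdlib.h>

struct toy_event {
	int id;
	int cpu;	/* -1: any CPU */
};

static int cmp_cpu(const void *a, const void *b)
{
	const struct toy_event *ea = a, *eb = b;
	return ea->cpu - eb->cpu;
}

static void sched_in_cpu(struct toy_event *evs, size_t n, int this_cpu)
{
	size_t i;

	/* events are sorted by ->cpu, so each filter value is contiguous */
	for (i = 0; i < n && evs[i].cpu <= this_cpu; i++) {
		if (evs[i].cpu == -1 || evs[i].cpu == this_cpu)
			printf("schedule event %d on cpu %d\n",
			       evs[i].id, this_cpu);
	}
}

int main(void)
{
	struct toy_event evs[] = {
		{ .id = 0, .cpu = -1 }, { .id = 1, .cpu = 2 },
		{ .id = 2, .cpu = 2 },  { .id = 3, .cpu = 40 },
	};
	size_t n = sizeof(evs) / sizeof(evs[0]);

	qsort(evs, n, sizeof(evs[0]), cmp_cpu);
	sched_in_cpu(evs, n, 2);	/* sees the -1 and cpu==2 events only */
	return 0;
}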
On Thu, Nov 10, 2016 at 12:04:23PM +0000, Mark Rutland wrote:
> On Thu, Nov 10, 2016 at 12:37:05PM +0100, Peter Zijlstra wrote:
> > So the problem is finding which events are active when.
>
> Sure.
>
> If we only care about PERF_EVENT_STATE_ACTIVE, then I think we can
> fairly easily maintain a
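A toy model of the "only care about PERF_EVENT_STATE_ACTIVE" idea above, with invented structures rather than the actual perf code: a per-context list maintained at sched in/out, so whoever needs the currently active events walks that short list instead of every event in the context.

#include <stdio.h>

struct toy_event {
	int id;
	int active;
	struct toy_event *next_active;	/* link in the active list */
};

struct toy_ctx {
	struct toy_event *active_head;
};

static void toy_sched_in(struct toy_ctx *ctx, struct toy_event *e)
{
	e->active = 1;
	e->next_active = ctx->active_head;
	ctx->active_head = e;
}

static void toy_sched_out(struct toy_ctx *ctx, struct toy_event *e)
{
	struct toy_event **p;

	for (p = &ctx->active_head; *p; p = &(*p)->next_active) {
		if (*p == e) {
			*p = e->next_active;
			break;
		}
	}
	e->active = 0;
}

int main(void)
{
	struct toy_ctx ctx = { 0 };
	struct toy_event a = { .id = 1 }, b = { .id = 2 };
	struct toy_event *e;

	toy_sched_in(&ctx, &a);
	toy_sched_in(&ctx, &b);
	toy_sched_out(&ctx, &a);

	for (e = ctx.active_head; e; e = e->next_active)
		printf("active: %d\n", e->id);	/* prints only event 2 */
	return 0;
}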
On Thu, Nov 10, 2016 at 12:37:05PM +0100, Peter Zijlstra wrote:
On Thu, Nov 10, 2016 at 11:05:17AM +0000, Mark Rutland wrote:
Hi,
On Thu, Nov 10, 2016 at 09:33:55AM +0100, Peter Zijlstra wrote:
Yes this is a problem, but no this cannot be done. We can't have per-cpu
storage per task. That rapidly explodes.
Mark is looking at replacing this stuff with an rb-tree for big-little,
that would also allow improving this I think.
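A rough back-of-the-envelope for why per-CPU storage per task explodes; the task count and per-entry size below are assumptions, the point is just the multiplication.

#include <stdio.h>

int main(void)
{
	unsigned long nr_tasks = 10000;	/* assumed busy server              */
	unsigned long nr_cpus  = 64;	/* e.g. the Skylake box below       */
	unsigned long entry    = 64;	/* one cacheline per task/CPU pair  */

	printf("%lu MB just for the per-task per-CPU slots\n",
	       nr_tasks * nr_cpus * entry >> 20);
	return 0;
}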
From: Kan Liang
The perf per-process monitoring overhead increases rapidly with the
number of events and CPUs.
Here is some data from the overhead test on a Skylake server with 64
logical CPUs. The elapsed time of AIM7 is used to measure the overhead.
perf record -e $event_list -p $pi