On Tue, 27 Dec 2016, David Carrillo-Cisneros wrote:
> On Tue, Dec 27, 2016 at 3:10 PM, Andi Kleen wrote:
>> On Tue, Dec 27, 2016 at 01:33:46PM -0800, David Carrillo-Cisneros wrote:
>>> When using one intel_cmt/llc_occupancy/ cgroup perf_event on one CPU, the
>>> avg time to do __perf_event_task_sched_out + __perf_event_task_sched_in is
>>> ~1170ns.
>>>
>>> Most of the time is spent in the cgroup ctx switch (~1120ns).

The perf overhead I was thinking of, at least, was during the context switch,
which is the more constant overhead (the event creation is only a one-time
cost).

I was trying to see an alternative where
1. the user specifies the continuous monitor with a perf attr in open, and
2. the driver allocates the task/cgroup RMID [...]
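For reference, the kind of event being timed above can be set up as in the
minimal sketch below (not taken from the patches): it opens one llc_occupancy
cgroup event on CPU 0 via perf_event_open() with PERF_FLAG_PID_CGROUP. The PMU
name "intel_cqm" ("intel_cmt" in the series under discussion), the config
value 1 for llc_occupancy, and the cgroup path are assumptions that should be
checked against sysfs on the target kernel.

/*
 * Minimal sketch, not from this patch set: open one llc_occupancy cgroup
 * event on CPU 0, i.e. the kind of event whose sched_out/sched_in cost is
 * quoted above. Assumes the PMU is exposed as "intel_cqm" and that config=1
 * selects llc_occupancy; verify both under /sys/bus/event_source/devices/.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr attr;
        unsigned long long count;
        unsigned int type;
        int cgrp_fd, ev_fd;
        FILE *f;

        /* Dynamic PMU type id published by the kernel in sysfs. */
        f = fopen("/sys/bus/event_source/devices/intel_cqm/type", "r");
        if (!f || fscanf(f, "%u", &type) != 1) {
                fprintf(stderr, "intel_cqm PMU not found\n");
                return 1;
        }
        fclose(f);

        /* Fd of the cgroup directory to monitor (path is only an example). */
        cgrp_fd = open("/sys/fs/cgroup/perf_event/my_group", O_RDONLY);
        if (cgrp_fd < 0) {
                perror("open cgroup");
                return 1;
        }

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = 1;        /* llc_occupancy (event=0x01) */

        /* With PERF_FLAG_PID_CGROUP, the pid argument is the cgroup fd. */
        ev_fd = syscall(__NR_perf_event_open, &attr, cgrp_fd, 0 /* cpu */,
                        -1, PERF_FLAG_PID_CGROUP);
        if (ev_fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        sleep(1);
        if (read(ev_fd, &count, sizeof(count)) == sizeof(count))
                printf("llc_occupancy: %llu bytes\n", count);
        return 0;
}

Once such an event exists, every context switch on that CPU goes through the
perf cgroup switch path, which is where the ~1120ns quoted above is spent.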
On Tue, Dec 27, 2016 at 12:00 PM, Andi Kleen wrote:
> Shivappa Vikas writes:
>>
>> Ok, looks like the interface is the problem. Will try to fix this. We are
>> just trying to have a lightweight monitoring option so that it's
>> reasonable to monitor for a very long time (like the lifetime of a
>> process, etc.), mainly to not have all the perf scheduling overhead [...]
>>
>> +LAZY and NOLAZY Monitoring
>> +--------------------------
>> +LAZY:
>> +By default when monitoring is enabled, the RMIDs are not allocated
>> +immediately and allocated lazily only at the first sched_in.
>> +There are 2-4 RMIDs per logical processor on each package. So if a
>> [...]
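As an aside, the per-package RMID budget the documentation refers to comes
straight from CPUID leaf 0xF; a small user-space sketch (not part of the patch
set) that prints those limits on a given machine might look like this:

/*
 * Sketch only: query the RMID limits that bound how many tasks/cgroups can
 * be monitored at once, using CPUID leaf 0xF as described in the Intel SDM.
 * All reported maxima are zero-based.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Sub-leaf 0: EBX = max RMID across all resource types. */
        if (!__get_cpuid_count(0xf, 0, &eax, &ebx, &ecx, &edx) ||
            !(edx & (1 << 1))) {
                puts("No L3 cache monitoring (CQM) support");
                return 1;
        }
        printf("max RMID, any resource : %u\n", ebx);

        /* Sub-leaf 1: L3 monitoring details. */
        __get_cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx);
        printf("max RMID, L3 monitoring: %u\n", ecx);
        printf("counter upscale (bytes): %u\n", ebx);
        printf("llc_occupancy supported: %s\n", (edx & 1) ? "yes" : "no");
        return 0;
}

Dividing the L3 max RMID by the number of logical processors on a package is
where the "2-4 RMIDs per logical processor" figure above comes from.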
Hello Peterz,

On Fri, 23 Dec 2016, Peter Zijlstra wrote:
> On Fri, Dec 16, 2016 at 03:12:55PM -0800, Vikas Shivappa wrote:
>> +Continuous monitoring
>> +---------------------
>> +A new file cont_monitoring is added to perf_cgroup which helps to enable
>> +cqm continuous monitoring. Enabling this field would start monitoring of
>> +the cgroup without perf being [...]
>
> Also, the 'whoops you ran out of RMIDs, please reboot' thing totally and
> completely blows.

Well, this is really a hardware limitation. A user cannot monitor more events
on a package than the number of RMIDs at the *same time*. Patch 10/14 reuses
the RMIDs that [...]
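For illustration, enabling the proposed per-cgroup continuous monitoring would
amount to writing a non-zero value into that file; in the sketch below the
cgroup mount point, the group name my_group and the exact file name are
assumptions based only on the quoted documentation text, not on the patch
itself:

/*
 * Hypothetical usage sketch; the path below is an assumption, not taken
 * from the patch. Roughly equivalent to: echo 1 > .../my_group/cont_monitoring
 */
#include <stdio.h>

int main(void)
{
        const char *path =
                "/sys/fs/cgroup/perf_event/my_group/cont_monitoring";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        fputs("1\n", f);        /* non-zero starts monitoring of the cgroup */
        fclose(f);
        return 0;
}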
Add documentation of usage of cqm and mbm events, continuous monitoring,
and lazy and non-lazy monitoring.

Signed-off-by: Vikas Shivappa
---
 Documentation/x86/intel_rdt_mon_ui.txt | 91 ++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)
 create mode 100644 Documentation/x86/intel_rdt_mon_ui.txt