Peter,
Working on it. Found other bugs and cleanups, so I will respin the
entire series.
It will be easier that way.
On Wed, Nov 6, 2013 at 11:52 AM, Peter Zijlstra wrote:
> On Tue, Nov 05, 2013 at 06:01:25PM +0100, Stephane Eranian wrote:
>> +static DEFINE_SPINLOCK(rapl_hotplug_lock);
>> +
>> +static void rapl_exit_cpu(int cpu)
>> +{
>> +	int i, phys_id = topology_physical_package_id(cpu);
>> +
>> +	spin_lock(&rapl_hotplug_lock);
>> +
>> +	/* if CPU not in RAPL mask, nothing to do */
>> +	if (!cpumask_test_and_[...]
>> [...]
>> +	spin_unlock(&rapl_hotplug_lock);
>> +}
>> +
>> +static int rapl_cpu_dying(int cpu)
>> +{
>> +	struct rapl_pmu *pmu = per_cpu(rapl_pmu, cpu);
>> +	struct perf_event *event, *tmp;
>> +
>> +	if (!pmu)
>> +		return 0;
>> +
>> +	spin_lock(&rapl_hotplug_lock);
>> [...]
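For readers outside the kernel tree, the hotplug hand-off the quoted code participates in can be modeled in plain userspace C. Everything below (the function names, the package size, the owner table) is an illustrative assumption for the sketch, not the kernel's actual data structures:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS      8
#define CPUS_PER_PKG 4
#define NR_PKGS      (NR_CPUS / CPUS_PER_PKG)

static bool cpu_online[NR_CPUS];
static int  pkg_owner[NR_PKGS];   /* owning CPU per package, -1 = none */

/* stand-in for topology_physical_package_id() */
static int phys_pkg_id(int cpu) { return cpu / CPUS_PER_PKG; }

/* Pick any other online CPU in the same package, or -1 if none. */
static int find_online_sibling(int cpu)
{
	int pkg = phys_pkg_id(cpu);
	for (int i = pkg * CPUS_PER_PKG; i < (pkg + 1) * CPUS_PER_PKG; i++)
		if (i != cpu && cpu_online[i])
			return i;
	return -1;
}

/*
 * Role of rapl_exit_cpu() in the patch, as modeled here: when a CPU
 * goes offline, hand its package's PMU ownership to an online sibling,
 * or drop it entirely when the package has no online CPUs left.
 */
static void model_exit_cpu(int cpu)
{
	cpu_online[cpu] = false;
	int pkg = phys_pkg_id(cpu);
	if (pkg_owner[pkg] == cpu)
		pkg_owner[pkg] = find_online_sibling(cpu); /* may be -1 */
}
```

The per-package ownership is what makes the design work: RAPL counters are package-scoped, so exactly one CPU per package should read them, and hotplug must migrate that role rather than lose it.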
From the original patch description:

This patch adds a new uncore PMU to expose the Intel
RAPL energy consumption counters. Up to 3 counters,
each counting a particular RAPL event, are exposed.
The RAPL counters are available on Intel SandyBridge,
IvyBridge, and Haswell. The server SKUs add a 3rd counter.
The following events are available: [...]
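Assuming the counters are exposed with the 2^-32 Joules-per-count unit that the perf RAPL events advertise through their sysfs scale attribute (treat the exact unit as an assumption here; verify against sysfs on real hardware), converting a raw counter delta into energy and average power is a one-liner:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed unit: 2^-32 Joules per count. */
static double rapl_count_to_joules(uint64_t raw)
{
	return (double)raw / 4294967296.0;   /* 2^32 counts per Joule */
}

/* Average power over a measurement window of the given length. */
static double rapl_avg_watts(uint64_t raw_delta, double seconds)
{
	return rapl_count_to_joules(raw_delta) / seconds;
}
```

Note the raw value is a free-running counter, so a meaningful reading is always a delta between two samples, with wraparound handled by unsigned subtraction.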