Commit-ID:  bd903afeb504db5655a45bb4cf86f38be5b1bf62
Gitweb:     https://git.kernel.org/tip/bd903afeb504db5655a45bb4cf86f38be5b1bf62
Author:     Song Liu <songliubrav...@fb.com>
AuthorDate: Mon, 5 Mar 2018 21:55:04 -0800
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 9 Mar 2018 08:03:02 +0100

perf/core: Fix ctx_event_type in ctx_resched()

In ctx_resched(), EVENT_FLEXIBLE events should be scheduled out when an
EVENT_PINNED event is added. However, ctx_resched() calculates
ctx_event_type before checking this condition. As a result, pinned events
do NOT get higher priority than flexible events.

The following shows this issue on an Intel CPU (where ref-cycles can
only use one hardware counter).

  1. First start:
       perf stat -C 0 -e ref-cycles  -I 1000
  2. Then, in the second console, run:
       perf stat -C 0 -e ref-cycles:D -I 1000

The second perf invocation uses a pinned event, which is expected to have
higher priority. However, because the rescheduling fails in ctx_resched(),
it never gets to run.

Fix this by calculating ctx_event_type after event_type has been
re-evaluated.
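
For illustration only, here is a minimal user-space sketch of the flag
arithmetic involved. The enum values are assumed to mirror those in
kernel/events/core.c, and the variable names are made up for the example,
so this is not the kernel code itself:

  #include <stdio.h>

  /* Flag values assumed to match kernel/events/core.c of this era. */
  enum event_type_t {
          EVENT_FLEXIBLE  = 0x1,
          EVENT_PINNED    = 0x2,
          EVENT_TIME      = 0x4,
          EVENT_CPU       = 0x8,
          EVENT_ALL       = EVENT_FLEXIBLE | EVENT_PINNED,
  };

  int main(void)
  {
          /* A pinned per-CPU event is being added, as in the reproducer above. */
          unsigned int event_type = EVENT_PINNED | EVENT_CPU;

          /* Old ordering: ctx_event_type computed before the pinned check. */
          unsigned int ctx_event_type_old = event_type & EVENT_ALL;

          /* Adding a pinned event must also force flexible events to resched. */
          if (event_type & EVENT_PINNED)
                  event_type |= EVENT_FLEXIBLE;

          /* New ordering: ctx_event_type now includes EVENT_FLEXIBLE as well. */
          unsigned int ctx_event_type_new = event_type & EVENT_ALL;

          printf("old: %#x (EVENT_FLEXIBLE missing)\n",  ctx_event_type_old);
          printf("new: %#x (EVENT_FLEXIBLE included)\n", ctx_event_type_new);
          return 0;
  }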

Reported-by: Ephraim Park <ephiep...@fb.com>
Signed-off-by: Song Liu <songliubrav...@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: <jo...@redhat.com>
Cc: <kernel-t...@fb.com>
Cc: Alexander Shishkin <alexander.shish...@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <a...@redhat.com>
Cc: Jiri Olsa <jo...@redhat.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Stephane Eranian <eran...@google.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Vince Weaver <vincent.wea...@maine.edu>
Fixes: 487f05e18aa4 ("perf/core: Optimize event rescheduling on active contexts")
Link: http://lkml.kernel.org/r/20180306055504.3283731-1-songliubrav...@fb.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/events/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 96db9ae5d5af..4b838470fac4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2246,7 +2246,7 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
                        struct perf_event_context *task_ctx,
                        enum event_type_t event_type)
 {
-       enum event_type_t ctx_event_type = event_type & EVENT_ALL;
+       enum event_type_t ctx_event_type;
        bool cpu_event = !!(event_type & EVENT_CPU);
 
        /*
@@ -2256,6 +2256,8 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
        if (event_type & EVENT_PINNED)
                event_type |= EVENT_FLEXIBLE;
 
+       ctx_event_type = event_type & EVENT_ALL;
+
        perf_pmu_disable(cpuctx->ctx.pmu);
        if (task_ctx)
                task_ctx_sched_out(cpuctx, task_ctx, event_type);
