Commit-ID:  fdccc3fb7a42ea4e4cd77d2fb8fa3a45c66ec0bf
Gitweb:     http://git.kernel.org/tip/fdccc3fb7a42ea4e4cd77d2fb8fa3a45c66ec0bf
Author:     leilei.lin <leilei....@alibaba-inc.com>
AuthorDate: Wed, 9 Aug 2017 08:29:21 +0800
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Thu, 10 Aug 2017 12:08:40 +0200

perf/core: Reduce context switch overhead

Skip most of the PMU context switching overhead when ctx->nr_events is 0.

A 50% performance overhead was observed in an extreme testcase.

Signed-off-by: leilei.lin <leilei....@alibaba-inc.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: a...@kernel.org
Cc: alexander.shish...@linux.intel.com
Cc: eran...@gmail.com
Cc: jo...@redhat.com
Cc: linxiu...@gmail.com
Cc: yang_oli...@hotmail.com
Link: http://lkml.kernel.org/r/20170809002921.69813-1-leilei....@alibaba-inc.com
[ Rewrote the changelog. ]
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/events/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index ee20d4c..d704e23 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3211,6 +3211,13 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                return;
 
        perf_ctx_lock(cpuctx, ctx);
+       /*
+        * We must check ctx->nr_events while holding ctx->lock, such
+        * that we serialize against perf_install_in_context().
+        */
+       if (!ctx->nr_events)
+               goto unlock;
+
        perf_pmu_disable(ctx->pmu);
        /*
         * We want to keep the following priority order:
@@ -3224,6 +3231,8 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
        perf_event_sched_in(cpuctx, ctx, task);
        perf_pmu_enable(ctx->pmu);
+
+unlock:
        perf_ctx_unlock(cpuctx, ctx);
 }
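
For illustration only, not part of the patch: a minimal user-space sketch of the
same idiom in plain pthreads C, checking a counter under the lock so the early
exit cannot race with a concurrent installer. All names here (toy_ctx,
toy_install, toy_sched_in) are invented for this sketch and have no counterpart
in the kernel sources.

/*
 * Illustrative sketch only, not kernel code: the "check a counter under
 * the lock and bail out early" pattern in plain pthreads C.
 */
#include <pthread.h>
#include <stdio.h>

struct toy_ctx {
	pthread_mutex_t lock;
	int nr_events;			/* how many events are installed */
};

/* Analogue of perf_install_in_context(): add an event under ctx->lock. */
static void toy_install(struct toy_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	ctx->nr_events++;
	pthread_mutex_unlock(&ctx->lock);
}

/*
 * Analogue of perf_event_context_sched_in(): skip the expensive part when
 * there is nothing to schedule.  The check must be done while the lock is
 * held so that a concurrent toy_install() cannot be missed.
 */
static void toy_sched_in(struct toy_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	if (!ctx->nr_events)
		goto unlock;		/* nothing installed: cheap exit */

	/* ... expensive per-PMU disable/reschedule/enable work here ... */
	printf("scheduling %d event(s)\n", ctx->nr_events);

unlock:
	pthread_mutex_unlock(&ctx->lock);
}

int main(void)
{
	struct toy_ctx ctx = { .nr_events = 0 };

	pthread_mutex_init(&ctx.lock, NULL);

	toy_sched_in(&ctx);		/* fast path: nr_events == 0 */
	toy_install(&ctx);
	toy_sched_in(&ctx);		/* slow path: one event installed */
	return 0;
}

The ordering matters for the same reason as in the patch: testing nr_events
before taking the lock could miss an event being installed concurrently, so the
test sits after the lock acquisition and falls through to a common unlock label.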
 
