Commit-ID:  9344c92c2e72e495f695caef8364b3dd73af0eab
Gitweb:     http://git.kernel.org/tip/9344c92c2e72e495f695caef8364b3dd73af0eab
Author:     Rik van Riel <r...@redhat.com>
AuthorDate: Wed, 10 Feb 2016 20:08:26 -0500
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Mon, 29 Feb 2016 09:53:09 +0100

time, acct: Drop irq save & restore from __acct_update_integrals()

It looks like all the call paths that lead to __acct_update_integrals()
already have irqs disabled, and __acct_update_integrals() does not need
to disable irqs itself.

This is very convenient since about half the CPU time left in this
function was spent in local_irq_save alone.

Performance of a microbenchmark that calls an invalid syscall
ten million times in a row on a nohz_full CPU improves 21% vs.
4.5-rc1 with both the removal of divisions from __acct_update_integrals()
and this patch, with runtime dropping from 3.7 to 2.9 seconds.

With these patches applied, the highest remaining CPU user in
the trace is native_sched_clock(), which is addressed in the next
patch.

For testing purposes I stuck a WARN_ON(!irqs_disabled()) test
in __acct_update_integrals(). It did not trigger.
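That debug check might have looked like the fragment below (a sketch only; this hunk is not part of the committed patch, and the check was removed before submission):

```
static void __acct_update_integrals(struct task_struct *tsk,
				    cputime_t utime, cputime_t stime)
{
	cputime_t time, dtime;
	u64 delta;

	/* Temporary debug check: every caller must have irqs disabled. */
	WARN_ON(!irqs_disabled());

	if (!likely(tsk->mm))
		return;
	...
}
```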

Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Rik van Riel <r...@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Reviewed-by: Thomas Gleixner <t...@linutronix.de>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Mike Galbraith <efa...@gmx.de>
Cc: cl...@redhat.com
Cc: eric.duma...@gmail.com
Cc: fweis...@gmail.com
Cc: l...@amacapital.net
Link: http://lkml.kernel.org/r/1455152907-18495-4-git-send-email-r...@redhat.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/tsacct.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/tsacct.c b/kernel/tsacct.c
index d12e815..f8e26ab 100644
--- a/kernel/tsacct.c
+++ b/kernel/tsacct.c
@@ -126,20 +126,18 @@ static void __acct_update_integrals(struct task_struct *tsk,
                                    cputime_t utime, cputime_t stime)
 {
        cputime_t time, dtime;
-       unsigned long flags;
        u64 delta;
 
        if (!likely(tsk->mm))
                return;
 
-       local_irq_save(flags);
        time = stime + utime;
        dtime = time - tsk->acct_timexpd;
        /* Avoid division: cputime_t is often in nanoseconds already. */
        delta = cputime_to_nsecs(dtime);
 
        if (delta < TICK_NSEC)
-               goto out;
+               return;
 
        tsk->acct_timexpd = time;
        /*
@@ -149,8 +147,6 @@ static void __acct_update_integrals(struct task_struct *tsk,
         */
        tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm) >> 10;
        tsk->acct_vm_mem1 += delta * tsk->mm->total_vm >> 10;
-out:
-       local_irq_restore(flags);
 }
 
 /**
@@ -160,9 +156,12 @@ out:
 void acct_update_integrals(struct task_struct *tsk)
 {
        cputime_t utime, stime;
+       unsigned long flags;
 
+       local_irq_save(flags);
        task_cputime(tsk, &utime, &stime);
        __acct_update_integrals(tsk, utime, stime);
+       local_irq_restore(flags);
 }
 
 /**
