4.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <pet...@infradead.org>

commit 173be9a14f7b2e901cf77c18b1aafd4d672e9d9e upstream.

Mike reports:

  Roughly 10% of the time, ltp testcase getrusage04 fails:
  getrusage04    0  TINFO  :  Expected timers granularity is 4000 us
  getrusage04    0  TINFO  :  Using 1 as multiply factor for max [us]time increment (1000+4000us)!
  getrusage04    0  TINFO  :  utime:           0us; stime:         179us
  getrusage04    0  TINFO  :  utime:        3751us; stime:           0us
  getrusage04    1  TFAIL  :  getrusage04.c:133: stime increased > 5000us:

And tracked it down to the case where the task simply doesn't get
_any_ [us]time ticks.

Update the code to assume all rtime is utime when we lack information,
thus ensuring a task that elides the tick gets time accounted.

Reported-by: Mike Galbraith <umgwanakikb...@gmail.com>
Tested-by: Mike Galbraith <umgwanakikb...@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Frederic Weisbecker <fweis...@gmail.com>
Cc: Fredrik Markstrom <fredrik.markst...@gmail.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Radim <rkrc...@redhat.com>
Cc: Rik van Riel <r...@redhat.com>
Cc: Stephane Eranian <eran...@google.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Vince Weaver <vincent.wea...@maine.edu>
Cc: Wanpeng Li <wanpeng...@hotmail.com>
Fixes: 9d7fb0427648 ("sched/cputime: Guarantee stime + utime == rtime")
Signed-off-by: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 kernel/sched/cputime.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -600,19 +600,25 @@ static void cputime_adjust(struct task_c
 	stime = curr->stime;
 	utime = curr->utime;

-	if (utime == 0) {
-		stime = rtime;
+	/*
+	 * If either stime or both stime and utime are 0, assume all runtime is
+	 * userspace. Once a task gets some ticks, the monotonicy code at
+	 * 'update' will ensure things converge to the observed ratio.
+	 */
+	if (stime == 0) {
+		utime = rtime;
 		goto update;
 	}

-	if (stime == 0) {
-		utime = rtime;
+	if (utime == 0) {
+		stime = rtime;
 		goto update;
 	}

 	stime = scale_stime((__force u64)stime, (__force u64)rtime,
 			    (__force u64)(stime + utime));

+update:
 	/*
 	 * Make sure stime doesn't go backwards; this preserves monotonicity
 	 * for utime because rtime is monotonic.
@@ -635,7 +641,6 @@ static void cputime_adjust(struct task_c
 		stime = rtime - utime;
 	}

-update:
 	prev->stime = stime;
 	prev->utime = utime;
 out:
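
To make the reordered split easier to poke at outside the kernel, here is a
minimal standalone C sketch of the adjusted logic. split_cputime(), struct
prev_cputime_sample and the simplified scale_stime() below are hypothetical
userspace stand-ins (no __force casts, no overflow-safe multiplication), not
the kernel implementation:

/*
 * Standalone sketch of the reordered stime/utime split in cputime_adjust().
 * For review only: names are hypothetical and scale_stime() is simplified
 * to plain 64-bit arithmetic.
 */
#include <stdio.h>
#include <stdint.h>

struct prev_cputime_sample {
	uint64_t stime;
	uint64_t utime;
};

/* Split rtime proportionally to the observed stime:utime tick ratio. */
static uint64_t scale_stime(uint64_t stime, uint64_t rtime, uint64_t total)
{
	return stime * rtime / total;
}

static void split_cputime(struct prev_cputime_sample *prev,
			  uint64_t stime, uint64_t utime, uint64_t rtime)
{
	if (stime == 0) {
		/*
		 * No stime ticks; this now also covers the "no ticks at
		 * all" case, so a task that elides the tick still gets
		 * its runtime accounted (as utime).
		 */
		utime = rtime;
		goto update;
	}

	if (utime == 0) {
		/* Only stime ticks: attribute all runtime to the kernel. */
		stime = rtime;
		goto update;
	}

	stime = scale_stime(stime, rtime, stime + utime);

update:
	/* Keep stime monotonic; utime follows because rtime is monotonic. */
	if (stime < prev->stime)
		stime = prev->stime;
	utime = rtime - stime;

	/* And the analogous clamp for utime. */
	if (utime < prev->utime) {
		utime = prev->utime;
		stime = rtime - utime;
	}

	prev->stime = stime;
	prev->utime = utime;
}

int main(void)
{
	struct prev_cputime_sample prev = { 0, 0 };

	/* Tickless task: 5000us of rtime, no [us]time ticks observed. */
	split_cputime(&prev, 0, 0, 5000);
	printf("stime=%llu utime=%llu\n",
	       (unsigned long long)prev.stime,
	       (unsigned long long)prev.utime);
	/* Prints stime=0 utime=5000: all runtime lands in utime. */
	return 0;
}

With both tick counts at 0 the stime == 0 branch now runs first, so the
whole 5000us of rtime is accounted as utime instead of being dropped, which
is exactly the case getrusage04 was tripping over.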