From: Wanpeng Li <wanpeng...@hotmail.com>

I observed that sometimes st is instantaneously 100%, and idle then stays 
at 100% even though there is a CPU hog on the guest CPU, after that CPU 
comes back online from hotplug (N.B. this cannot always be readily 
reproduced). I added tracing to capture it:

cpuhp/1-12    [001] d.h1   167.461657: account_process_tick: steal = 1291385514, prev_steal_time = 0
cpuhp/1-12    [001] d.h1   167.461659: account_process_tick: steal_jiffies = 1291
<idle>-0      [001] d.h1   167.462663: account_process_tick: steal = 18732255, prev_steal_time = 1291000000
<idle>-0      [001] d.h1   167.462664: account_process_tick: steal_jiffies = 18446744072437

The steal clock warps backwards (18732255 < 1291000000), so the unsigned 
subtraction wraps around and steal_jiffies overflows to a huge bogus value.
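
For reference, here is a minimal userspace sketch (not part of the patch) 
that reproduces the wrapped value from the trace; it models 
nsecs_to_jiffies() as a plain division by 10^6, which assumes HZ=1000:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t prev_steal_time = 1291000000; /* from the trace above */
        uint64_t steal = 18732255;             /* clock warped backwards */

        /* u64 subtraction wraps around to just below 2^64 */
        steal -= prev_steal_time;
        printf("steal         = %llu\n", (unsigned long long)steal);
        /* prints 18446744072437, matching the bogus trace value */
        printf("steal_jiffies = %llu\n",
               (unsigned long long)(steal / 1000000));
        return 0;
}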

Rik also pointed out to me:
 
| I have seen stuff like that with live migration too, in the past 

This patch handles steal clock warps with a safe threshold: only steal 
time deltas that are positive and smaller than one second are applied 
(as long as nohz_full keeps the one second timer tick, a sane per-tick 
delta cannot exceed that). Deltas that are negative or longer than one 
second are ignored and used only to resynchronize the guest's 
prev_steal_time with the host's steal clock.
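
Applied to the trace above: computed as a signed 64-bit value, 
delta = 18732255 - 1291000000 = -1272267745, which fails the check, so 
prev_steal_time is resynchronized to 18732255 and no bogus steal time is 
accounted on that tick.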

Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Rik van Riel <r...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Frederic Weisbecker <fweis...@gmail.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Radim Krčmář <rkrc...@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng...@hotmail.com>
---
v1 -> v2:
 * update patch subject, description and comments
 * deal with the case where steal time suddenly increases by a ludicrous amount

 kernel/sched/cputime.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index f51c98c..751798a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -262,17 +262,28 @@ static __always_inline unsigned long steal_account_process_tick(void)
 #ifdef CONFIG_PARAVIRT
        if (static_key_false(&paravirt_steal_enabled)) {
                u64 steal;
+               s64 delta;
                unsigned long steal_jiffies;
 
                steal = paravirt_steal_clock(smp_processor_id());
-               steal -= this_rq()->prev_steal_time;
+               delta = steal - this_rq()->prev_steal_time;
+               /*
+                * Ignore this steal time difference if the guest and the host got
+                * out of sync. This can happen due to events like live migration
+                * or CPU hotplug. The upper threshold is set to one second to match
+                * the one second timer tick with nohz_full.
+                */
+               if (unlikely(delta < 0 || delta > NSEC_PER_SEC)) {
+                       this_rq()->prev_steal_time = steal;
+                       return 0;
+               }
 
                /*
                 * steal is in nsecs but our caller is expecting steal
                 * time in jiffies. Lets cast the result to jiffies
                 * granularity and account the rest on the next rounds.
                 */
-               steal_jiffies = nsecs_to_jiffies(steal);
+               steal_jiffies = nsecs_to_jiffies(delta);
                this_rq()->prev_steal_time += jiffies_to_nsecs(steal_jiffies);
 
                account_steal_time(jiffies_to_cputime(steal_jiffies));
-- 
1.9.1
