When load_balance() fails to move some load because of task affinity,
we end up increasing sd->balance_interval to delay the next periodic
balance, in the hope that next time we look, the annoying pinned
task(s) will be gone.

However, idle_balance() pays no attention to sd->balance_interval, yet
a failed newidle balance will still increase balance_interval when
pinned tasks are involved.

If we're going through several newidle balances (e.g. we have a
periodic task), this can lead to a huge increase in balance_interval
in a very small amount of time.
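
For illustration, here is a minimal standalone sketch (not kernel code)
of that doubling: the 8 ms starting interval is an assumption, and
512 ms corresponds to MAX_PINNED_INTERVAL in kernel/sched/fair.c.

	#include <stdio.h>

	int main(void)
	{
		/* assumed starting balance_interval, in ms */
		unsigned int interval = 8;
		/* MAX_PINNED_INTERVAL in kernel/sched/fair.c */
		const unsigned int max_pinned = 512;
		int pass;

		for (pass = 1; interval < max_pinned; pass++) {
			/* the "tune up the balancing interval" doubling */
			interval *= 2;
			printf("newidle pass %d: balance_interval = %u ms\n",
			       pass, interval);
		}
		return 0;
	}

With those assumed values, six back-to-back newidle passes are enough
to go from 8 ms all the way up to the 512 ms cap.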

To prevent that, don't increase the balance interval when going
through a newidle balance.

This is a similar approach to what is done in commit 58b26c4c0257
("sched: Increment cache_nice_tries only on periodic lb"), where we
disregard newidle balance and rely on periodic balance for more stable
results.

Signed-off-by: Valentin Schneider <valentin.schnei...@arm.com>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9cf93ba..4c33283 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8782,13 +8782,22 @@ static int load_balance(int this_cpu, struct rq *this_rq,
        sd->nr_balance_failed = 0;
 
 out_one_pinned:
+       ld_moved = 0;
+
+       /*
+        * idle_balance() disregards balance intervals, so we could repeatedly
+        * reach this code, which would lead to balance_interval skyrocketing
+        * in a short amount of time. Skip the balance_interval increase logic
+        * to avoid that.
+        */
+       if (env.idle == CPU_NEWLY_IDLE)
+               goto out;
+
        /* tune up the balancing interval */
        if ((env.flags & LBF_ALL_PINNED &&
             sd->balance_interval < MAX_PINNED_INTERVAL) ||
            sd->balance_interval < sd->max_interval)
                sd->balance_interval *= 2;
-
-       ld_moved = 0;
 out:
        return ld_moved;
 }
-- 
2.7.4
