Hi Steve,

On 06/12/2018 21:28, Steve Sistare wrote:
> When a CPU has no more CFS tasks to run, and idle_balance() fails to
> find a task, then attempt to steal a task from an overloaded CPU in the
> same LLC. Maintain and use a bitmap of overloaded CPUs to efficiently
> identify candidates.  To minimize search time, steal the first migratable
> task that is found when the bitmap is traversed.  For fairness, search
> for migratable tasks on an overloaded CPU in order of next to run.
> 
> This simple stealing yields a higher CPU utilization than idle_balance()
> alone, because the search is cheap, so it may be called every time the CPU
> is about to go idle.  idle_balance() does more work because it searches
> widely for the busiest queue, so to limit its CPU consumption, it declines
> to search if the system is too busy.  Simple stealing does not offload the
> globally busiest queue, but it is much better than running nothing at all.
> 
> The bitmap of overloaded CPUs is a new type of sparse bitmap, designed to
> reduce cache contention vs the usual bitmap when many threads concurrently
> set, clear, and visit elements.
> 
> Patch 1 defines the sparsemask type and its operations.
> 
> Patches 2, 3, and 4 implement the bitmap of overloaded CPUs.
> 
> Patches 5 and 6 refactor existing code for a cleaner merge of later
>   patches.
> 
> Patches 7 and 8 implement task stealing using the overloaded CPUs bitmap.
> 
> Patch 9 disables stealing on systems with more than 2 NUMA nodes for the
> time being because of performance regressions that are not due to stealing
> per se.  See the patch description for details.
> 
> Patch 10 adds schedstats for comparing the new behavior to the old, and
>   is provided as a convenience for developers only, not for integration.
> 
[...]

I've run my usual tests ([1]) on my HiKey960 with:

- Just stealing (only misfit tests)
- Stealing rebased on top of EAS (misfit + EAS tests), and with stealing
  gated by:

----->8-----
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17ab4db..8b5172f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7152,7 +7152,9 @@ done: __maybe_unused;
        rq_idle_stamp_update(rq);
 
        new_tasks = idle_balance(rq, rf);
-       if (new_tasks == 0)
+       if (new_tasks == 0 &&
+           (!static_branch_unlikely(&sched_energy_present) ||
+            READ_ONCE(rq->rd->overutilized)))
                new_tasks = try_steal(rq, rf);
 
        if (new_tasks)
-----8<-----

It all looks good from my end - if things were to go wrong on big.LITTLE
platforms it'd be here. It might be a convoluted way of using this tag,
but you can have my

Tested-by: Valentin Schneider <valentin.schnei...@arm.com>

as a "it doesn't break my stuff" seal.



As far as the patches go, with my last comments in mind it looks good to me
so you can also have:

Reviewed-by: Valentin Schneider <valentin.schnei...@arm.com>

for patches [2-8]. I haven't delved into the sparsemask details. As for patch
9, you might want to run other benchmarks (Peter suggested specjbb) to see
if it is truly needed.

[1]: https://github.com/ARM-software/lisa/tree/next/lisa/tests/kernel/scheduler
