On 07/12/2020 10:15, Mel Gorman wrote:
> SIS_AVG_CPU was introduced as a means of avoiding a search when the
> average search cost indicated that the search would likely fail. It
> was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> select_idle_cpu() more aggressive") and later replaced with a proportional
> search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> scale select_idle_cpu()").
> 
> While there are corner cases where SIS_AVG_CPU is better, it has now been
> disabled for almost three years. As the intent of SIS_PROP is to reduce
> the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> on SIS_PROP as a throttling mechanism.
> 
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> ---
>  kernel/sched/fair.c     | 3 ---
>  kernel/sched/features.h | 1 -
>  2 files changed, 4 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 98075f9ea9a8..23934dbac635 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>       avg_idle = this_rq()->avg_idle / 512;
>       avg_cost = this_sd->avg_scan_cost + 1;
>  
> -     if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> -             return -1;
> -
>       if (sched_feat(SIS_PROP)) {
>               u64 span_avg = sd->span_weight * avg_idle;
>               if (span_avg > 4*avg_cost)

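For reference, the SIS_PROP throttle the changelog refers to boils down to the following. This is a simplified, standalone sketch under a helper name of my own; the kernel computes this inline in select_idle_cpu() with u64 and div_u64(), and (not visible in the hunk above) falls back to a fixed depth of 4 in the else branch:

#include <stdint.h>

/*
 * Sketch of the SIS_PROP scan-depth calculation: the number of CPUs
 * select_idle_cpu() may inspect grows with the recent idle time of
 * this runqueue and shrinks with the average cost of scanning one CPU.
 */
static unsigned int sis_prop_scan_depth(uint64_t avg_idle, uint64_t avg_cost,
					unsigned int span_weight)
{
	/* Expected idle headroom scaled across the domain's CPUs. */
	uint64_t span_avg = (uint64_t)span_weight * avg_idle;

	/*
	 * Scan proportionally more CPUs the cheaper a single scan is
	 * relative to the available idle time; otherwise stick to the
	 * minimum depth of 4.
	 */
	if (span_avg > 4 * avg_cost)
		return (unsigned int)(span_avg / avg_cost);

	return 4;
}

The scan loop then decrements this bound once per CPU it inspects and gives up (returns -1) once it reaches zero, which is what keeps the search cost bounded.
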
Nitpick:

Since avg_cost and avg_idle are now only used with SIS_PROP, they could move
entirely into the SIS_PROP if condition:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09f6f0edead4..fce9457cccb9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6121,7 +6121,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 {
        struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
        struct sched_domain *this_sd;
-       u64 avg_cost, avg_idle;
        u64 time;
        int this = smp_processor_id();
        int cpu, nr = INT_MAX;
@@ -6130,14 +6129,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
        if (!this_sd)
                return -1;
 
-       /*
-        * Due to large variance we need a large fuzz factor; hackbench in
-        * particularly is sensitive here.
-        */
-       avg_idle = this_rq()->avg_idle / 512;
-       avg_cost = this_sd->avg_scan_cost + 1;
-
        if (sched_feat(SIS_PROP)) {
+               /*
+                * Due to large variance we need a large fuzz factor; hackbench in
+                * particularly is sensitive here.
+                */
+               u64 avg_idle = this_rq()->avg_idle / 512;
+               u64 avg_cost = this_sd->avg_scan_cost + 1;
                u64 span_avg = sd->span_weight * avg_idle;
                if (span_avg > 4*avg_cost)
                        nr = div_u64(span_avg, avg_cost);
-- 
2.17.1
