On 2 April 2013 05:23, Alex Shi <alex....@intel.com> wrote:
> Besides using the runnable load average in the background, move_tasks
> is also a key function in load balancing. We need to consider the
> runnable load average in it as well, in order to make an apples-to-apples
> load comparison.
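
As a worked example of the scaling this patch adds (numbers are
illustrative, not from the patch): a task whose task_h_load() is 1024
but which has been runnable for only a quarter of its tracked period
(runnable_avg_sum / runnable_avg_period = 1/4) would be weighed as
1024 * 1/4 = 256 against the imbalance, while a fully runnable task
keeps its full 1024.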
>
> Signed-off-by: Alex Shi <alex....@intel.com>
> ---
>  kernel/sched/fair.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1f9026e..bf4e0d4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3966,6 +3966,15 @@ static unsigned long task_h_load(struct task_struct *p);
>
>  static const unsigned int sched_nr_migrate_break = 32;
>
> +static unsigned long task_h_load_avg(struct task_struct *p)
> +{
> +       u32 period = p->se.avg.runnable_avg_period;
> +       if (!period)
> +               return 0;
> +
> +       return task_h_load(p) * p->se.avg.runnable_avg_sum / period;

How do you ensure that runnable_avg_period and runnable_avg_sum are
coherent? An update of the statistics can occur in the middle of your
sequence.
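
One illustrative way to make the pair coherent would be a retry loop
around the two reads. This is only a sketch, not a proposal: it assumes
a hypothetical seqcount field (call it avg_seq) added to struct
sched_avg and bumped by the writer around every update of
runnable_avg_sum/runnable_avg_period; no such field exists in this tree.

	/*
	 * Sketch only: "avg_seq" is a hypothetical seqcount that the
	 * writer side would have to increment around every update of
	 * runnable_avg_sum and runnable_avg_period (see linux/seqlock.h).
	 */
	static unsigned long task_h_load_avg(struct task_struct *p)
	{
		u32 period, sum;
		unsigned int seq;

		/* Retry until we read both fields from the same update. */
		do {
			seq = read_seqcount_begin(&p->se.avg.avg_seq);
			period = p->se.avg.runnable_avg_period;
			sum = p->se.avg.runnable_avg_sum;
		} while (read_seqcount_retry(&p->se.avg.avg_seq, seq));

		if (!period)
			return 0;

		return task_h_load(p) * sum / period;
	}

The cheaper alternative would be to rely on the caller's locking, if
move_tasks() already holds the src rq lock and the statistics are only
ever updated under that same lock; but that invariant would need to be
confirmed rather than assumed.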

Vincent

> +}
> +
>  /*
>   * move_tasks tries to move up to imbalance weighted load from busiest to
>   * this_rq, as part of a balancing operation within domain "sd".
> @@ -4001,7 +4010,7 @@ static int move_tasks(struct lb_env *env)
>                 if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>                         goto next;
>
> -               load = task_h_load(p);
> +               load = task_h_load_avg(p);
>
>                 if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
>                         goto next;
> --
> 1.7.12
>