On Thu, Aug 23, 2007 at 02:13:41PM +0200, Ingo Molnar wrote:
> 
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
> 
> > [...] So how about the patch below instead?
> 
> the right patch attached.
> 
> -------------------------------->
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
> 
> On a four-package system with HT, the HT load-balancing optimizations were
> broken.  For example, if two tasks end up running on two logical threads
> of one of the packages, the scheduler is not able to pull one of the tasks
> over to a completely idle package.
> 
> In this scenario, for nice-0 tasks, the imbalance calculated by the scheduler
> will be 512 and find_busiest_queue() will return 0 (as each cpu's load of
> 1024 exceeds the imbalance and each cpu has only one task running).
> 
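[ For illustration only: a simplified, self-contained sketch of the per-cpu
  skip described above, using the nice-0 numbers from the paragraph. This is
  not the actual find_busiest_queue() code; the variable names are made up. ]

  #include <stdio.h>

  #define SCHED_LOAD_SCALE 1024UL

  int main(void)
  {
          unsigned long imbalance = 512;              /* imbalance computed for this scenario */
          unsigned long cpu_load = SCHED_LOAD_SCALE;  /* one nice-0 task on the cpu */
          unsigned int nr_running = 1;

          /* A runqueue with a single task whose load exceeds the imbalance is
           * skipped, so no busiest queue is found and nothing gets pulled over
           * to the idle package. */
          if (nr_running == 1 && cpu_load > imbalance)
                  printf("cpu skipped: load %lu > imbalance %lu\n",
                         cpu_load, imbalance);

          return 0;
  }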
> Similarly, the MC scheduler optimizations also get fixed by this patch.
> 
> [ [EMAIL PROTECTED]: restored fair balancing by increasing the fuzz and
>                  adding it back to the power decision, without the /2
>                  factor. ]
> 
> Signed-off-by: Suresh Siddha <[EMAIL PROTECTED]>
> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> ---
> 
>  include/linux/sched.h |    2 +-
>  kernel/sched.c        |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> Index: linux/include/linux/sched.h
> ===================================================================
> --- linux.orig/include/linux/sched.h
> +++ linux/include/linux/sched.h
> @@ -681,7 +681,7 @@ enum cpu_idle_type {
>  #define SCHED_LOAD_SHIFT     10
>  #define SCHED_LOAD_SCALE     (1L << SCHED_LOAD_SHIFT)
>  
> -#define SCHED_LOAD_SCALE_FUZZ        (SCHED_LOAD_SCALE >> 1)
> +#define SCHED_LOAD_SCALE_FUZZ        SCHED_LOAD_SCALE
>  
>  #ifdef CONFIG_SMP
>  #define SD_LOAD_BALANCE              1       /* Do load balancing on this domain. */
> Index: linux/kernel/sched.c
> ===================================================================
> --- linux.orig/kernel/sched.c
> +++ linux/kernel/sched.c
> @@ -2517,7 +2517,7 @@ group_next:
>        * a think about bumping its value to force at least one task to be
>        * moved
>        */
> -     if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> +     if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {

Ingo, this is still broken. This condition is always false for nice-0 tasks.
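A quick sketch of the arithmetic behind that, assuming nice-0 tasks (so
busiest_load_per_task == SCHED_LOAD_SCALE == 1024) and the patched FUZZ of
SCHED_LOAD_SCALE; the small standalone program below is only an illustration,
not kernel code:

  #include <stdio.h>

  #define SCHED_LOAD_SCALE      1024UL
  #define SCHED_LOAD_SCALE_FUZZ SCHED_LOAD_SCALE  /* value after the patch above */

  int main(void)
  {
          unsigned long busiest_load_per_task = SCHED_LOAD_SCALE; /* nice-0 weight */
          unsigned long imbalance;

          /* imbalance is unsigned, so imbalance + FUZZ is always >= 1024 and
           * "imbalance + FUZZ < busiest_load_per_task" can never be true. */
          for (imbalance = 0; imbalance <= 2048; imbalance += 512)
                  printf("imbalance=%4lu  condition=%d\n", imbalance,
                         imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task);

          return 0;
  }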

thanks,
suresh
