On Mon, 2008-01-28 at 13:32 +0100, Ingo Molnar wrote:
> * Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> 
> > >       * With CONFIG_FAIR_USER_SCHED disabled, there are severe
> > >         interactivity hiccups with a niced CPU hog and top running. This
> > >         started with commit 810e95ccd58d91369191aa4ecc9e6d4a10d8d0c8. 
> > 
> > The revert at the bottom causes the wakeup granularity to shrink for
> > +nice and to grow for -nice. That is, it becomes easier to preempt a
> > +nice task, and harder to preempt a -nice task.
> 
> i think it would be OK to do half of this: make it easier to preempt a 
> +nice task. Michel, do you really need the -nice portion as well? It's 
> not a problem to super-preempt positively reniced tasks, but it can be 
> quite annoying if negatively reniced tasks have super-slices.

This should do that (unless I need a stronger cup of tea).

---
Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -1106,7 +1106,11 @@ static void check_preempt_wakeup(struct 
        }
 
        gran = sysctl_sched_wakeup_granularity;
-       if (unlikely(se->load.weight != NICE_0_LOAD))
+       /*
+        * More easily preempt - nice tasks, while not making
+        * it harder for + nice tasks.
+        */
+       if (unlikely(se->load.weight > NICE_0_LOAD))
                gran = calc_delta_fair(gran, &se->load);
 
        if (pse->vruntime + gran < se->vruntime)
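
For reference, a minimal user-space sketch (not kernel code) of the effect:
calc_delta_fair() roughly scales the granularity by NICE_0_LOAD / se->load.weight,
so with the check above only tasks heavier than nice 0 get a smaller granularity.
The weights below (3121 for nice -5, 335 for nice +5) are approximately the kernel's
prio_to_weight values, and the 10ms granularity is just an example figure.

#include <stdio.h>

#define NICE_0_LOAD 1024ULL

/* Roughly what calc_delta_fair() does here: scale gran by NICE_0_LOAD / weight.
 * With the patched condition, only tasks heavier than nice 0 (negatively
 * reniced) get their wakeup granularity shrunk. */
static unsigned long long scale_gran(unsigned long long gran,
				     unsigned long long weight)
{
	if (weight > NICE_0_LOAD)
		return gran * NICE_0_LOAD / weight;
	return gran;
}

int main(void)
{
	unsigned long long gran = 10000000ULL;	/* example: 10ms in ns */

	/* nice -5 (weight ~3121): granularity shrinks, so it is easier
	 * to preempt the negatively reniced task */
	printf("nice -5: %llu ns\n", scale_gran(gran, 3121));
	/* nice +5 (weight ~335): granularity stays at the default, so it
	 * does not get harder to preempt than a nice 0 task */
	printf("nice +5: %llu ns\n", scale_gran(gran, 335));
	return 0;
}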

