* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> > Seems this didn't get merged? Latest git as of today still has the code
> > as it was before this patch.
>
> This is a must fix for .23 and Ingo previously mentioned that he will push it
> for .23
yep, it's queued up and i will send it with the n
On Tue, Sep 04, 2007 at 07:35:21PM -0400, Chuck Ebbert wrote:
> On 08/28/2007 06:27 PM, Siddha, Suresh B wrote:
> > Try to fix MC/HT scheduler optimization breakage again, without breaking
> > the FUZZ logic.
> >
> > First fix the check
> > if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_loa
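[The FUZZ referenced in the hunk above is a scheduler constant; the point the rest of the thread hinges on is that, in the 2.6.23-era headers, it is as large as one nice-0 task's load. A minimal sketch of the relevant definitions is below -- reconstructed for reference, not quoted from the thread, so verify against include/linux/sched.h in your tree.]

/*
 * Reconstruction (assumed, not quoted) of the constants behind the check
 * above, as they appear around 2.6.23 in include/linux/sched.h.
 */
#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)	/* 1024 */
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE		/* also 1024 */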
On 08/28/2007 06:27 PM, Siddha, Suresh B wrote:
> On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
>> Essentially I observed that nice 0 tasks still end up on two cores of the same
>> package, without getting spread out to two different packages. This behavior
>> is the same without this
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
> > Essentially I observed that nice 0 tasks still end up on two cores of the same
> > package, without getting spread out to two different packages. This behavior
> > is the same without
On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
> Essentially I observed that nice 0 tasks still end up on two cores of the same
> package, without getting spread out to two different packages. This behavior
> is the same without this fix and this fix doesn't help in any way.
Ingo, Appe
On Mon, Aug 27, 2007 at 09:23:24PM +0200, Ingo Molnar wrote:
>
> * Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
>
> > > - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> > > + if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
> >
> > Ingo, this is still broken
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> > - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> > + if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
>
> Ingo, this is still broken. This condition is always false for nice-0
> tasks..
yes - negative
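[To make Suresh's point concrete: with the definitions sketched earlier (SCHED_LOAD_SCALE and SCHED_LOAD_SCALE_FUZZ both assumed to be 1024), a nice-0 task's busiest_load_per_task is exactly 1024, so imbalance + 1024 < 1024 can never hold for a non-negative imbalance. A small stand-alone demonstration, illustrative only and not code from the thread:]

#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL		/* assumed, see note above */
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

int main(void)
{
	/* a nice-0 task contributes exactly SCHED_LOAD_SCALE of load */
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long imbalance;

	for (imbalance = 0; imbalance <= 1024; imbalance += 256) {
		/* the check quoted in the diff above */
		int fires = imbalance + SCHED_LOAD_SCALE_FUZZ <
					busiest_load_per_task;
		printf("imbalance=%4lu  bump-imbalance path taken: %s\n",
		       imbalance, fires ? "yes" : "no");
	}
	/* prints "no" for every value: the imbalance is never bumped, so a
	 * nice-0 task is never forced over to the idle package */
	return 0;
}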
On Thu, Aug 23, 2007 at 02:13:41PM +0200, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > [...] So how about the patch below instead?
>
> the right patch attached.
>
> >
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Sure
On 8/23/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> with no patch, or with my patch below each gets ~66% of CPU time,
> long-term:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 2290 mingo 20 0 2736 528 252 R 67 0.0 3:22.95 bash
> 2291 mingo 20 0
On Thu, 2007-08-23 at 14:13 +0200, Ingo Molnar wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > [...] So how about the patch below instead?
>
> the right patch attached.
>
> >
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Suresh B" <[EM
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] So how about the patch below instead?
the right patch attached.
>
Subject: sched: fix broken SMT/MC optimizations
From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
On a four package system with HT - HT load balancing o
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
>* a think about bumping its value to force at least one task to be
>* moved
>*/
> - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> + if (*imbalance < busiest_load_per_task) {
> unsi
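[The hunk quoted above is the follow-up fix: the FUZZ term is dropped entirely, so the condition becomes simply *imbalance < busiest_load_per_task. Under the same assumed constants as before, a nice-0 imbalance of, say, 512 now reaches the "bump its value" path and at least one task can be forced onto the otherwise idle package. A brief illustrative comparison of the old and new checks, not code from the thread:]

#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL		/* assumed nice-0 task load */
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

int main(void)
{
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long imbalance = 512;	/* e.g. half a task's worth */

	/* removed line of the hunk: FUZZ plus the /2 -- never true here */
	printf("old check fires: %d\n",
	       imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task / 2);
	/* added line of the hunk: FUZZ dropped -- true, so the imbalance is
	 * bumped and at least one task gets moved */
	printf("new check fires: %d\n", imbalance < busiest_load_per_task);
	return 0;
}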
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> Ingo, let me know if there are any side effects of this change. Thanks.
> ---
>
> On a four package system with HT - HT load balancing optimizations
> were broken. For example, if two tasks end up running on two logical
> threads of one of the packag