Mike Galbraith wrote:
> Hi,
> 
> On Thu, 2008-02-21 at 15:01 +0530, Balbir Singh wrote:
>> Ingo Molnar wrote:
>>> * Balbir Singh <[EMAIL PROTECTED]> wrote:
>>>
>>>> If you insist that sched_yield() is bad, I might agree, but how does 
>>>> my patch make things worse? [...]
>>> it puts new instructions into the hotpath.
>>>
>>>> [...] In my benchmarks, it has helped the sched_yield case; why is 
>>>> that bad? [...]
>>> I had the same cache for the rightmost task in earlier CFS (it's a 
>>> really obvious thing) but removed it. It wasn't a bad idea, but it hurt 
>>> the fastpath, hence I removed it. Algorithms and implementations are a 
>>> constant balancing act.
>> This is more convincing. Was the code ever in git? How did you measure the
>> overhead?
> 
> Counting enqueue/dequeue cycles on my 3GHz P4/HT running a 60-second
> netperf test that does ~85k/s context switches shows:
> 
> sched_cycles: 7198444348 unpatched
> vs
> sched_cycles: 8574036268 patched


Thanks for the numbers! That is roughly a 19% increase in enqueue/dequeue
cycles, so I am now convinced that the patch should stay out until we find
a way to reduce the overhead. I'll try your patch and see what the numbers
look like here as well.
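
To make sure I end up counting the same thing you did, here is roughly
the instrumentation I have in mind. This is only a sketch, assuming x86
and the 2.6.24-era enqueue_entity() signature; the wrapper and counter
names are made up and not from your measurement patch:

	/* total cycles spent in the enqueue path (sketch only) */
	static u64 sched_cycles;

	static inline u64 read_tsc(void)
	{
		u32 lo, hi;

		/* x86 only; rdtsc returns the cycle counter in edx:eax */
		asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
		return ((u64)hi << 32) | lo;
	}

	/* wrap the real enqueue path and charge its cost to sched_cycles */
	static void enqueue_entity_timed(struct cfs_rq *cfs_rq,
					 struct sched_entity *se, int wakeup)
	{
		u64 t0 = read_tsc();

		enqueue_entity(cfs_rq, se, wakeup);
		sched_cycles += read_tsc() - t0;
	}

If you measured something materially different (perf counters,
lmbench-style timing, etc.), please let me know so the numbers stay
comparable.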

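For anyone following the thread, the rightmost-task cache being discussed
is conceptually simple. The sketch below is my reconstruction, not Ingo's
removed code or the patch as posted; the rb_rightmost field is assumed to
have been added to struct cfs_rq. The two extra branches are exactly the
"new instructions in the hotpath": they run on every enqueue and dequeue,
while the payoff (turning the walk to the rightmost node in the yield
path into a pointer read) is only seen by sched_yield() users.

	static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
		struct rb_node *parent = NULL;
		struct sched_entity *entry;
		s64 key = entity_key(cfs_rq, se);
		int leftmost = 1, rightmost = 1;

		while (*link) {
			parent = *link;
			entry = rb_entry(parent, struct sched_entity, run_node);
			if (key < entity_key(cfs_rq, entry)) {
				link = &parent->rb_left;
				rightmost = 0;	/* went left at least once */
			} else {
				link = &parent->rb_right;
				leftmost = 0;	/* went right at least once */
			}
		}

		if (leftmost)
			cfs_rq->rb_leftmost = &se->run_node;
		if (rightmost)			/* extra hotpath work */
			cfs_rq->rb_rightmost = &se->run_node;

		rb_link_node(&se->run_node, parent, link);
		rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline);
	}

	static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		if (cfs_rq->rb_leftmost == &se->run_node)
			cfs_rq->rb_leftmost = rb_next(&se->run_node);
		if (cfs_rq->rb_rightmost == &se->run_node)	/* extra hotpath work */
			cfs_rq->rb_rightmost = rb_prev(&se->run_node);
		rb_erase(&se->run_node, &cfs_rq->tasks_timeline);
	}
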
-- 
        Warm Regards,
        Balbir Singh
        Linux Technology Center
        IBM, ISTL
