On 01/30/2013 04:35 PM, Lukasz Majewski wrote:
> Dear All,
>
>
> I'd like to ask about the power-aware scheduler development:
>
> https://blueprints.launchpad.net/linaro-power-kernel/+spec/power-aware-
> scheduler
>
The latest code was released on LKML:
https://lkml.org/lkml/2013/1/23/620
Comments…
On 01/24/2013 01:15 PM, Viresh Kumar wrote:
> On 24 January 2013 09:00, Alex Shi wrote:
>> This patchset can be used, but it causes the burst-waking benchmark aim9 to drop 5~7%
>> on my 2-socket machine. The reason is that the too-light runnable load in the early
>> stage of woken tasks causes i…
>>
>> Maybe we can skip the local group: since it's a bottom-up search, we know
>> from the prior iteration that there's no idle CPU in the lower domain.
>>
>
> I made this change, but the results seem worse on my machines; I guess
> starting the idle-CPU search bottom-up is a bad idea.
> The following is the full v…
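To make the idea above concrete, here is a minimal userspace sketch (an illustration with an assumed toy topology, not the actual patch): when walking sched_domain levels bottom-up for an idle CPU, the local group at each level was already fully scanned by the previous, lower level, so it can be skipped.
===
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* toy idle map: only CPU 3 is idle */
static bool idle[NR_CPUS] = { false, false, false, true,
                              false, false, false, false };

/* two toy sched_domain levels: pairs (SMT) at level 0, halves at level 1 */
static int level_size[2] = { 2, 4 };

/* scan the domain of 'target' at level 'lvl'; at levels above the
 * lowest, skip the group the previous iteration already scanned */
static int find_idle(int target, int lvl, bool skip_local)
{
	int span = level_size[lvl];
	int base = (target / span) * span;
	int lower = lvl ? level_size[lvl - 1] : 0;
	int local = lvl ? (target / lower) * lower : -1;

	for (int cpu = base; cpu < base + span; cpu++) {
		if (skip_local && lvl && cpu >= local && cpu < local + lower)
			continue;	/* covered by the lower level */
		if (idle[cpu])
			return cpu;
	}
	return -1;
}

int main(void)
{
	int cpu = -1;

	/* bottom-up walk, as discussed above */
	for (int lvl = 0; lvl < 2 && cpu < 0; lvl++)
		cpu = find_idle(1, lvl, true);
	printf("idle cpu: %d\n", cpu);	/* prints 3 */
	return 0;
}
===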
> regular load balance?
No, not this patch.
>
> On 01/20/2013 09:22 PM, Alex Shi wrote:
>>> The blocked load of a cluster will be high if the blocked tasks have
>>> run recently. The contribution of a blocked task is divided by 2
>>> every 32ms, so a high blocked load will be made up of recently
>>> running tasks, and long-sleeping tasks will not influence the load
>>>
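As a rough model of the decay described above (a sketch only: the kernel's per-entity load tracking decays contributions geometrically every 1ms so that they halve each 32ms, whereas this toy applies the halving in whole 32ms steps):
===
#include <stdio.h>

/*
 * Illustrative sketch (not kernel code): a blocked task's load
 * contribution is divided by 2 for every full 32ms it has been
 * blocked, so long-sleeping tasks quickly stop mattering.
 */
static unsigned long decayed_contrib(unsigned long contrib,
				     unsigned int ms_blocked)
{
	unsigned int periods = ms_blocked / 32;	/* full halving periods */

	/* after 64 halvings the contribution is effectively zero */
	if (periods >= 64)
		return 0;
	return contrib >> periods;
}

int main(void)
{
	printf("%lu\n", decayed_contrib(1024, 0));	/* 1024 */
	printf("%lu\n", decayed_contrib(1024, 32));	/* 512  */
	printf("%lu\n", decayed_contrib(1024, 96));	/* 128  */
	return 0;
}
===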
On 01/09/2013 11:14 AM, Preeti U Murthy wrote:
> Here comes the point of making load balancing and wake-up
> balancing (select_idle_sibling) cooperative. How about we always schedule
> the woken-up task on the prev_cpu? This seems more sensible considering
> that load balancing consid…
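A schematic sketch of this suggestion (the helpers and the fallback policy are hypothetical stand-ins, not kernel code):
===
#include <stdio.h>

/* Hypothetical stand-ins for kernel state, just for this sketch. */
struct task { int prev_cpu; };

static int cpu_idle_map[4] = { 0, 1, 0, 0 };

static int cpu_is_idle(int cpu)         { return cpu_idle_map[cpu]; }
static int balancer_would_move(int cpu) { (void)cpu; return 0; }

/*
 * The suggestion above: keep a woken task on the CPU it last ran on
 * (cache affinity) unless the load balancer would migrate it anyway.
 */
static int select_wake_cpu(struct task *p)
{
	int cpu = p->prev_cpu;

	if (cpu_is_idle(cpu) || !balancer_would_move(cpu))
		return cpu;
	return -1;	/* fall back to a wider idle-CPU search */
}

int main(void)
{
	struct task p = { .prev_cpu = 2 };

	printf("wake on cpu %d\n", select_wake_cpu(&p));	/* 2 */
	return 0;
}
===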
On 01/17/2013 01:17 PM, Namhyung Kim wrote:
> On Wed, 16 Jan 2013 22:08:21 +0800, Alex Shi wrote:
>> On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
>>> Hi Mike,
>>>
>>> Thank you very much for such a clear and comprehensive explanation.
>>> So whe…
…tch as follows. hackbench/aim9 doesn't show a clean
performance change.
Actually we can get some benefit, though it will also be very slight. :)
BTW, it still needs another patch applied before this one; this is just to show the logic.
===
From 145ff27744c8ac04eda056739fe5aa907a00877e Mon Sep 17 00:00:00 2001
On Tue, Dec 18, 2012 at 5:53 PM, Vincent Guittot wrote:
> On 17 December 2012 16:24, Alex Shi wrote:
>> The scheme below tries to summarize the idea:
>>
>> Socket      | socket 0   | socket 1    | socket 2    | socket 3    |
>> LCPU        | 0 | 1-15   | 16 | 17-31  | 32 | 33-47  | 48 | 49-63  |
>> buddy conf0 | 0 | 0      | 1  | 16     | 2  | 32     | 3  | 48     |
>> buddy conf1 | 0 | 0…
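For concreteness, a small userspace sketch that reproduces the buddy conf0 row above for the 4-socket, 64-LCPU box (an illustration of the mapping only, not the series' code): non-leading CPUs pack onto their socket's first CPU, and each socket's first CPU packs onto a low CPU of socket 0.
===
#include <stdio.h>

#define CPUS_PER_SOCKET 16

static int buddy_conf0(int cpu)
{
	int socket = cpu / CPUS_PER_SOCKET;
	int first = socket * CPUS_PER_SOCKET;

	if (cpu == 0)
		return 0;	/* CPU0 is the final pack target */
	if (cpu == first)
		return socket;	/* 16->1, 32->2, 48->3 */
	return first;		/* 1-15->0, 17-31->16, ... */
}

int main(void)
{
	int samples[] = { 0, 1, 15, 16, 17, 32, 33, 48, 63 };

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("cpu %2d -> buddy %2d\n", samples[i],
		       buddy_conf0(samples[i]));
	return 0;
}
===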
…ter to be removed.
From 96bee9a03b2048f2686fbd7de0e2aee458dbd917 Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Mon, 17 Dec 2012 09:42:57 +0800
Subject: [PATCH 01/18] sched: remove SD_PREFER_SIBLING flag
The flag was introduced in commit b5d978e0c7e79a. Its purpose seems to be
trying to…
On 12/14/2012 05:33 PM, Vincent Guittot wrote:
> On 14 December 2012 02:46, Alex Shi wrote:
>> On 12/13/2012 11:48 PM, Vincent Guittot wrote:
>>> On 13 December 2012 15:53, Vincent Guittot
>>> wrote:
>>>> On 13 December 2012 15:25, Alex Shi wrote:
On 12/14/2012 03:45 PM, Mike Galbraith wrote:
> On Fri, 2012-12-14 at 14:36 +0800, Alex Shi wrote:
>> On 12/14/2012 12:45 PM, Mike Galbraith wrote:
>>>>> Do you have further ideas for the buddy CPU in such an example?
On 12/14/2012 12:45 PM, Mike Galbraith wrote:
>> Do you have further ideas for the buddy CPU in such an example?
>>>
>>> Which kind of sched_domain configuration do you have for such a system?
>>> And how many sched_domain levels do you have?
>>
>> It is the general x86 domain configuration, with…
On 12/13/2012 11:48 PM, Vincent Guittot wrote:
> On 13 December 2012 15:53, Vincent Guittot wrote:
>> On 13 December 2012 15:25, Alex Shi wrote:
>>> On 12/13/2012 06:11 PM, Vincent Guittot wrote:
>>>> On 13 December 2012 03:17, Alex Shi wrote:
>>>>>
On 12/13/2012 06:11 PM, Vincent Guittot wrote:
> On 13 December 2012 03:17, Alex Shi wrote:
>> On 12/12/2012 09:31 PM, Vincent Guittot wrote:
>>> During the creation of sched_domain, we define a pack buddy CPU for each CPU
>>> when one is available. We want to pack at all levels where a group of CPUs can
>>> be power gated independently from others.
On 12/13/2012 10:17 AM, Alex Shi wrote:
> On 12/12/2012 09:31 PM, Vincent Guittot wrote:
>> During the creation of sched_domain, we define a pack buddy CPU for each CPU
>> when one is available. We want to pack at all levels where a group of CPUs can
>> be power gated independently from others.
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
> This new flag, SD_SHARE_POWERDOMAIN, is used to reflect whether the groups of
> CPUs at a sched_domain level can reach a different power state or not. If
> clusters can be power gated independently, for example, the flag should be
> cleared at the CPU level.
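A minimal model of the flag's semantics (the flag name comes from the patch; the structure and helper below are simplified assumptions): packing is worthwhile only at levels where the flag is cleared, i.e. where the groups can change power state independently.
===
#include <stdio.h>

#define SD_SHARE_POWERDOMAIN 0x0100	/* value chosen for this sketch */

struct sched_domain_model {
	const char *name;
	unsigned int flags;
};

/* pack only at levels whose groups can be power gated separately,
 * i.e. where the flag is cleared */
static int can_pack_at(const struct sched_domain_model *sd)
{
	return !(sd->flags & SD_SHARE_POWERDOMAIN);
}

int main(void)
{
	/* cores in one cluster share a power domain; clusters do not */
	struct sched_domain_model core = { "core",    SD_SHARE_POWERDOMAIN };
	struct sched_domain_model cpu  = { "cluster", 0 };

	printf("%s: pack=%d\n", core.name, can_pack_at(&core));	/* 0 */
	printf("%s: pack=%d\n", cpu.name,  can_pack_at(&cpu));		/* 1 */
	return 0;
}
===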
On 12/12/2012 09:31 PM, Vincent Guittot wrote:
> During the creation of sched_domain, we define a pack buddy CPU for each CPU
> when one is available. We want to pack at all levels where a group of CPUs can
> be power gated independently from others.
> On a system that can't power gate a group of CPUs…
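Putting the flag and the buddy together, a sketch of how buddy selection could work during domain creation (illustrative assumptions, not the actual patch): walk up the levels and pack toward the first CPU of the widest level whose groups can be power gated independently.
===
#include <stdio.h>

#define SD_SHARE_POWERDOMAIN 0x0100	/* value chosen for this sketch */

struct sd_model {
	unsigned int flags;
	int first_cpu;			/* lowest CPU in this domain's span */
	struct sd_model *parent;
};

/* pack toward the first CPU of the widest level whose groups can be
 * power gated independently (flag cleared) */
static int find_pack_buddy(int cpu, struct sd_model *sd)
{
	int buddy = cpu;	/* default: nobody to pack onto */

	for (; sd; sd = sd->parent)
		if (!(sd->flags & SD_SHARE_POWERDOMAIN))
			buddy = sd->first_cpu;
	return buddy;
}

int main(void)
{
	/* CPU 5: its core level shares a power domain, the cluster level
	 * does not, so it packs toward the cluster's first CPU */
	struct sd_model cluster = { 0, 0, NULL };
	struct sd_model core = { SD_SHARE_POWERDOMAIN, 4, &cluster };

	printf("buddy of cpu 5: %d\n", find_pack_buddy(5, &core)); /* 0 */
	return 0;
}
===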