On Thu, Jan 24, 2013 at 11:16 PM, Alex Shi wrote:
> On 01/24/2013 06:08 PM, Ingo Molnar wrote:
>>
>> * Alex Shi wrote:
>>
>>> @@ -2539,7 +2539,11 @@ static void __update_cpu_load(struct rq *this_rq,
>>> 					unsigned long this_load,
>>>  void update_idle_cpu_load(struct rq *this_rq)
>>>  {
>>>  	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
>>> +#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
These are the base values used in load balancing; update them with the rq's
runnable load average, so that load balancing naturally takes the runnable
load average into account.
Signed-off-by: Alex Shi
---
 kernel/sched/core.c | 8 ++++++++
 kernel/sched/fair.c | 4 ++--
 2 files changed, 10 insertions(+), 2 deletions(-)