On 02/20/2013 11:22 PM, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 22:33 +0800, Alex Shi wrote:
>>> You don't actually compute the rq utilization, you only compute the
>>> utilization as per the fair class, so if there's significant RT
>>> activity it'll think the cpu is under-utilized, which I think will
>>> result in the wrong thing.
On 02/20/2013 11:20 PM, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 22:33 +0800, Alex Shi wrote:
>>> There's generally a better value than 100 when using computers..
>>> seeing how 100 is 64+32+4.
>>
>> I didn't find a good example for this, and I have no idea of your
>> suggestion; would you like to explain a bit more?
On Wed, 2013-02-20 at 22:33 +0800, Alex Shi wrote:
> > You don't actually compute the rq utilization, you only compute the
> > utilization as per the fair class, so if there's significant RT
> > activity it'll think the cpu is under-utilized, which I think will
> > result in the wrong thing.
>
On Wed, 2013-02-20 at 22:33 +0800, Alex Shi wrote:
> > There's generally a better value than 100 when using computers..
> > seeing how 100 is 64+32+4.
>
> I didn't find a good example for this, and I have no idea of your
> suggestion; would you like to explain a bit more?
Basically what you're doing e
On 02/20/2013 09:34 PM, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 17:39 +0530, Preeti U Murthy wrote:
>> Hi,
>>
/*
* This is the main, per-CPU runqueue data structure.
*
@@ -481,6 +484,7 @@ struct rq {
#endif
struct sched_avg avg;
+	unsigned int util;
On 02/20/2013 05:30 PM, Peter Zijlstra wrote:
> On Mon, 2013-02-18 at 13:07 +0800, Alex Shi wrote:
>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index fcdb21f..b9a34ab 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
On Wed, 2013-02-20 at 17:39 +0530, Preeti U Murthy wrote:
> Hi,
>
> >> /*
> >> * This is the main, per-CPU runqueue data structure.
> >> *
> >> @@ -481,6 +484,7 @@ struct rq {
> >> #endif
> >>
> >>	struct sched_avg avg;
> >> +	unsigned int util;
> >> };
> >>
> >> static inline int cpu_of(struct rq *rq)
On 02/20/2013 08:19 PM, Preeti U Murthy wrote:
> Hi everyone,
>
> On 02/18/2013 10:37 AM, Alex Shi wrote:
>> The cpu's utilization measures how busy the cpu is:
>> util = cpu_rq(cpu)->avg.runnable_avg_sum
>> / cpu_rq(cpu)->avg.runnable_avg_period;
>
> Why not cfs_rq->runnable_load_avg? I am concerned with what is the
> right metric.
Hi,
>> /*
>> * This is the main, per-CPU runqueue data structure.
>> *
>> @@ -481,6 +484,7 @@ struct rq {
>> #endif
>>
>> 	struct sched_avg avg;
>> +	unsigned int util;
>> };
>>
>> static inline int cpu_of(struct rq *rq)
>
> You don't actually compute the rq utilization, you only compute the
> utilization as per the fair class, so if there's significant RT
> activity it'll think the cpu is under-utilized, which I think will
> result in the wrong thing.
On Mon, 2013-02-18 at 13:07 +0800, Alex Shi wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fcdb21f..b9a34ab 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
The cpu's utilization measures how busy the cpu is:
	util = cpu_rq(cpu)->avg.runnable_avg_sum
		/ cpu_rq(cpu)->avg.runnable_avg_period;
Since the util is no more than 1, we use its percentage value in later
calculations, and set FULL_UTIL to 100%.
In later power a