On Tue, Dec 11, 2018 at 2:02 AM Mathieu Poirier
wrote:
>
> Good day Adrian,
>
> On Sat, 8 Dec 2018 at 05:05, Lei Wen wrote:
> >
> > Hi Mathieu,
> >
> > I am enabling etmv4 coresight over one Cortex-A7 soc, using 32bit kernel.
> > And I am following [1]
Hi Mathieu,
I am enabling ETMv4 CoreSight on a Cortex-A7 SoC, using a 32-bit kernel,
and I am following [1] to experiment with the addr_range feature.
The default addr_range is set to _stext~_etext, and it works fine with
ETB as the sink and ETM as the source. I can see there are valid kernel
On Fri, Apr 4, 2014 at 3:02 PM, noman pouigt wrote:
> Hello,
>
> Probably this question belongs to kernelnewbies
> list but I think i will get accurate answer from here.
>
> I am doing some optimization in kernel video driver
> code to reduce the latency from the time buffer
> is given to the time
So that in the very early boot stage we can call timekeeping
code without causing a system panic, since the clocksource is not
initialized yet.
And since the system's default clock is always jiffies, it is
safe to do so.
Signed-off-by: Lei Wen
---
include/linux/time.h | 1 +
init
As people may want to align the kernel log with some other processor
running on the same machine but not running the same copy of Linux, we
need to keep their logs aligned, so that it does not make the debug
process hard and confusing.
Signed-off-by: Lei Wen
---
kernel/printk/printk.c | 4 ++--
1 file
such an assumption in the old days.
So this patch set is supposed to restore that behavior.
BTW, I am not sure whether we could add an additional member to the printk
log structure, so that we could print out two pieces of log, one
including suspend time and the other not?
Lei Wen (3):
time: create
the old way, get_monotonic_boottime is a good
candidate, but it cannot be called after the suspend process has happened.
Thus, it prevents printk from being used in every corner.
Export a warn-less __get_monotonic_boottime to solve this issue.
Signed-off-by: Lei Wen
---
include/linux/time.h | 1
Hi Stephen,
On Thu, Apr 3, 2014 at 2:09 AM, Stephen Boyd wrote:
> On 04/02/14 04:02, Lei Wen wrote:
>> Since arm's arch_timer's counter would keep accumulated even in the
>> low power mode, including suspend state, it is very suitable to be
>> the persistent cl
d in such corner case.
Signed-off-by: Lei Wen
---
I am not sure whether it is good to add something like
generic_persistent_clock_read to the newly added kernel/time/sched_clock.c,
since from the arch timer's perspective, all it needs to do is pick up
the suspend period from the place where sched_
On Mon, Feb 24, 2014 at 3:07 PM, Peter Zijlstra wrote:
> On Mon, Feb 24, 2014 at 10:11:05AM +0800, Lei Wen wrote:
>> How about use the API as cpumask_test_and_clear_cpu?
>> Then below one line is enough.
>
> Its more expensive.
>
I see...
No problem for me then.
Acked-b
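For context, the trade-off being discussed is a plain test followed by a
conditional clear versus a single atomic test-and-clear. A minimal sketch
(the helper names below are illustrative; the cpumask calls are the real
<linux/cpumask.h> API):

#include <linux/cpumask.h>

/* Two-step variant: a plain read, and an atomic clear only when needed. */
static void clear_if_set_two_step(int cpu, struct cpumask *mask)
{
	if (cpumask_test_cpu(cpu, mask))	/* non-atomic test_bit() */
		cpumask_clear_cpu(cpu, mask);	/* atomic clear_bit() */
}

/* One-line variant: always an atomic read-modify-write, even when the bit
 * is already clear, which is why it is the more expensive option. */
static void clear_if_set_one_step(int cpu, struct cpumask *mask)
{
	cpumask_test_and_clear_cpu(cpu, mask);	/* atomic test_and_clear_bit() */
}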
gned-off-by: Mike Galbraith
> Signed-off-by: Peter Zijlstra
> Cc: Lei Wen
> Link: http://lkml.kernel.org/n/tip-vmme4f49psirp966pklm5...@git.kernel.org
> Signed-off-by: Thomas Gleixner
> Signed-off-by: Ingo Molnar
> ---
> kernel/sched/fair.c | 25 ++---
>
cpu from setting nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
Cc: Peter Zijlstra
Cc: Mike Galbraith
---
Many thanks to Mike for pointing out, from checking the crash result, that
the root span would be merged when the last cpu becomes isolated!
kernel/sched/fair.c | 8
1 file changed, 8
Mike,
On Fri, Feb 21, 2014 at 1:51 PM, Mike Galbraith wrote:
> On Fri, 2014-02-21 at 10:23 +0800, Lei Wen wrote:
>> Cpu which is put into quiescent mode, would remove itself
>> from kernel's sched_domain, and want others not disturb its
>> task running. But current sc
it by preventing such a cpu from setting nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..66194fc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/f
it by preventing such a cpu from setting nohz.idle_cpus_mask in the
first place.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 235cfa7..bc85022 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/f
On Thu, Feb 20, 2014 at 4:50 PM, Peter Zijlstra wrote:
> On Thu, Feb 20, 2014 at 10:42:51AM +0800, Lei Wen wrote:
>> >> - int ilb = cpumask_first(nohz.idle_cpus_mask);
>> >> + int ilb;
>> >> + int cpu = smp_processor_id();
>> >> +
On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra wrote:
> On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote:
>> Since cpu which is put into quiescent mode, would remove itself
>> from kernel's sched_domain. So we could use search sched_domain
>> method to check whet
Since a cpu which is put into quiescent mode removes itself
from the kernel's sched_domain, we can use a sched_domain search
to check whether this cpu does not want to be disturbed when
idle load balance would send an IPI to it.
Signed-off-by: Lei Wen
---
kernel/sched/f
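A minimal sketch of the idea described above (the check itself is an
assumption for illustration; nohz.idle_cpus_mask, nohz.nr_cpus, cpu_rq()
and rq->sd are the existing fair.c/scheduler names):

/* Sketch: skip nohz bookkeeping for a cpu that detached its sched_domain. */
static void nohz_enter_idle_sketch(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	struct sched_domain *sd;

	rcu_read_lock();
	sd = rcu_dereference(rq->sd);
	rcu_read_unlock();

	/*
	 * A cpu in quiescent mode has removed itself from the scheduler
	 * domains, so rq->sd is NULL.  Do not advertise it as an idle
	 * balance candidate, otherwise find_new_ilb() may pick it and
	 * kick it with an IPI.
	 */
	if (!sd)
		return;

	cpumask_set_cpu(cpu, nohz.idle_cpus_mask);
	atomic_inc(&nohz.nr_cpus);
}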
On Wed, Jan 22, 2014 at 10:07 PM, Thomas Gleixner wrote:
> On Wed, 22 Jan 2014, Lei Wen wrote:
>> Recently I want to do the experiment for cpu isolation over 3.10 kernel.
>> But I find the isolated one is periodically waken up by IPI interrupt.
>>
>> By checking the
Hi Thomas,
Recently I wanted to do an experiment with cpu isolation on a 3.10 kernel.
But I find the isolated cpu is periodically woken up by IPI interrupts.
By checking the trace, I find those IPIs are generated by add_timer_on,
which calls wake_up_nohz_cpu and wakes up the already-idle cpu.
W
On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker
wrote:
> On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote:
>> On 20 January 2014 19:29, Lei Wen wrote:
>> > Hi Viresh,
>>
>> Hi Lei,
>>
>> > I have one question regarding unbounded w
Hi Viresh,
On Wed, Jan 15, 2014 at 5:27 PM, Viresh Kumar wrote:
> Hi Again,
>
> I am now successful in isolating a CPU completely using CPUsets,
> NO_HZ_FULL and CPU hotplug..
>
> My setup and requirements for those who weren't following the
> earlier mails:
>
> For networking machines it is requ
Hi Mike,
On Mon, Dec 30, 2013 at 12:08 PM, Mike Galbraith wrote:
> On Mon, 2013-12-30 at 11:14 +0800, Lei Wen wrote:
>> Since we would update rq clock at task enqueue/dequeue, or schedule
>> tick. If we don't update the rq clock when our previous task get
>> preempted,
want a more precise accounting of task start time and duration,
we'd better ensure the rq clock gets updated when the task begins to run.
Best regards,
Lei
Signed-off-by: Lei Wen
---
kernel/sched/core.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/ke
On Mon, Sep 9, 2013 at 7:15 PM, Peter Zijlstra wrote:
> On Mon, Sep 02, 2013 at 02:26:45PM +0800, Lei Wen wrote:
>> Hi Peter,
>>
>> I find one list API usage may not be correct in current fair.c code.
>> In move_one_task function, it may iterate through whole cfs_tasks
Hi Peter,
I find one list API usage that may not be correct in the current fair.c code.
In the move_one_task function, it may iterate through the whole cfs_tasks
list to get one task to move.
But dequeue_task() may delete a task node from that list
without lock protection, so we could see from
list
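To illustrate the kind of race being described: one path walks a list under
a lock while another path deletes entries without taking that lock. A small
sketch with made-up structure names (only the list helpers are the real
<linux/list.h> API):

#include <linux/list.h>
#include <linux/spinlock.h>

struct item {
	struct list_head node;
	int payload;
};

static LIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);

/* Walker: holds the lock, so it expects a stable list. */
static int first_payload(void)
{
	struct item *it;
	int ret = -1;

	spin_lock(&items_lock);
	list_for_each_entry(it, &items, node) {	/* follows it->node.next */
		ret = it->payload;
		break;
	}
	spin_unlock(&items_lock);
	return ret;
}

/* Remover: if this runs on another cpu without items_lock, the walker
 * above can follow a pointer that list_del() has just poisoned. */
static void remove_item_unlocked(struct item *it)
{
	list_del(&it->node);	/* node.next/prev become LIST_POISON1/2 */
}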
On Mon, Aug 26, 2013 at 12:36 PM, Paul Turner wrote:
> On Sun, Aug 25, 2013 at 7:56 PM, Lei Wen wrote:
>> On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra
>> wrote:
>>> From: Joonsoo Kim
>>>
>>> There is no reason to maintain separate variabl
On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra wrote:
> From: Joonsoo Kim
>
> There is no reason to maintain separate variables for this_group
> and busiest_group in sd_lb_stat, except saving some space.
> But this structure is always allocated in stack, so this saving
> isn't really benificial
Paul,
On Tue, Aug 13, 2013 at 5:25 PM, Paul Turner wrote:
> On Tue, Aug 13, 2013 at 1:18 AM, Lei Wen wrote:
>> Hi Paul,
>>
>> On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote:
>>> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra
>>> wrote:
>>>
Signed-off-by: Lei Wen
---
kernel/sched/sched.h |6 ++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef0a7b2..b8f0924 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -248,6 +248,12 @@ struct cfs_bandwidth
Since nr_running and h_nr_running present different meanings, we should
take care of their usage in the scheduler (see the short illustration
after the patch list below).
Lei Wen (8):
sched: change load balance number to h_nr_running of run queue
sched: change cpu_avg_load_per_task using h_nr_running
sched: change
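As a concrete illustration of the difference (the topology and the numbers
are a hypothetical example, not taken from the patches): with group
scheduling, cfs_rq->nr_running counts the entities queued directly on that
cfs_rq, including group entities, while cfs_rq->h_nr_running counts every
task in the hierarchy below it.

/*
 * Hypothetical topology on one cpu:
 *
 *   root cfs_rq
 *   +-- task A              (a plain task entity)
 *   +-- group G's entity    (one group se)
 *         +-- G's cfs_rq
 *               +-- task B
 *               +-- task C
 *               +-- task D
 *
 * For the root cfs_rq:
 *   cfs_rq->nr_running   == 2   (task A + group G's entity)
 *   cfs_rq->h_nr_running == 4   (tasks A, B, C, D)
 *
 * Code that asks "how many cfs *tasks* are runnable here?" (load
 * balancing, pick_next_task_fair, per-task average load) needs
 * h_nr_running; nr_running undercounts as soon as groups exist.
 */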
ove.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3656603..4c96124 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5349,7 +5349,7 @@ static
Since find_busiest_queue tries to avoid doing load balance for a runqueue
which has only one cfs task whose load is above the calculated imbalance
value, we should use the cfs h_nr_running instead of
the rq's nr_running.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |3 ++-
1 files changed, 2
Since update_sg_lb_stats is used to calculate sched_group load
statistics for cfs-type tasks, it should use h_nr_running instead of
the rq's nr_running.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b
Since pick_next_task_fair only wants to ensure there is some task in the
run queue to be picked up, it should use h_nr_running instead of
nr_running, since nr_running cannot represent all tasks when groups exist.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1
Since cpu_avg_load_per_task is used only by the cfs scheduler, it
should present the average cfs-type task load on the current run queue.
Thus we change it to use h_nr_running to better reflect its meaning.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 files changed, 1
control
mechanism. Thus its sleep time should not be taken into the
runnable avg load calculation.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6b99b4..9869d4d 100644
--- a
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f918635..d6153c8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5096,17 +5096,19 @@ redo:
schedsta
Hi Paul,
On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote:
> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra wrote:
>> On Tue, Aug 13, 2013 at 12:45:12PM +0800, Lei Wen wrote:
>>> > Not quite right; I think you need busiest->cfs.h_nr_running.
>>> > cfs
Peter,
On Mon, Aug 12, 2013 at 10:43 PM, Peter Zijlstra wrote:
> On Tue, Aug 06, 2013 at 09:23:46PM +0800, Lei Wen wrote:
>> Hi Paul,
>>
>> I notice in load_balance function, it would check busiest->nr_running
>> to decide whether to perform the real task movement.
Hi Paul,
I notice that in the load_balance function, it checks busiest->nr_running
to decide whether to perform the real task movement.
But in some cases, I saw that nr_running did not match
the tasks in the queue, which seems to make the scheduler do many redundant
checks.
What I mean is like the
Hi list,
I recently found a strange issue on the 3.4 kernel.
The scenario is doing a hotplug test on an ARM platform, and when the
hotplugged-out cpu1 wants to get back in again, it seems to be stuck at
cpu_stop_cpu_callback.
The task backtrace is as below:
PID: 21749 TASK: d194b300 CPU: 0 COMMAND: "k
Hi Peter,
Do you have any further suggestions for this patch? :)
Thanks,
Lei
On Tue, Jul 2, 2013 at 8:15 PM, Lei Wen wrote:
> Since we could track task in the entity level now, we may want to
> investigate tasks' running status by recording the trace info, so that
> could make
eople
may get confused.
Signed-off-by: Lei Wen
Cc: Alex Shi
Cc: Paul Turner
---
kernel/sched/fair.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2290469..53224d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/f
Since we can now track tasks at the entity level, we may want to
investigate tasks' running status by recording trace info, so that we
could do some tuning if needed.
Signed-off-by: Lei Wen
Cc: Alex Shi
Cc: Peter Zijlstra
Cc: Kamalesh Babulal
---
include/trace/events/sched.h |
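For readers unfamiliar with how such events are added, a minimal sketch of
one trace event in the usual include/trace/events/sched.h style (the event
name and fields here are assumptions for illustration; the actual events in
the patch are not shown in this excerpt):

TRACE_EVENT(sched_task_load_contrib,

	TP_PROTO(struct task_struct *tsk, unsigned long load_contrib),

	TP_ARGS(tsk, load_contrib),

	TP_STRUCT__entry(
		__array(char,		comm,	TASK_COMM_LEN)
		__field(pid_t,		pid)
		__field(unsigned long,	load_contrib)
	),

	TP_fast_assign(
		memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
		__entry->pid		= tsk->pid;
		__entry->load_contrib	= load_contrib;
	),

	TP_printk("comm=%s pid=%d load_contrib=%lu",
		  __entry->comm, __entry->pid, __entry->load_contrib)
);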
make trace event parameter passing simple, and only extend
its detail in the header file definition. Thanks Peter for pointing this
out.
V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using
sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution!
Paul,
On Mon, Jul 1, 2013 at 10:07 PM, Paul Turner wrote:
> Could you please restate the below?
>
> On Mon, Jul 1, 2013 at 5:33 AM, Lei Wen wrote:
>> Since we are going to calculate cfs_rq's average ratio by
>> runnable_load_avg/load.weight
>
> I don
Hi Peter,
On Mon, Jul 1, 2013 at 8:44 PM, Peter Zijlstra wrote:
> On Mon, Jul 01, 2013 at 08:33:21PM +0800, Lei Wen wrote:
>> Since we could track task in the entity level now, we may want to
>> investigate tasks' running status by recording the trace info, so that
>>
Since we are going to calculate the cfs_rq's average ratio as
runnable_load_avg/load.weight, not increasing load.weight prior to
enqueue_entity_load_avg may lead to a cfs_rq average ratio higher
than 100%.
Adjust the sequence so that the ratio is always kept below 100%.
Signed-off-b
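To see why the ordering matters, a hypothetical numeric example (the
numbers are made up; enqueue_entity_load_avg() and account_entity_enqueue()
are the fair.c functions involved, but the exact reordering is not shown in
this excerpt):

/*
 * cfs_rq starts with one queued task:
 *   load.weight = 1024, runnable_load_avg = 1000   (ratio ~98%)
 *
 * Enqueue a second task with weight 1024 and load_avg_contrib 1000:
 *
 *   old order: enqueue_entity_load_avg() first
 *     runnable_load_avg = 2000 while load.weight is still 1024
 *     ratio = 2000/1024 ~ 195%   <- transiently above 100%
 *     account_entity_enqueue() only then raises load.weight to 2048
 *
 *   new order: update load.weight first
 *     load.weight = 2048, then runnable_load_avg = 2000
 *     ratio = 2000/2048 ~ 98%    <- stays below 100%
 */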
oad distribution status in the whole system
V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using
sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution!
Lei Wen (2):
sched: add trace events for task and rq usage tracking
sched: update cfs_rq w
Since we can now track tasks at the entity level, we may want to
investigate tasks' running status by recording trace info, so that we
could do some tuning if needed.
Signed-off-by: Lei Wen
---
include/trace/events/sched.h | 57 ++
kernel/
Hi Kamalesh,
On Mon, Jul 1, 2013 at 5:43 PM, Kamalesh Babulal
wrote:
> * Lei Wen [2013-07-01 15:10:32]:
>
>> Since we could track task in the entity level now, we may want to
>> investigate tasks' running status by recording the trace info, so that
>> cou
Alex,
On Mon, Jul 1, 2013 at 4:06 PM, Alex Shi wrote:
> On 07/01/2013 03:10 PM, Lei Wen wrote:
>> Thanks for the per-entity tracking feature, we could know the details of
>> each task by its help.
>> This patch add its trace support, so that we could quickly know the system
Since we are going to calculate the cfs_rq's average ratio as
runnable_load_avg/load.weight, not increasing load.weight prior to
enqueue_entity_load_avg may lead to a cfs_rq average ratio higher
than 100%.
Adjust the sequence so that the ratio is always kept below 100%.
Signed-off-b
ratio = cfs_rq->runnable_load_avg/cfs_rq->load.weight
Lei Wen (2):
sched: add trace events for task and rq usage tracking
sched: update cfs_rq weight earlier in enqueue_entity
include/trace/events/sched.h | 73 ++
kernel/sched/fair.c | 31 +++
Since we can now track tasks at the entity level, we may want to
investigate tasks' running status by recording trace info, so that we
could do some tuning if needed.
Signed-off-by: Lei Wen
---
include/trace/events/sched.h | 73 ++
kernel/
On Fri, Jun 21, 2013 at 7:09 PM, Alex Shi wrote:
>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..9640c66 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5856,7 +5856,8 @@ static void switched_from_fair(struct rq *rq,
struct task_
Alex,
On Fri, Jun 21, 2013 at 4:56 PM, Alex Shi wrote:
> On 06/21/2013 10:50 AM, Lei Wen wrote:
>> I see your point... I made the mistake that update the wrong patch...
>> Please help check this one.
>>
>> commit 5fc3d5c74f8359ef382d9a20ffe657ffc237c109
>> Autho
Morten,
On Thu, Jun 20, 2013 at 6:23 PM, Morten Rasmussen
wrote:
> On Sat, Jun 15, 2013 at 01:09:12PM +0100, Lei Wen wrote:
>> On Fri, Jun 14, 2013 at 9:59 PM, Alex Shi wrote:
>> > On 06/14/2013 06:02 PM, Lei Wen wrote:
>> >
Alex,
On Fri, Jun 21, 2013 at 10:39 AM, Alex Shi wrote:
> On 06/21/2013 10:30 AM, Lei Wen wrote:
>> Hi Alex,
>>
>> On Thu, Jun 20, 2013 at 10:59 PM, Alex Shi wrote:
>>> On 06/20/2013 10:46 AM, Lei Wen wrote:
>>>>
>>>>
>
Hi Alex,
On Thu, Jun 20, 2013 at 10:59 PM, Alex Shi wrote:
> On 06/20/2013 10:46 AM, Lei Wen wrote:
>>
>>
>> But here I have a question, there is another usage of
>> __synchronzie_entity_decay
>> in current kernel, in the switched_from_fair function.
>&g
On Thu, Jun 20, 2013 at 9:43 AM, Lei Wen wrote:
> Hi Alex,
>
> On Mon, Jun 17, 2013 at 11:41 PM, Alex Shi wrote:
>> On 06/17/2013 07:51 PM, Paul Turner wrote:
>>> Can you add something like:
>>>
>>> + /*
>>> +
Hi Alex,
On Mon, Jun 17, 2013 at 11:41 PM, Alex Shi wrote:
> On 06/17/2013 07:51 PM, Paul Turner wrote:
>> Can you add something like:
>>
>> + /*
>> +* Task re-woke on same cpu (or else
>> migrate_task_rq_fair()
>> +* would have made count negative);
Hi Peter,
On Tue, Jun 18, 2013 at 5:55 PM, Peter Zijlstra wrote:
> On Sun, Jun 09, 2013 at 11:59:36PM +0800, Lei Wen wrote:
>> Hi Peter,
>>
>> While I am checking the preempt related code, I find a interesting part.
>> That is when preempt_schedule is called, for
Actually, all the items below could be replaced by scaled_busy_load_per_task:
(sds->busiest_load_per_task * SCHED_POWER_SCALE)
/sds->busiest->sgp->power;
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 19 ---
1 file changed, 8 insertions(+),
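One way to read the expression above: the raw per-task load is normalized
by the group's cpu power, so that groups of different capacity are compared
on the same SCHED_POWER_SCALE basis (a restatement for clarity, not code
from the patch):

/*
 * scaled_busy_load_per_task =
 *         busiest_load_per_task * SCHED_POWER_SCALE / busiest->sgp->power
 *
 * With SCHED_POWER_SCALE = 1024, a group whose power is 512 (half
 * capacity) sees its per-task load doubled before the comparison, so
 * "one task's worth of load" means the same thing for every group.
 */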
be cpu power gain
in moving the load.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 55 +++
1 file changed, 38 insertions(+), 17 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28052fa..fd9cbee 100644
--- a/kernel/sched/
>, to stop this loop.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c478022..3be7844 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4717,7 +4717,7 @@ void fix
e:
if ((max_cpu_load - min_cpu_load) >= avg_load_per_task &&
(max_nr_running - min_nr_running) > 1)
It makes (512-128) >= ((512+128)/4), i.e. 384 >= 160, which leads to an
imbalance conclusion...
Scale the load to avoid such a case.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |
sds->avg_load when there is no balanced group inside the domain
V4: env->imbalance should be applied with the unscaled value; fix accordingly.
Fix one ping-pong adjustment for a special case.
Lei Wen (4):
sched: reduce calculation effort in fix_small_imbalance
sched: scale the busy
busy_load_per_task >=
(scaled_busy_load_per_task * imbn)
This would make load balance happen even when the busiest group's load is
less than the local group's load...
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fai
Apparently we don't want to see sds->busiest_nr_running smaller than
sds->busiest_group_capacity while our load_above_capacity is
an "unsigned long" type.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
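The underlying pitfall is plain unsigned subtraction; a standalone
illustration (the variable names mirror the ones above, the values are
made up):

#include <stdio.h>

int main(void)
{
	unsigned long busiest_nr_running = 1;		/* fewer tasks ...   */
	unsigned long busiest_group_capacity = 2;	/* ... than capacity */
	unsigned long load_above_capacity;

	/* Without a guard the subtraction wraps instead of going negative,
	 * producing a huge bogus "load above capacity". */
	load_above_capacity = busiest_nr_running - busiest_group_capacity;
	printf("%lu\n", load_above_capacity);	/* 18446744073709551615 on 64-bit */

	/* Guarded version: only compute it when it is meaningful. */
	if (busiest_nr_running > busiest_group_capacity)
		load_above_capacity = busiest_nr_running - busiest_group_capacity;
	else
		load_above_capacity = 0;
	printf("%lu\n", load_above_capacity);	/* 0 */

	return 0;
}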
e:
if ((max_cpu_load - min_cpu_load) >= avg_load_per_task &&
(max_nr_running - min_nr_running) > 1)
It makes (512-128) >= ((512+128)/4), i.e. 384 >= 160, which leads to an
imbalance conclusion...
Scale the load to avoid such a case.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |
Actually, all the items below could be replaced by scaled_busy_load_per_task:
(sds->busiest_load_per_task * SCHED_POWER_SCALE)
/sds->busiest->sgp->power;
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 19 ---
1 file changed, 8 insertions(+),
sds->avg_load when there is no balanced group inside the domain
Lei Wen (3):
sched: reduce calculation effort in fix_small_imbalance
sched: scale the busy and this queue's per-task load before compare
sched: scale cpu load for judgment of group imbalance
kernel/sched/fa
be cpu power gain
in moving the load.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 38 +-
1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28052fa..6173095 100644
--- a/kernel/sched/fair.c
+++ b/
e:
if ((max_cpu_load - min_cpu_load) >= avg_load_per_task &&
(max_nr_running - min_nr_running) > 1)
It makes (512-128) >= ((512+128)/4), i.e. 384 >= 160, which leads to an
imbalance conclusion...
Scale the load to avoid such a case.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c |
Actually, all the items below could be replaced by scaled_busy_load_per_task:
(sds->busiest_load_per_task * SCHED_POWER_SCALE)
/sds->busiest->sgp->power;
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 19 ---
1 file changed, 8 insertions(+),
be cpu power gain
in moving the load.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 28 +++-
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28052fa..77a149c 100644
--- a/kernel/sched/fair.c
+++ b/kernel
Here are three patches which correct the scale usage in both fix_small_imbalance
and update_sg_lb_stats,
and add a comment about when fix_small_imbalance would cause a load change.
V2: fix scale usage for update_sg_lb_stats
Lei Wen (3):
sched: reduce calculation effort in fix_small_imbalance
Hi Michael,
On Mon, Jun 17, 2013 at 2:44 PM, Michael Wang
wrote:
> On 06/17/2013 01:08 PM, Lei Wen wrote:
>> Hi Michael,
>>
>> On Mon, Jun 17, 2013 at 11:27 AM, Michael Wang
>> wrote:
>>> Hi, Lei
>>>
>>> On 06/17/2013 10:21 AM, Lei Wen w
Hi Peter,
On Mon, Jun 17, 2013 at 5:20 PM, Peter Zijlstra wrote:
> On Fri, Jun 14, 2013 at 06:02:45PM +0800, Lei Wen wrote:
>> Hi Alex,
>>
>> On Fri, Jun 7, 2013 at 3:20 PM, Alex Shi wrote:
>> > We need initialize the se.avg.{decay_count, load_avg_contr
Hi Michael,
On Mon, Jun 17, 2013 at 11:27 AM, Michael Wang
wrote:
> Hi, Lei
>
> On 06/17/2013 10:21 AM, Lei Wen wrote:
>> nr_busy_cpus in sched_group_power structure cannot present the purpose
>> for judging below statement:
>> "this cpu's scheduler gr
ginal purpose to add this logic still looks good.
So we move this kind of logic to find_new_ilb, so that we can pick
a peer out of our resource-sharing domain whenever possible.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 28 ++--
1 file changed, 22 insertions(+), 6
On Fri, Jun 14, 2013 at 9:59 PM, Alex Shi wrote:
> On 06/14/2013 06:02 PM, Lei Wen wrote:
>>> > enqueue_entity
>>> > enqueue_entity_load_avg
>>> >
>>> > and make forking balancing imbalance since incorrect load_avg_contrib
Hi Alex,
On Fri, Jun 7, 2013 at 3:20 PM, Alex Shi wrote:
> We need initialize the se.avg.{decay_count, load_avg_contrib} for a
> new forked task.
> Otherwise random values of above variables cause mess when do new task
> enqueue:
> enqueue_task_fair
> enqueue_entity
> enqu
Actually, all the items below could be replaced by scaled_busy_load_per_task:
(sds->busiest_load_per_task * SCHED_POWER_SCALE)
/sds->busiest->sgp->power;
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 19 ---
1 file changed, 8 insertions(+),
Here are two patches which correct the scale usage in fix_small_imbalance,
and add a comment about when fix_small_imbalance would cause a load change.
Lei Wen (2):
sched: reduce calculation effort in fix_small_imbalance
sched: scale the busy and this queue's per-task load before co
be cpu power gain
in moving the load.
Signed-off-by: Lei Wen
---
kernel/sched/fair.c | 28 +++-
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28052fa..77a149c 100644
--- a/kernel/sched/fair.c
+++ b/kernel
Hi Peter,
While checking the preempt-related code, I found an interesting part.
That is, when preempt_schedule is called, its preempt_count has
PREEMPT_ACTIVE added, so in __schedule() the task cannot be dequeued from
the rq by deactivate_task.
Thus in put_prev_task, which is called a little later
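For reference, the decision being described happens around the dequeue in
__schedule(); roughly (a simplified paraphrase of the 3.x-era code, not a
verbatim quote):

	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
		if (unlikely(signal_pending_state(prev->state, prev)))
			prev->state = TASK_RUNNING;
		else
			deactivate_task(rq, prev, DEQUEUE_SLEEP);
	}
	/*
	 * When preempt_schedule() has set PREEMPT_ACTIVE, the test above
	 * fails, so even a task that already set prev->state stays on the
	 * runqueue, and put_prev_task() simply puts it back.
	 */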
On Tue, Mar 12, 2013 at 2:13 PM, Tejun Heo wrote:
> On Tue, Mar 12, 2013 at 02:01:16PM +0800, Lei Wen wrote:
>> I see...
>> How about only check those workqueue structure not on stack?
>> For current onstack usage is rare, and should be easier to check with.
>
> No, k
On Tue, Mar 12, 2013 at 1:40 PM, Tejun Heo wrote:
> On Tue, Mar 12, 2013 at 01:34:56PM +0800, Lei Wen wrote:
>> > Memory areas aren't always zero on allocation.
>>
>> Shouldn't work structure be allocated with kzalloc?
>
> It's not required to.
On Tue, Mar 12, 2013 at 1:24 PM, Tejun Heo wrote:
> On Tue, Mar 12, 2013 at 01:18:01PM +0800, Lei Wen wrote:
>> > You're initializing random piece of memory which may contain any
>> > garbage and triggering BUG if some bit is set on it. No, you can't do
>> &
Tejun,
On Tue, Mar 12, 2013 at 1:12 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Mar 12, 2013 at 01:08:15PM +0800, Lei Wen wrote:
>> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
>> index 8afab27..425d5a2 100644
>> --- a/include/linux/workque
Tejun,
On Thu, Mar 7, 2013 at 9:15 AM, Lei Wen wrote:
> Hi Tejun,
>
> On Thu, Mar 7, 2013 at 3:14 AM, Tejun Heo wrote:
>> Hello, Lei.
>>
>> On Wed, Mar 06, 2013 at 10:39:15PM +0800, Lei Wen wrote:
>>> We find
Hi Tejun,
On Thu, Mar 7, 2013 at 3:14 AM, Tejun Heo wrote:
> Hello, Lei.
>
> On Wed, Mar 06, 2013 at 10:39:15PM +0800, Lei Wen wrote:
>> We find a race condition as below:
>> CPU0 CPU1
>> timer inter
Hi Tejun
On Wed, Mar 6, 2013 at 12:32 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Mar 05, 2013 at 03:31:45PM +0800, Lei Wen wrote:
>> With checking memory, we find work->data becomes 0x300, when it try
>> to call get_work_cwq
>
> Why would that become 0x300? Who
Hi Tejun,
We met one panic issue related to workqueue on the 3.4.5 Linux kernel.
The panic log is:
[153587.035369] Unable to handle kernel NULL pointer dereference at
virtual address 0004
[153587.043731] pgd = e1e74000
[153587.046691] [0004] *pgd=
[153587.050567] Internal error: Oops