Re: Coresight etmv4 enable over 32bit kernel

2018-12-11 Thread Lei Wen
On Tue, Dec 11, 2018 at 2:02 AM Mathieu Poirier wrote: > > Good day Adrian, > > On Sat, 8 Dec 2018 at 05:05, Lei Wen wrote: > > > > Hi Mathieu, > > > > I am enabling etmv4 coresight over one Cortex-A7 soc, using 32bit kernel. > > And I am following [1]

Coresight etmv4 enable over 32bit kernel

2018-12-08 Thread Lei Wen
Hi Mathieu, I am enabling etmv4 coresight over a Cortex-A7 SoC, using a 32-bit kernel. And I am following [1] to do an experiment regarding the addr_range feature. The default addr_range is set as _stext~_etext, and it works fine with etb as sink, and etm as source. I could see there are valid kernel

Re: Query for time tracking between userspace and kernelspace

2014-04-04 Thread Lei Wen
On Fri, Apr 4, 2014 at 3:02 PM, noman pouigt wrote: > Hello, > > Probably this question belongs to kernelnewbies > list but I think i will get accurate answer from here. > > I am doing some optimization in kernel video driver > code to reduce the latency from the time buffer > is given to the time

[PATCH 2/3] timekeeping: move clocksource init to the early place

2014-04-03 Thread Lei Wen
So that in the very early boot stage, we could call timekeeping code without causing a system panic, since the clock is not initialized yet. And since the system default clock is always jiffies, it shall be safe to do so. Signed-off-by: Lei Wen --- include/linux/time.h | 1 + init

[PATCH 3/3] printk: using booting time as the timestamp

2014-04-03 Thread Lei Wen
As people may want to align the kernel log with some other processor running on the same machine but not the same copy of Linux, we need to keep their logs aligned, so that it does not make the debug process hard and confusing. Signed-off-by: Lei Wen --- kernel/printk/printk.c | 4 ++-- 1 file

[PATCH 0/3] switch printk timestamp to use booting time

2014-04-03 Thread Lei Wen
such assumption in the old days. So this patch set is supposed to recover such behavior again. BTW, I am not sure whether we could add an additional member in the printk log structure, so that we could print out two pieces of log, with one including suspend time while the other does not? Lei Wen (3): time: create

[PATCH 1/3] time: create __get_monotonic_boottime for WARNless calls

2014-04-03 Thread Lei Wen
the old way, get_monotonic_boottime is a good candidate, but it cannot be called after the suspend process has happened. Thus, it prevents printk from being used in every corner. Export one WARN-less __get_monotonic_boottime to solve this issue. Signed-off-by: Lei Wen --- include/linux/time.h | 1

Re: [PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
Hi Stephen, On Thu, Apr 3, 2014 at 2:09 AM, Stephen Boyd wrote: > On 04/02/14 04:02, Lei Wen wrote: >> Since arm's arch_timer's counter would keep accumulated even in the >> low power mode, including suspend state, it is very suitable to be >> the persistent cl

[PATCH] clocksource: register persistent clock for arm arch_timer

2014-04-02 Thread Lei Wen
d in such corner case. Signed-off-by: Lei Wen --- I am not sure whether it is good to add something like generic_persistent_clock_read in the newly added kernel/time/sched_clock.c? Since from the arch timer's perspective, all it needs to do is pick the suspend period from the place where sched_

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
On Mon, Feb 24, 2014 at 3:07 PM, Peter Zijlstra wrote: > On Mon, Feb 24, 2014 at 10:11:05AM +0800, Lei Wen wrote: >> How about use the API as cpumask_test_and_clear_cpu? >> Then below one line is enough. > > Its more expensive. > I see... No problem for me then. Acked-b

Re: [tip:sched/core] sched, nohz: Exclude isolated cores from load balancing

2014-02-23 Thread Lei Wen
gned-off-by: Mike Galbraith > Signed-off-by: Peter Zijlstra > Cc: Lei Wen > Link: http://lkml.kernel.org/n/tip-vmme4f49psirp966pklm5...@git.kernel.org > Signed-off-by: Thomas Gleixner > Signed-off-by: Ingo Molnar > --- > kernel/sched/fair.c | 25 ++--- >

[PATCH v3] sched: keep quiescent cpu out of idle balance loop

2014-02-21 Thread Lei Wen
cpu set nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen Cc: Peter Zijlstra Cc: Mike Galbraith --- Many thanks to Mike for pointing out that the root span would be merged when the last cpu becomes isolated, from checking the crash result! kernel/sched/fair.c | 8 1 file changed, 8

Re: [PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
Mike, On Fri, Feb 21, 2014 at 1:51 PM, Mike Galbraith wrote: > On Fri, 2014-02-21 at 10:23 +0800, Lei Wen wrote: >> Cpu which is put into quiescent mode, would remove itself >> from kernel's sched_domain, and want others not disturb its >> task running. But current sc

[PATCH v2] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..66194fc 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/f

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
it by preventing such a cpu from setting nohz.idle_cpus_mask in the first place. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 235cfa7..bc85022 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/f

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-20 Thread Lei Wen
On Thu, Feb 20, 2014 at 4:50 PM, Peter Zijlstra wrote: > On Thu, Feb 20, 2014 at 10:42:51AM +0800, Lei Wen wrote: >> >> - int ilb = cpumask_first(nohz.idle_cpus_mask); >> >> + int ilb; >> >> + int cpu = smp_processor_id(); >> >> +

Re: [PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-19 Thread Lei Wen
On Wed, Feb 19, 2014 at 5:04 PM, Peter Zijlstra wrote: > On Wed, Feb 19, 2014 at 01:20:30PM +0800, Lei Wen wrote: >> Since cpu which is put into quiescent mode, would remove itself >> from kernel's sched_domain. So we could use search sched_domain >> method to check whet

[PATCH] sched: keep quiescent cpu out of idle balance loop

2014-02-18 Thread Lei Wen
Since a cpu which is put into quiescent mode would remove itself from the kernel's sched_domain, we could use the sched_domain search method to check whether this cpu doesn't want to be disturbed, as idle load balance would send an IPI to it. Signed-off-by: Lei Wen --- kernel/sched/f

Re: Is it ok for deferrable timer wakeup the idle cpu?

2014-01-22 Thread Lei Wen
On Wed, Jan 22, 2014 at 10:07 PM, Thomas Gleixner wrote: > On Wed, 22 Jan 2014, Lei Wen wrote: >> Recently I want to do the experiment for cpu isolation over 3.10 kernel. >> But I find the isolated one is periodically waken up by IPI interrupt. >> >> By checking the

Is it ok for deferrable timer wakeup the idle cpu?

2014-01-22 Thread Lei Wen
Hi Thomas, Recently I wanted to do an experiment with cpu isolation over the 3.10 kernel. But I find the isolated one is periodically woken up by IPI interrupts. By checking the trace, I find those IPI are generated by add_timer_on, which calls wake_up_nohz_cpu, and wakes up the already idle cpu. W

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker wrote: > On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote: >> On 20 January 2014 19:29, Lei Wen wrote: >> > Hi Viresh, >> >> Hi Lei, >> >> > I have one question regarding unbounded w

Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

2014-01-20 Thread Lei Wen
Hi Viresh, On Wed, Jan 15, 2014 at 5:27 PM, Viresh Kumar wrote: > Hi Again, > > I am now successful in isolating a CPU completely using CPUsets, > NO_HZ_FULL and CPU hotplug.. > > My setup and requirements for those who weren't following the > earlier mails: > > For networking machines it is requ

Re: [RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
Hi Mike, On Mon, Dec 30, 2013 at 12:08 PM, Mike Galbraith wrote: > On Mon, 2013-12-30 at 11:14 +0800, Lei Wen wrote: >> Since we would update rq clock at task enqueue/dequeue, or schedule >> tick. If we don't update the rq clock when our previous task get >> preempted,

[RFC] sched: update rq clock when only get preempt

2013-12-29 Thread Lei Wen
want more precise accounting for the task start and duration time, we'd better ensure the rq clock gets updated when it begins to run. Best regards, Lei Signed-off-by: Lei Wen --- kernel/sched/core.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/ke

Re: Question regarding list_for_each_entry_safe usage in move_one_task

2013-09-09 Thread Lei Wen
On Mon, Sep 9, 2013 at 7:15 PM, Peter Zijlstra wrote: > On Mon, Sep 02, 2013 at 02:26:45PM +0800, Lei Wen wrote: >> Hi Peter, >> >> I find one list API usage may not be correct in current fair.c code. >> In move_one_task function, it may iterate through whole cfs_tasks

Question regarding list_for_each_entry_safe usage in move_one_task

2013-09-01 Thread Lei Wen
Hi Peter, I find one list API usage that may not be correct in the current fair.c code. In the move_one_task function, it may iterate through the whole cfs_tasks list to get one task to move. But in dequeue_task(), it would delete one task node from the list without lock protection. So that we could see from list

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-26 Thread Lei Wen
On Mon, Aug 26, 2013 at 12:36 PM, Paul Turner wrote: > On Sun, Aug 25, 2013 at 7:56 PM, Lei Wen wrote: >> On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra >> wrote: >>> From: Joonsoo Kim >>> >>> There is no reason to maintain separate variabl

Re: [PATCH 03/10] sched: Clean-up struct sd_lb_stat

2013-08-25 Thread Lei Wen
On Tue, Aug 20, 2013 at 12:01 AM, Peter Zijlstra wrote: > From: Joonsoo Kim > > There is no reason to maintain separate variables for this_group > and busiest_group in sd_lb_stat, except saving some space. > But this structure is always allocated in stack, so this saving > isn't really beneficial

Re: false nr_running check in load balance?

2013-08-18 Thread Lei Wen
Paul, On Tue, Aug 13, 2013 at 5:25 PM, Paul Turner wrote: > On Tue, Aug 13, 2013 at 1:18 AM, Lei Wen wrote: >> Hi Paul, >> >> On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote: >>> On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra >>> wrote: >>>

[PATCH 8/8] sched: document the difference between nr_running and h_nr_running

2013-08-18 Thread Lei Wen
Signed-off-by: Lei Wen --- kernel/sched/sched.h |6 ++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index ef0a7b2..b8f0924 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -248,6 +248,12 @@ struct cfs_bandwidth

[PATCH 0/8] sched: fixes for the nr_running usage

2013-08-18 Thread Lei Wen
Since nr_running and h_nr_running differ in meaning, we should take care with their usage in the scheduler. Lei Wen (8): sched: change load balance number to h_nr_running of run queue sched: change cpu_avg_load_per_task using h_nr_running sched: change

[PATCH 7/8] sched: change active_load_balance_cpu_stop to use h_nr_running

2013-08-18 Thread Lei Wen
ove. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 3656603..4c96124 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5349,7 +5349,7 @@ static

[PATCH 6/8] sched: change find_busiest_queue to h_nr_running

2013-08-18 Thread Lei Wen
Since find_busiest_queue tries to avoid doing load balance for a runqueue which has only one cfs task and whose load is above the calculated imbalance value, we should use h_nr_running of cfs instead of nr_running of rq. Signed-off-by: Lei Wen --- kernel/sched/fair.c |3 ++- 1 files changed, 2

[PATCH 5/8] sched: change update_sg_lb_stats to h_nr_running

2013-08-18 Thread Lei Wen
Since update_sg_lb_stats is used to calculate the sched_group load difference of cfs-type tasks, it should use h_nr_running instead of nr_running of rq. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b

[PATCH 4/8] sched: change pick_next_task_fair to h_nr_running

2013-08-18 Thread Lei Wen
Since pick_next_task_fair only wants to ensure there is some task in the run queue to be picked up, it should use h_nr_running instead of nr_running, since nr_running cannot represent all tasks when groups exist. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1

[PATCH 2/8] sched: change cpu_avg_load_per_task using h_nr_running

2013-08-18 Thread Lei Wen
Since cpu_avg_load_per_task is used only by the cfs scheduler, it should represent the average cfs-type task load in the current run queue. Thus we change it to h_nr_running to better reflect its meaning. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 files changed, 1

[PATCH 3/8] sched: change update_rq_runnable_avg using h_nr_running

2013-08-18 Thread Lei Wen
control mechanism. Thus its sleep time should not be taken into the runnable avg load calculation. Signed-off-by: Lei Wen --- kernel/sched/fair.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e6b99b4..9869d4d 100644 --- a

[PATCH 1/8] sched: change load balance number to h_nr_running of run queue

2013-08-18 Thread Lei Wen
gned-off-by: Lei Wen --- kernel/sched/fair.c |8 +--- 1 files changed, 5 insertions(+), 3 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index f918635..d6153c8 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5096,17 +5096,19 @@ redo: schedsta

Re: false nr_running check in load balance?

2013-08-13 Thread Lei Wen
Hi Paul, On Tue, Aug 13, 2013 at 4:08 PM, Paul Turner wrote: > On Tue, Aug 13, 2013 at 12:38 AM, Peter Zijlstra wrote: >> On Tue, Aug 13, 2013 at 12:45:12PM +0800, Lei Wen wrote: >>> > Not quite right; I think you need busiest->cfs.h_nr_running. >>> > cfs

Re: false nr_running check in load balance?

2013-08-12 Thread Lei Wen
Peter, On Mon, Aug 12, 2013 at 10:43 PM, Peter Zijlstra wrote: > On Tue, Aug 06, 2013 at 09:23:46PM +0800, Lei Wen wrote: >> Hi Paul, >> >> I notice in load_balance function, it would check busiest->nr_running >> to decide whether to perform the real task movement.

false nr_running check in load balance?

2013-08-06 Thread Lei Wen
Hi Paul, I notice in the load_balance function, it would check busiest->nr_running to decide whether to perform the real task movement. But in some cases, I saw the nr_running not matching the tasks in the queue, which seems to make the scheduler do much redundant checking. What I mean is like the

task kworker/u:0 blocked for more than 120 seconds

2013-07-03 Thread Lei Wen
Hi list, I recently found a strange issue over the 3.4 kernel. The scenario is doing the hotplug test over an ARM platform, and when the hotplugged-out cpu1 wants to get back in again, it seems stuck at cpu_stop_cpu_callback. The task backtrace is as below: PID: 21749 TASK: d194b300 CPU: 0 COMMAND: "k

Re: [V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-03 Thread Lei Wen
Hi Peter, Do you have some further suggestion for this patch? :) Thanks, Lei On Tue, Jul 2, 2013 at 8:15 PM, Lei Wen wrote: > Since we could track task in the entity level now, we may want to > investigate tasks' running status by recording the trace info, so that > could make

[V3 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-02 Thread Lei Wen
eople may get confused. Signed-off-by: Lei Wen Cc: Alex Shi Cc: Paul Turner --- kernel/sched/fair.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 2290469..53224d1 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/f

[V3 1/2] sched: add trace events for task and rq usage tracking

2013-07-02 Thread Lei Wen
Since we could track tasks at the entity level now, we may want to investigate tasks' running status by recording the trace info, so that we could do some tuning if needed. Signed-off-by: Lei Wen Cc: Alex Shi Cc: Peter Zijlstra Cc: Kamalesh Babulal --- include/trace/events/sched.h |

[PATCH V3 0/2] sched: add trace event for per-entity tracking

2013-07-02 Thread Lei Wen
ake trace events passing parameter being simple, and only extend its detail in the header file definition. Thanks Peter for pointing out this. V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribut

Re: [V2 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Paul, On Mon, Jul 1, 2013 at 10:07 PM, Paul Turner wrote: > Could you please restate the below? > > On Mon, Jul 1, 2013 at 5:33 AM, Lei Wen wrote: >> Since we are going to calculate cfs_rq's average ratio by >> runnable_load_avg/load.weight > > I don'

Re: [V2 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Hi Peter, On Mon, Jul 1, 2013 at 8:44 PM, Peter Zijlstra wrote: > On Mon, Jul 01, 2013 at 08:33:21PM +0800, Lei Wen wrote: >> Since we could track task in the entity level now, we may want to >> investigate tasks' running status by recording the trace info, so that >>

[V2 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Since we are going to calculate cfs_rq's average ratio by runnable_load_avg/load.weight, if we do not increase load.weight prior to enqueue_entity_load_avg, it may lead to one cfs_rq's avg ratio being higher than 100%. Adjust the sequence, so that all ratios are kept below 100%. Signed-off-b

[PATCH V2 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
oad distribution status in the whole system V2: Abstract sched_cfs_rq_runnable_load and sched_cfs_rq_blocked_load using sched_cfs_rq_load_contri_template. Thanks Kamalesh for this contribution! Lei Wen (2): sched: add trace events for task and rq usage tracking sched: update cfs_rq w

[V2 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Since we could track tasks at the entity level now, we may want to investigate tasks' running status by recording the trace info, so that we could do some tuning if needed. Signed-off-by: Lei Wen --- include/trace/events/sched.h | 57 ++ kernel/

Re: [PATCH 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Hi Kamalesh, On Mon, Jul 1, 2013 at 5:43 PM, Kamalesh Babulal wrote: > * Lei Wen [2013-07-01 15:10:32]: > >> Since we could track task in the entity level now, we may want to >> investigate tasks' running status by recording the trace info, so that >> cou

Re: [PATCH 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
Alex, On Mon, Jul 1, 2013 at 4:06 PM, Alex Shi wrote: > On 07/01/2013 03:10 PM, Lei Wen wrote: >> Thanks for the per-entity tracking feature, we could know the details of >> each task by its help. >> This patch add its trace support, so that we could quickly know the system

[PATCH 2/2] sched: update cfs_rq weight earlier in enqueue_entity

2013-07-01 Thread Lei Wen
Since we are going to calculate cfs_rq's average ratio by runnable_load_avg/load.weight, if we do not increase load.weight prior to enqueue_entity_load_avg, it may lead to one cfs_rq's avg ratio being higher than 100%. Adjust the sequence, so that all ratios are kept below 100%. Signed-off-b

[PATCH 0/2] sched: add trace event for per-entity tracking

2013-07-01 Thread Lei Wen
o = cfs_rq->runnable_load_avg/cfs_rq->load.weight Lei Wen (2): sched: add trace events for task and rq usage tracking sched: update cfs_rq weight earlier in enqueue_entity include/trace/events/sched.h | 73 ++ kernel/sched/fair.c | 31 +++

[PATCH 1/2] sched: add trace events for task and rq usage tracking

2013-07-01 Thread Lei Wen
Since we could track tasks at the entity level now, we may want to investigate tasks' running status by recording the trace info, so that we could do some tuning if needed. Signed-off-by: Lei Wen --- include/trace/events/sched.h | 73 ++ kernel/

Re: [PATCH] sched: add heuristic logic to pick idle peers

2013-06-23 Thread Lei Wen
Hi Michael, On Mon, Jun 17, 2013 at 2:44 PM, Michael Wang wrote: > On 06/17/2013 01:08 PM, Lei Wen wrote: >> Hi Michael, >> >> On Mon, Jun 17, 2013 at 11:27 AM, Michael Wang >> wrote: >>> Hi, Lei >>> >>> On 06/17/2013 10:21 AM, Lei Wen w

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-21 Thread Lei Wen
On Fri, Jun 21, 2013 at 7:09 PM, Alex Shi wrote: > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index c61a614..9640c66 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -5856,7 +5856,8 @@ static void switched_from_fair(struct rq *rq, struct task_

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-21 Thread Lei Wen
Alex, On Fri, Jun 21, 2013 at 4:56 PM, Alex Shi wrote: > On 06/21/2013 10:50 AM, Lei Wen wrote: >> I see your point... I made the mistake that update the wrong patch... >> Please help check this one. >> >> commit 5fc3d5c74f8359ef382d9a20ffe657ffc237c109 >> Autho

Re: [patch v8 3/9] sched: set initial value of runnable avg for new forked task

2013-06-20 Thread Lei Wen
Morten, On Thu, Jun 20, 2013 at 6:23 PM, Morten Rasmussen wrote: > On Sat, Jun 15, 2013 at 01:09:12PM +0100, Lei Wen wrote: >> On Fri, Jun 14, 2013 at 9:59 PM, Alex Shi wrote: >> > On 06/14/2013 06:02 PM, Lei Wen wrote: >> >

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-20 Thread Lei Wen
Alex, On Fri, Jun 21, 2013 at 10:39 AM, Alex Shi wrote: > On 06/21/2013 10:30 AM, Lei Wen wrote: >> Hi Alex, >> >> On Thu, Jun 20, 2013 at 10:59 PM, Alex Shi wrote: >>> On 06/20/2013 10:46 AM, Lei Wen wrote: >>>> >>>> >

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-20 Thread Lei Wen
Hi Alex, On Thu, Jun 20, 2013 at 10:59 PM, Alex Shi wrote: > On 06/20/2013 10:46 AM, Lei Wen wrote: >> >> >> But here I have a question, there is another usage of >> __synchronzie_entity_decay >> in current kernel, in the switched_from_fair function. >&g

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-19 Thread Lei Wen
On Thu, Jun 20, 2013 at 9:43 AM, Lei Wen wrote: > Hi Alex, > > On Mon, Jun 17, 2013 at 11:41 PM, Alex Shi wrote: >> On 06/17/2013 07:51 PM, Paul Turner wrote: >>> Can you add something like: >>> >>> + /* >>> +

Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

2013-06-19 Thread Lei Wen
Hi Alex, On Mon, Jun 17, 2013 at 11:41 PM, Alex Shi wrote: > On 06/17/2013 07:51 PM, Paul Turner wrote: >> Can you add something like: >> >> + /* >> +* Task re-woke on same cpu (or else >> migrate_task_rq_fair() >> +* would have made count negative);

Re: Question regarding put_prev_task in preempted condition

2013-06-18 Thread Lei Wen
Hi Peter, On Tue, Jun 18, 2013 at 5:55 PM, Peter Zijlstra wrote: > On Sun, Jun 09, 2013 at 11:59:36PM +0800, Lei Wen wrote: >> Hi Peter, >> >> While I am checking the preempt related code, I find a interesting part. >> That is when preempt_schedule is called, for

[PATCH v4 1/4] sched: reduce calculation effort in fix_small_imbalance

2013-06-18 Thread Lei Wen
Actually all the items below could be replaced by scaled_busy_load_per_task (sds->busiest_load_per_task * SCHED_POWER_SCALE) /sds->busiest->sgp->power; Signed-off-by: Lei Wen --- kernel/sched/fair.c | 19 --- 1 file changed, 8 insertions(+),

[PATCH v4 2/4] sched: scale the busy and this queue's per-task load before compare

2013-06-18 Thread Lei Wen
be cpu power gain in move the load. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 55 +++ 1 file changed, 38 insertions(+), 17 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 28052fa..fd9cbee 100644 --- a/kernel/sched/

[PATCH v4 4/4] sched: adjust fix_small_imbalance moving task condition

2013-06-18 Thread Lei Wen
>, to stop this loop. Signed-off-by: Lei Wen --- kernel/sched/fair.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index c478022..3be7844 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -4717,7 +4717,7 @@ void fix

[PATCH v4 3/4] sched: scale cpu load for judgment of group imbalance

2013-06-18 Thread Lei Wen
e: if ((max_cpu_load - min_cpu_load) >= avg_load_per_task && (max_nr_running - min_nr_running) > 1) It makes (512-128)>=((512+128)/4), and leads to an imbalance conclusion... Scale the load, to avoid such a case. Signed-off-by: Lei Wen --- kernel/sched/fair.c |

[PATCH v4 0/4] small fix for scale usage

2013-06-18 Thread Lei Wen
sds->avg_load when there is not balanced group inside the domain V4: env->imbalance should be applied with not scaled value, fix according to it. Fix one ping-pong adjustment for special case. Lei Wen (4): sched: reduce calculation effort in fix_small_imbalance sched: scale the busy

[PATCH] sched: fix underflow when doing fix_small_imbalance

2013-06-18 Thread Lei Wen
busy_load_per_task >= (scaled_busy_load_per_task * imbn) This would make load balance happen, even when the busiest group's load is less than the local group's load... Signed-off-by: Lei Wen --- kernel/sched/fair.c |6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fai

[PATCH] sched: fix load_above_capacity underflow

2013-06-17 Thread Lei Wen
Apparently we don't want to see sds->busiest_nr_running be smaller than sds->busiest_group_capacity, while our load_above_capacity is an "unsigned long" type. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 13 + 1 file changed, 9 insertions(+), 4 deletions(-)

[PATCH v3 3/3] sched: scale cpu load for judgment of group imbalance

2013-06-17 Thread Lei Wen
e: if ((max_cpu_load - min_cpu_load) >= avg_load_per_task && (max_nr_running - min_nr_running) > 1) It makes (512-128)>=((512+128)/4), and leads to an imbalance conclusion... Scale the load, to avoid such a case. Signed-off-by: Lei Wen --- kernel/sched/fair.c |

[PATCH v3 1/3] sched: reduce calculation effort in fix_small_imbalance

2013-06-17 Thread Lei Wen
Actually all the items below could be replaced by scaled_busy_load_per_task (sds->busiest_load_per_task * SCHED_POWER_SCALE) /sds->busiest->sgp->power; Signed-off-by: Lei Wen --- kernel/sched/fair.c | 19 --- 1 file changed, 8 insertions(+),

[PATCH v3 0/3] small fix for scale usage

2013-06-17 Thread Lei Wen
sds->avg_load when there is not balanced group inside the domain Lei Wen (3): sched: reduce calculation effort in fix_small_imbalance sched: scale the busy and this queue's per-task load before compare sched: scale cpu load for judgment of group imbalance kernel/sched/fa

[PATCH v3 2/3] sched: scale the busy and this queue's per-task load before compare

2013-06-17 Thread Lei Wen
be cpu power gain in move the load. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 38 +- 1 file changed, 25 insertions(+), 13 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 28052fa..6173095 100644 --- a/kernel/sched/fair.c +++ b/

[PATCH 3/3] sched: scale cpu load for judgment of group imbalance

2013-06-17 Thread Lei Wen
e: if ((max_cpu_load - min_cpu_load) >= avg_load_per_task && (max_nr_running - min_nr_running) > 1) It makes (512-128)>=((512+128)/4), and leads to an imbalance conclusion... Scale the load, to avoid such a case. Signed-off-by: Lei Wen --- kernel/sched/fair.c |

[PATCH 1/3] sched: reduce calculation effort in fix_small_imbalance

2013-06-17 Thread Lei Wen
Actually all the items below could be replaced by scaled_busy_load_per_task (sds->busiest_load_per_task * SCHED_POWER_SCALE) /sds->busiest->sgp->power; Signed-off-by: Lei Wen --- kernel/sched/fair.c | 19 --- 1 file changed, 8 insertions(+),

[PATCH 2/3] sched: scale the busy and this queue's per-task load before compare

2013-06-17 Thread Lei Wen
be cpu power gain in move the load. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 28 +++- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 28052fa..77a149c 100644 --- a/kernel/sched/fair.c +++ b/kernel

[PATCH v2 0/3] small fix for scale usage

2013-06-17 Thread Lei Wen
Here are three patches which correct scale usage in both fix_small_imbalance and update_sg_lb_stats, and give a comment on when fix_small_imbalance would cause a load change. V2: fix scale usage for update_sg_lb_stats Lei Wen (3): sched: reduce calculation effort in fix_small_imbalance

Re: [PATCH] sched: add heuristic logic to pick idle peers

2013-06-17 Thread Lei Wen
Hi Michael, On Mon, Jun 17, 2013 at 2:44 PM, Michael Wang wrote: > On 06/17/2013 01:08 PM, Lei Wen wrote: >> Hi Michael, >> >> On Mon, Jun 17, 2013 at 11:27 AM, Michael Wang >> wrote: >>> Hi, Lei >>> >>> On 06/17/2013 10:21 AM, Lei Wen w

Re: [patch v8 3/9] sched: set initial value of runnable avg for new forked task

2013-06-17 Thread Lei Wen
Hi Peter, On Mon, Jun 17, 2013 at 5:20 PM, Peter Zijlstra wrote: > On Fri, Jun 14, 2013 at 06:02:45PM +0800, Lei Wen wrote: >> Hi Alex, >> >> On Fri, Jun 7, 2013 at 3:20 PM, Alex Shi wrote: >> > We need initialize the se.avg.{decay_count, load_avg_contr

Re: [PATCH] sched: add heuristic logic to pick idle peers

2013-06-16 Thread Lei Wen
Hi Michael, On Mon, Jun 17, 2013 at 11:27 AM, Michael Wang wrote: > Hi, Lei > > On 06/17/2013 10:21 AM, Lei Wen wrote: >> nr_busy_cpus in sched_group_power structure cannot present the purpose >> for judging below statement: >> "this cpu's scheduler gr

[PATCH] sched: add heuristic logic to pick idle peers

2013-06-16 Thread Lei Wen
ginal purpose to add this logic still looks good. So we move this kind of logic to find_new_ilb, so that we could pick out a peer from our shared resource domain whenever possible. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 28 ++-- 1 file changed, 22 insertions(+), 6

Re: [patch v8 3/9] sched: set initial value of runnable avg for new forked task

2013-06-15 Thread Lei Wen
On Fri, Jun 14, 2013 at 9:59 PM, Alex Shi wrote: > On 06/14/2013 06:02 PM, Lei Wen wrote: >>> > enqueue_entity >>> > enqueue_entity_load_avg >>> > >>> > and make forking balancing imbalance since incorrect load_avg_contrib

Re: [patch v8 3/9] sched: set initial value of runnable avg for new forked task

2013-06-14 Thread Lei Wen
Hi Alex, On Fri, Jun 7, 2013 at 3:20 PM, Alex Shi wrote: > We need initialize the se.avg.{decay_count, load_avg_contrib} for a > new forked task. > Otherwise random values of above variables cause mess when do new task > enqueue: > enqueue_task_fair > enqueue_entity > enqu

[PATCH 1/2] sched: reduce calculation effort in fix_small_imbalance

2013-06-13 Thread Lei Wen
Actually all the items below could be replaced by scaled_busy_load_per_task (sds->busiest_load_per_task * SCHED_POWER_SCALE) /sds->busiest->sgp->power; Signed-off-by: Lei Wen --- kernel/sched/fair.c | 19 --- 1 file changed, 8 insertions(+),

[PATCH 0/2] small fix for fix_small_imbalance

2013-06-13 Thread Lei Wen
Here are two patches which correct the scale usage in fix_small_imbalance, and give a comment on when fix_small_imbalance would cause a load change. Lei Wen (2): sched: reduce calculation effort in fix_small_imbalance sched: scale the busy and this queue's per-task load before co

[PATCH 2/2] sched: scale the busy and this queue's per-task load before compare

2013-06-13 Thread Lei Wen
be cpu power gain in move the load. Signed-off-by: Lei Wen --- kernel/sched/fair.c | 28 +++- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 28052fa..77a149c 100644 --- a/kernel/sched/fair.c +++ b/kernel

Question regarding put_prev_task in preempted condition

2013-06-09 Thread Lei Wen
Hi Peter, While I am checking the preempt related code, I find an interesting part. That is, when preempt_schedule is called, since its preempt_count has PREEMPT_ACTIVE added, in __schedule() it could not be dequeued from the rq by deactivate_task. Thus in put_prev_task, which is called a little later

Re: workqueue panic in 3.4 kernel

2013-03-11 Thread Lei Wen
On Tue, Mar 12, 2013 at 2:13 PM, Tejun Heo wrote: > On Tue, Mar 12, 2013 at 02:01:16PM +0800, Lei Wen wrote: >> I see... >> How about only check those workqueue structure not on stack? >> For current onstack usage is rare, and should be easier to check with. > > No, k

Re: workqueue panic in 3.4 kernel

2013-03-11 Thread Lei Wen
On Tue, Mar 12, 2013 at 1:40 PM, Tejun Heo wrote: > On Tue, Mar 12, 2013 at 01:34:56PM +0800, Lei Wen wrote: >> > Memory areas aren't always zero on allocation. >> >> Shouldn't work structure be allocated with kzalloc? > > It's not required to.

Re: workqueue panic in 3.4 kernel

2013-03-11 Thread Lei Wen
On Tue, Mar 12, 2013 at 1:24 PM, Tejun Heo wrote: > On Tue, Mar 12, 2013 at 01:18:01PM +0800, Lei Wen wrote: >> > You're initializing random piece of memory which may contain any >> > garbage and triggering BUG if some bit is set on it. No, you can't do >> &

Re: workqueue panic in 3.4 kernel

2013-03-11 Thread Lei Wen
Tejun, On Tue, Mar 12, 2013 at 1:12 PM, Tejun Heo wrote: > Hello, > > On Tue, Mar 12, 2013 at 01:08:15PM +0800, Lei Wen wrote: >> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h >> index 8afab27..425d5a2 100644 >> --- a/include/linux/workque

Re: workqueue panic in 3.4 kernel

2013-03-07 Thread Lei Wen
Tejun, On Thu, Mar 7, 2013 at 9:15 AM, Lei Wen wrote: > Hi Tejun, > > On Thu, Mar 7, 2013 at 3:14 AM, Tejun Heo wrote: >> Hello, Lei. >> >> On Wed, Mar 06, 2013 at 10:39:15PM +0800, Lei Wen wrote: >>> We find

Re: workqueue panic in 3.4 kernel

2013-03-06 Thread Lei Wen
Hi Tejun, On Thu, Mar 7, 2013 at 3:14 AM, Tejun Heo wrote: > Hello, Lei. > > On Wed, Mar 06, 2013 at 10:39:15PM +0800, Lei Wen wrote: >> We find a race condition as below: >> CPU0 CPU1 >> timer inter

Re: workqueue panic in 3.4 kernel

2013-03-06 Thread Lei Wen
Hi Tejun On Wed, Mar 6, 2013 at 12:32 AM, Tejun Heo wrote: > Hello, > > On Tue, Mar 05, 2013 at 03:31:45PM +0800, Lei Wen wrote: >> With checking memory, we find work->data becomes 0x300, when it try >> to call get_work_cwq > > Why would that become 0x300? Who

workqueue panic in 3.4 kernel

2013-03-04 Thread Lei Wen
Hi Tejun, We met one workqueue-related panic issue on the 3.4.5 Linux kernel. Panic log as: [153587.035369] Unable to handle kernel NULL pointer dereference at virtual address 0004 [153587.043731] pgd = e1e74000 [153587.046691] [0004] *pgd= [153587.050567] Internal error: Oops