RE: [PATCH v5 06/13] lockdep: Implement crossrelease feature

2017-03-01 Thread byungchul.park
> -Original Message- > From: Matthew Wilcox [mailto:wi...@infradead.org] > Sent: Thursday, March 02, 2017 1:20 PM > To: Peter Zijlstra > Cc: Byungchul Park; mi...@kernel.org; t...@linutronix.de; > wal...@google.com; boqun.f...@gmail.com; kir...@shutemov.name; linux- > ker...@vger.kernel.org

RE: [PATCH v2 2/2] sched/deadline: Change the way to replenish runtime for sleep tasks

2017-02-22 Thread byungchul.park
> -Original Message- > From: Byungchul Park [mailto:byungchul.p...@lge.com] > Sent: Thursday, February 23, 2017 12:11 PM > To: pet...@infradead.org; mi...@kernel.org > Cc: linux-kernel@vger.kernel.org; juri.le...@gmail.com; > rost...@goodmis.org; kernel-t...@lge.com > Subject: [PATCH v2 2/2

RE: [PATCH v3 2/2] sched/rt: Remove unnecessary condition in push_rt_task()

2017-02-15 Thread byungchul.park
> -Original Message- > From: Steven Rostedt [mailto:rost...@goodmis.org] > Sent: Thursday, February 16, 2017 11:46 AM > To: Byungchul Park > Cc: pet...@infradead.org; mi...@kernel.org; linux-kernel@vger.kernel.org; > juri.le...@gmail.com; kernel-t...@lge.com > Subject: Re: [PATCH v3 2/2] sc

RE: [PATCH v4 15/15] lockdep: Crossrelease feature documentation

2017-01-18 Thread byungchul.park
> -Original Message- > From: byungchul.park [mailto:byungchul.p...@lge.com] > Sent: Wednesday, January 18, 2017 9:15 PM > To: 'Peter Zijlstra' > Cc: 'Boqun Feng'; 'mi...@kernel.org'; 't...@linutronix.de'; > 'wal...@google

RE: [PATCH v4 15/15] lockdep: Crossrelease feature documentation

2017-01-18 Thread byungchul.park
> -Original Message- > From: Peter Zijlstra [mailto:pet...@infradead.org] > Sent: Wednesday, January 18, 2017 9:08 PM > To: Byungchul Park > Cc: Boqun Feng; mi...@kernel.org; t...@linutronix.de; wal...@google.com; > kir...@shutemov.name; linux-kernel@vger.kernel.org; linux...@kvack.org; > i

RE: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace

2016-06-20 Thread byungchul.park
> -Original Message- > From: xinhui [mailto:xinhui@linux.vnet.ibm.com] > Sent: Monday, June 20, 2016 4:29 PM > To: Byungchul Park; pet...@infradead.org; mi...@kernel.org > Cc: linux-kernel@vger.kernel.org; npig...@suse.de; wal...@google.com; > a...@suse.de; t...@inhelltoy.tec.linutronix

RE: [RFC][PATCH v4 1/2] printk: Make printk() completely async

2016-03-19 Thread byungchul.park
> [..] > > diff --git a/kernel/locking/spinlock_debug.c > b/kernel/locking/spinlock_debug.c > > index fd24588..30559c6 100644 > > --- a/kernel/locking/spinlock_debug.c > > +++ b/kernel/locking/spinlock_debug.c > > @@ -138,14 +138,25 @@ static void __spin_lock_debug(raw_spinlock_t *lock) > > { > >

RE: [PATCH v4] lib/spinlock_debug.c: prevent a recursive cycle in the debug code

2016-01-27 Thread byungchul.park
> From: Sergey Senozhatsky [mailto:sergey.senozhatsky.w...@gmail.com] > Sent: Thursday, January 28, 2016 11:38 AM > To: Byungchul Park > Cc: a...@linux-foundation.org; mi...@kernel.org; linux- > ker...@vger.kernel.org; akinobu.m...@gmail.com; j...@suse.cz; > torva...@linux-foundation.org; pe...@hur

[PATCH] sched: modify the comment about lock assumption in migrate_task_rq_fair()

2015-11-17 Thread byungchul.park
From: Byungchul Park The comment describing migrate_task_rq_fair() says that the caller should hold p->pi_lock. But in some cases, the caller can hold task_rq(p)->lock instead of p->pi_lock. So the comment is broken and this patch fixes it. Signed-off-by: Byungchul Park --- kernel/sched/fa

[PATCH] sched: prevent getting too much vruntime

2015-11-11 Thread byungchul.park
From: Byungchul Park Especially in the case below, se->vruntime can be too large and scheduling cannot work properly. 1. set se->vruntime to "cfs_rq->min_vruntime - sysctl_sched_latency" in place_entity() when detaching the se from cfs_rq. 2. do a normalization by "se->vruntime -= cfs_rq->min
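As a rough illustration of the arithmetic described above: both steps subtract against min_vruntime, and because vruntime is an unsigned 64-bit quantity the second subtraction can wrap to an enormous value. The snippet below is a standalone userspace model with assumed constants, not the kernel code itself.

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	uint64_t min_vruntime  = 200000000ULL;	/* assumed cfs_rq->min_vruntime (ns) */
	uint64_t sched_latency =  24000000ULL;	/* assumed sysctl_sched_latency (ns) */

	/* step 1: on detach, place the entity just below min_vruntime */
	uint64_t vruntime = min_vruntime - sched_latency;

	/* step 2: a second normalization subtracts min_vruntime again ... */
	vruntime -= min_vruntime;

	/* ... and the unsigned subtraction wraps to an enormous value */
	printf("se->vruntime = %" PRIu64 "\n", vruntime);
	return 0;
}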

[PATCH v5 0/2] sched: consider missed ticks when updating cpu load

2015-11-09 Thread byungchul.park
From: Byungchul Park change from v4 to v5 - fix comments and commit message - take new_load into account in update_cpu_load_nohz() and __update_cpu_load() because it's non-zero in NOHZ_FULL change from v3 to v4 - focus the problem on full NOHZ change from v2 to v3 - add a patch which makes __u

[PATCH v5 1/2] sched: make __update_cpu_load() handle NOHZ_FULL tickless

2015-11-09 Thread byungchul.park
From: Byungchul Park __update_cpu_load() assumes that a cpu is idle if the interval between the cpu's ticks is more than 1/HZ. However, in the NOHZ_FULL case the cpu can be non-idle even though the interval is more than 1/HZ. Thus in the NOHZ_FULL tickless case, the current way to update cpu loa
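For intuition only: a toy model of why the idle and the active (full NOHZ) tickless cases have to be handled differently. The (2^idx - 1)/2^idx per-tick decay mirrors the cpu_load[] indexes, but the clamp at the end is just an illustration of the problem, not the formula the patch uses.

#include <stdio.h>

/* decay "load" over "missed" ticks with factor (2^idx - 1) / 2^idx per tick */
static unsigned long decay_missed(unsigned long load, int missed, int idx)
{
	while (missed-- > 0)
		load -= load >> idx;
	return load;
}

int main(void)
{
	unsigned long old_load = 1024, cur_load = 1024;
	int missed = 10, idx = 2;

	/* idle tickless: the interval really contributed no load, so plain
	 * decay toward zero is the right answer */
	unsigned long idle = decay_missed(old_load, missed, idx);

	/* active tickless (full NOHZ): the cpu kept running the whole time,
	 * so letting the value decay below the still-present load is wrong */
	unsigned long active = decay_missed(old_load, missed, idx);
	if (active < cur_load)
		active = cur_load;

	printf("idle tickless: %lu, active tickless: %lu\n", idle, active);
	return 0;
}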

[PATCH v5 2/2] sched: make update_cpu_load_nohz() consider missed ticks in NOHZ_FULL

2015-11-09 Thread byungchul.park
From: Byungchul Park Usually the tick can be stopped for an idle cpu in NOHZ. However, in NOHZ_FULL a non-idle cpu's tick can also be stopped. update_cpu_load_nohz(), though, does not consider the case where a non-idle cpu's tick has been stopped at all. This patch makes update_cpu_load_nohz() know if

[PATCH v4 2/3] sched/fair: split the remove_entity_load_avg() into two functions

2015-10-23 Thread byungchul.park
From: Byungchul Park remove_entity_load_avg() consists of two parts. The first part updates se's last_update_time and the second part removes se's load from the cfs_rq. It can become necessary to use only the first part or only the second part, for the purpose of optimization. So this patch sp

[PATCH v4 1/3] sched/fair: make it possible to account fair load avg consistently

2015-10-23 Thread byungchul.park
From: Byungchul Park Current code can account the fair class load average for the time the task was absent from the fair class thanks to ATTACH_AGE_LOAD. However, it doesn't work in the cases where either a migration or a group change happened in the other sched classes. This patch introduces a more genera

[PATCH v4 3/3] sched: optimize migration by forcing rmb() and updating to be called once

2015-10-23 Thread byungchul.park
From: Byungchul Park the "sched/fair: make it possible to account fair load avg consistently" patch makes rmb() and updating last_update_time called twice when doing a migration, which can be negative at performance. actually we can optimize it by omiting the updating part of remove_entity_load_a

[PATCH v4 0/3] sched: account fair load avg consistently

2015-10-23 Thread byungchul.park
From: Byungchul Park * change from v3 to v4 - optimize - force rmb() to be used once when doing a migration - optimize - do not add an additional variable to task_struct - change the order of migrate_task_rq() and __set_task_cpu() * change from v2 to v3 - consider optimization in the case of migration - spli

[PATCH v3 2/3] sched: factor out the code getting cfs_rq's last_update_time

2015-10-15 Thread byungchul.park
From: Byungchul Park In the case that rq->lock may not be held, care must be taken to get a cfs_rq's last_update_time instead of just reading the variable. Since this can happen at several places in the code in the future, this patch factors it out to a helper function. Signed-off-by: Byungchul Park
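A sketch of the kind of helper the preview refers to, written as a standalone userspace model: read a writer-maintained copy, then the field, and retry until they agree, so a torn 64-bit read without rq->lock can never be returned. The struct and function names here are illustrative, not the kernel's.

#include <stdio.h>
#include <stdint.h>
#include <stdatomic.h>

/* toy model of the fields involved; not the kernel's struct cfs_rq */
struct cfs_rq_model {
	uint64_t last_update_time;
	uint64_t last_update_time_copy;	/* writer updates this after the field */
};

/*
 * Factored-out reader: retry until both reads agree, so a torn 64-bit
 * read (e.g. on 32-bit without rq->lock held) is never returned.
 */
static uint64_t read_last_update_time(struct cfs_rq_model *cfs_rq)
{
	uint64_t t, copy;

	do {
		copy = cfs_rq->last_update_time_copy;
		atomic_thread_fence(memory_order_acquire);
		t = cfs_rq->last_update_time;
	} while (t != copy);

	return t;
}

int main(void)
{
	struct cfs_rq_model cfs_rq = { 123456789ULL, 123456789ULL };

	printf("last_update_time = %llu\n",
	       (unsigned long long)read_last_update_time(&cfs_rq));
	return 0;
}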

[PATCH v3 1/3] sched: introduce a new migration flag to task_struct

2015-10-15 Thread byungchul.park
From: Byungchul Park This patch removes a weird coupling between se->avg.last_update_time and the condition checking for migration, and introduces a new migration flag. Now, the scheduler can use the flag instead of se->avg.last_update_time to check whether a migration has already happened or not. Signed-off-by
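A minimal standalone model of the decoupling the preview describes: the old condition infers a migration from a zeroed se->avg.last_update_time, while the proposed approach reads an explicit flag. The flag name below is hypothetical, chosen only for the illustration.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* toy model: the old code infers "this task migrated" from a zeroed timestamp */
struct sched_avg_model { uint64_t last_update_time; };

struct task_model {
	struct sched_avg_model avg;
	bool migrated;		/* hypothetical explicit flag */
};

int main(void)
{
	struct task_model p = { .avg = { .last_update_time = 0 }, .migrated = true };

	/* before: a zeroed timestamp doubles as the migration condition */
	if (!p.avg.last_update_time)
		printf("old check: treat enqueue as post-migration\n");

	/* after: the explicit flag carries that meaning, decoupled from the
	 * timestamp, and is cleared once it has been consumed */
	if (p.migrated) {
		printf("new check: treat enqueue as post-migration\n");
		p.migrated = false;
	}
	return 0;
}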

[PATCH v3 0/3] sched: account fair load avg consistently

2015-10-15 Thread byungchul.park
From: Byungchul Park * change from v2 to v3 - consider optimization in the case of migration - split patches to 3 to be reviewed easily * change from v1 to v2 - make set_task_rq() do that role instead of migration callback - make set_task_rq() do that role instead of move group callback - remove

[PATCH v3 3/3] sched: make it possible to account fair class load avg consistently

2015-10-15 Thread byungchul.park
From: Byungchul Park Current code can account the fair class load average for the time the task was absent from the fair class thanks to ATTACH_AGE_LOAD. However, it doesn't work in the cases where either a migration or a group change happened in the other sched classes. This patch introduces a more genera

[PATCH v4 1/2] sched: make __update_cpu_load() handle active tickless case

2015-10-14 Thread byungchul.park
From: Byungchul Park There are some cases where the distance between ticks is more than one tick while the cpu is not idle, e.g. full NOHZ. However __update_cpu_load() assumes it is the idle tickless case if the distance between ticks is more than 1, even though it can be the active tickless case.

[PATCH v4 2/2] sched: consider missed ticks in full NOHZ

2015-10-14 Thread byungchul.park
From: Byungchul Park Even though the cpu is non-idle when its tick is stopped in full NOHZ, the current "update_cpu_load" code unconditionally treats the cpu as if it had been idle. That is wrong. This patch makes the "update_cpu_load" code know whether the calling path comes from full NOHZ or idle NOHZ. Sig

[PATCH v4 0/2] sched: consider missed ticks when updating cpu load

2015-10-14 Thread byungchul.park
From: Byungchul Park change from v3 to v4 - focus the problem on full NOHZ change from v2 to v3 - add a patch which makes __update_cpu_load() handle active tickless change from v1 to v2 - add some additional commit message (the logic is exactly the same) I will try to fix other stuff caused by full NO

[PATCH v2 2/2] sched: make it possible to account fair class load avg consistently

2015-10-14 Thread byungchul.park
From: Byungchul Park Current code can account the fair class load avg for the time the task was on another class, e.g. rt or dl, thanks to ATTACH_AGE_LOAD. However, it does not work in the case where either a migration or a group change happened in the other classes. This patch introduces a more general solution s

[PATCH v2 1/2] sched: make each sched class handle its rq assignment in their own class

2015-10-14 Thread byungchul.park
From: Byungchul Park set_task_rq(), which is a commonly used function regardless of sched class, currently assigns both the cfs_rq for the fair class and the rt_rq for the rt class to a task. But it would be better for a class-related operation to be done by its own class. Additionally, this patch
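A standalone sketch of the direction described above, with purely illustrative names: the common set_task_rq() helper only dispatches, and each class assigns its own per-class runqueue through its own callback.

#include <stdio.h>

struct task_model;

/* toy per-class ops table; names are illustrative, not the kernel's */
struct class_ops {
	void (*set_rq)(struct task_model *p, int cpu);
};

struct task_model {
	int cpu;
	const struct class_ops *class;
};

static void fair_set_rq(struct task_model *p, int cpu)
{
	(void)p;
	printf("fair class: point the task at the cfs_rq of cpu %d\n", cpu);
}

static void rt_set_rq(struct task_model *p, int cpu)
{
	(void)p;
	printf("rt class: point the task at the rt_rq of cpu %d\n", cpu);
}

static const struct class_ops fair_class = { fair_set_rq };
static const struct class_ops rt_class   = { rt_set_rq };

/* the common helper only dispatches; it no longer touches every class's rq */
static void set_task_rq(struct task_model *p, int cpu)
{
	p->cpu = cpu;
	p->class->set_rq(p, cpu);
}

int main(void)
{
	struct task_model p = { .cpu = 0, .class = &fair_class };

	set_task_rq(&p, 3);
	p.class = &rt_class;
	set_task_rq(&p, 3);
	return 0;
}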

[PATCH v2 0/2] sched: account fair load avg consistently

2015-10-14 Thread byungchul.park
From: Byungchul Park * change from v1 to v2 - make set_task_rq() do that role instead of migration callback - make set_task_rq() do that role instead of move group callback - remove the dependency between last_update_time and the check for migration Byungchul Park (2): sched: make each sched class

[PATCH 0/2] sched: make fair class handle rq/group changes by outside

2015-10-05 Thread byungchul.park
From: Byungchul Park The current fair sched class handles neither a cgroup change nor an rq migration that occurred within another sched class, e.g. the rt class. This patch makes it able to do that. Byungchul Park (2): sched: make fair sched class can handle the cgroup change by other class sched: make fair sche

[PATCH 2/2] sched: make fair sched class can handle migration by other class

2015-10-05 Thread byungchul.park
From: Byungchul Park The original fair sched class can handle a migration within its class with migrate_task_rq_fair(), but there is no way to know about it if the migration happened outside. This patch makes the fair sched class able to handle a migration which happened even in another sched class. And care

[PATCH 1/2] sched: make fair sched class can handle the cgroup change by other class

2015-10-05 Thread byungchul.park
From: Byungchul Park The original fair sched class can handle a cgroup change that occurred within its class with task_move_group_fair(), but there is no way to know about it if the change happened outside. This patch makes the fair sched class able to handle a change of cgroup which happened even in another sched

[PATCH v3 1/2] sched: make __update_cpu_load() handle active tickless case

2015-10-02 Thread byungchul.park
From: Byungchul Park There are some cases where the distance between ticks is more than one tick while the cpu is not idle, e.g. - full NOHZ - tracing - long lasting callbacks - being scheduled away when running in a VM However __update_cpu_load() assumes it is the idle tickless case if the di

[PATCH v3 2/2] sched: consider missed ticks when updating global cpu load

2015-10-02 Thread byungchul.park
From: Byungchul Park In hrtimer_interrupt(), the first tick_program_event() can fail because the next timer could already be expired due to (see the comment in hrtimer_interrupt()): - tracing - long lasting callbacks - being scheduled away when running in a VM In the case that the first ti

[PATCH v3 0/2] sched: consider missed ticks when updating cpu load

2015-10-02 Thread byungchul.park
From: Byungchul Park change from v2 to v3 - add a patch which makes __update_cpu_load() handle active tickless change from v1 to v2 - add some additional commit message (the logic is exactly the same) Byungchul Park (2): sched: make __update_cpu_load() handle active tickless case sched: consider mis

[RESEND PATCH] sched: consider missed ticks when updating global cpu load

2015-09-25 Thread byungchul.park
From: Byungchul Park Hello, I have already sent this patch about 1 month ago. (see https://lkml.org/lkml/2015/8/13/160) Now, I am resending the same patch with some additional commit message. Thank you, byungchul ->8- >From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 0

RE: [PATCH] sched: fix lose fair sleeper bonus in switch_to_fair()

2015-09-08 Thread byungchul.park
> -Original Message- > From: Wanpeng Li [mailto:wanpeng...@hotmail.com] > Sent: Tuesday, September 08, 2015 5:46 PM > To: Byungchul Park > Cc: Peter Zijlstra; Ingo Molnar; linux-kernel@vger.kernel.org; > yuyang...@intel.com > Subject: Re: [PATCH] sched: fix lose fair sleeper bonus in swit

RE: [PATCH] sched: fix lose fair sleeper bonus in switch_to_fair()

2015-09-08 Thread byungchul.park
> -Original Message- > From: Wanpeng Li [mailto:wanpeng...@hotmail.com] > Sent: Tuesday, September 08, 2015 5:39 PM > To: Byungchul Park > Cc: Peter Zijlstra; Ingo Molnar; linux-kernel@vger.kernel.org; > yuyang...@intel.com > Subject: Re: [PATCH] sched: fix lose fair sleeper bonus in swit