> -Original Message-
> From: Matthew Wilcox [mailto:wi...@infradead.org]
> Sent: Thursday, March 02, 2017 1:20 PM
> To: Peter Zijlstra
> Cc: Byungchul Park; mi...@kernel.org; t...@linutronix.de;
> wal...@google.com; boqun.f...@gmail.com; kir...@shutemov.name; linux-
> ker...@vger.kernel.org
> -Original Message-
> From: Byungchul Park [mailto:byungchul.p...@lge.com]
> Sent: Thursday, February 23, 2017 12:11 PM
> To: pet...@infradead.org; mi...@kernel.org
> Cc: linux-kernel@vger.kernel.org; juri.le...@gmail.com;
> rost...@goodmis.org; kernel-t...@lge.com
> Subject: [PATCH v2 2/2
> -Original Message-
> From: Steven Rostedt [mailto:rost...@goodmis.org]
> Sent: Thursday, February 16, 2017 11:46 AM
> To: Byungchul Park
> Cc: pet...@infradead.org; mi...@kernel.org; linux-kernel@vger.kernel.org;
> juri.le...@gmail.com; kernel-t...@lge.com
> Subject: Re: [PATCH v3 2/2] sc
> -Original Message-
> From: byungchul.park [mailto:byungchul.p...@lge.com]
> Sent: Wednesday, January 18, 2017 9:15 PM
> To: 'Peter Zijlstra'
> Cc: 'Boqun Feng'; 'mi...@kernel.org'; 't...@linutronix.de';
> 'wal...@google
> -Original Message-
> From: Peter Zijlstra [mailto:pet...@infradead.org]
> Sent: Wednesday, January 18, 2017 9:08 PM
> To: Byungchul Park
> Cc: Boqun Feng; mi...@kernel.org; t...@linutronix.de; wal...@google.com;
> kir...@shutemov.name; linux-kernel@vger.kernel.org; linux...@kvack.org;
> i
> -Original Message-
> From: xinhui [mailto:xinhui@linux.vnet.ibm.com]
> Sent: Monday, June 20, 2016 4:29 PM
> To: Byungchul Park; pet...@infradead.org; mi...@kernel.org
> Cc: linux-kernel@vger.kernel.org; npig...@suse.de; wal...@google.com;
> a...@suse.de; t...@inhelltoy.tec.linutronix
> [..]
> > diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
> > index fd24588..30559c6 100644
> > --- a/kernel/locking/spinlock_debug.c
> > +++ b/kernel/locking/spinlock_debug.c
> > @@ -138,14 +138,25 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
> > {
> >
> From: Sergey Senozhatsky [mailto:sergey.senozhatsky.w...@gmail.com]
> Sent: Thursday, January 28, 2016 11:38 AM
> To: Byungchul Park
> Cc: a...@linux-foundation.org; mi...@kernel.org; linux-
> ker...@vger.kernel.org; akinobu.m...@gmail.com; j...@suse.cz;
> torva...@linux-foundation.org; pe...@hur
From: Byungchul Park
The comment describing migrate_task_rq_fair() says that the caller
should hold p->pi_lock. But in some cases the caller can hold
task_rq(p)->lock instead of p->pi_lock. So the comment is wrong, and
this patch fixes it.
Signed-off-by: Byungchul Park
---
kernel/sched/fa
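For context, a minimal sketch of what the corrected comment could look
like (the patch body is truncated above, so this paraphrases the locking
rule the changelog describes rather than quoting the patch):

/*
 * Called immediately before a task is migrated to a new cpu.
 *
 * The caller holds either p->pi_lock or task_rq(p)->lock; either one
 * is enough to serialize this against concurrent migrations.
 */
static void migrate_task_rq_fair(struct task_struct *p)
{
	/* body unchanged by this patch */
}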
From: Byungchul Park
In particular, in the case below, se->vruntime can become too large and
scheduling cannot work properly.
1. set se->vruntime to "cfs_rq->min_vruntime - sysctl_sched_latency" in
place_entity() when detaching the se from cfs_rq.
2. do a normalization by "se->vruntime -= cfs_rq->min
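To make the failure mode concrete, a sketch of the double adjustment the
(truncated) changelog describes; since se->vruntime is a u64, the result
wraps to a huge value:

	/* step 1: on detach, place_entity() assigns an absolute vruntime */
	se->vruntime = cfs_rq->min_vruntime - sysctl_sched_latency;

	/* step 2: the detach path then normalizes it once more */
	se->vruntime -= cfs_rq->min_vruntime;

	/*
	 * Result: se->vruntime == (u64)(-sysctl_sched_latency), i.e. a
	 * value near 2^64. Once re-attached, the entity lands far to the
	 * right of the rbtree and scheduling cannot work properly.
	 */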
From: Byungchul Park
change from v4 to v5
- fix comments and commit message
- take new_load into account in update_cpu_load_nohz() and __update_cpu_load()
because it's non-zero in NOHZ_FULL
change from v3 to v4
- focus the problem on full NOHZ
change from v2 to v3
- add a patch which makes __u
From: Byungchul Park
__update_cpu_load() assumes that a cpu is idle if the interval between
the cpu's ticks is more than 1/HZ. However, in the NOHZ_FULL case the
cpu can be non-idle even though the interval is more than 1/HZ.
Thus in the NOHZ_FULL tickless case, the current way to update cpu loa
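A simplified sketch of the direction such a fix can take (the 'active'
parameter and the tickless_load baseline are assumptions modeled on the
kernel's cpu_load code, not a quote of this patch): decay only the part
of the load above the level the cpu actually kept running at while
tickless.

static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
			      unsigned long pending_updates, int active)
{
	/* in idle tickless the load during the gap was 0; in active
	 * tickless (NOHZ_FULL) it stayed at roughly cpu_load[0] */
	unsigned long tickless_load = active ? this_rq->cpu_load[0] : 0;
	int i, scale;

	this_rq->cpu_load[0] = this_load; /* fasttrack for idx 0 */

	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load, new_load;

		old_load = this_rq->cpu_load[i];
		old_load = decay_load_missed(old_load, pending_updates - 1, i);
		if (tickless_load) {
			/* don't decay the baseline the cpu kept running at */
			old_load -= decay_load_missed(tickless_load,
						      pending_updates - 1, i);
			old_load += tickless_load;
		}

		new_load = this_load;
		if (new_load > old_load)
			new_load += scale - 1;

		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}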
From: Byungchul Park
Usually the tick can be stopped for an idle cpu in NOHZ. In NOHZ_FULL,
however, a non-idle cpu's tick can also be stopped. Yet update_cpu_load_nohz()
does not consider the case where a non-idle cpu's tick has been stopped at all.
This patch makes update_cpu_load_nohz() know if
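The interface change this implies is small; a sketch (close to what
eventually landed upstream, though the names here are best guesses):
the NOHZ update path takes an 'active' argument and feeds the current
load instead of 0 when the cpu was not idle.

void update_cpu_load_nohz(int active)
{
	struct rq *this_rq = this_rq();
	unsigned long curr_jiffies = READ_ONCE(jiffies);
	unsigned long load = active ? weighted_cpuload(cpu_of(this_rq)) : 0;
	unsigned long pending_updates;

	if (curr_jiffies == this_rq->last_load_update_tick)
		return;

	raw_spin_lock(&this_rq->lock);
	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
	if (pending_updates) {
		this_rq->last_load_update_tick = curr_jiffies;
		__update_cpu_load(this_rq, load, pending_updates, active);
	}
	raw_spin_unlock(&this_rq->lock);
}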
From: Byungchul Park
remove_entity_load_avg() consists of two parts: the first part updates
the se's last_update_time, and the second part removes the se's load
from the cfs_rq. It can become necessary to use only the first part or
only the second part, for the purpose of optimization. So this patch
sp
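A sketch of such a split (the helper name sync_entity_load_avg() is an
assumption; the changelog above is truncated before naming it): the
aging half becomes callable on its own, and the removal half calls it.

/* part 1: age se->avg up to the cfs_rq's last_update_time */
static void sync_entity_load_avg(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	u64 last_update_time = cfs_rq_last_update_time(cfs_rq);

	__update_load_avg(last_update_time, cpu_of(rq_of(cfs_rq)),
			  &se->avg, 0, 0, NULL);
}

/* part 2: remove se's contribution from the cfs_rq */
void remove_entity_load_avg(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	sync_entity_load_avg(se);
	atomic_long_add(se->avg.load_avg, &cfs_rq->removed_load_avg);
	atomic_long_add(se->avg.util_avg, &cfs_rq->removed_util_avg);
}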
From: Byungchul Park
Current code can account the fair class load average for the time the
task was absent from the fair class, thanks to ATTACH_AGE_LOAD. However,
it doesn't work in the cases where a migration or group change happened
in the other sched classes.
This patch introduces more genera
From: Byungchul Park
the "sched/fair: make it possible to account fair load avg consistently"
patch makes rmb() and updating last_update_time called twice when doing a
migration, which can be negative at performance. actually we can optimize
it by omiting the updating part of remove_entity_load_a
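A sketch of the intended shape (the removal-only helper name is an
assumption): the migration path reads the cfs_rq's last_update_time
once, so the rmb() and the aging happen once instead of twice.

static void migrate_task_rq_fair(struct task_struct *p)
{
	/* one read (one rmb() on 32-bit) serves both aging and removal */
	u64 last_update_time = cfs_rq_last_update_time(cfs_rq_of(&p->se));

	__update_load_avg(last_update_time, task_cpu(p), &p->se.avg,
			  0, 0, NULL);

	/* removal-only variant: skips the aging already done above */
	__remove_entity_load_avg(&p->se);	/* assumed helper name */

	/* tell enqueue on the new cpu that we migrated */
	p->se.avg.last_update_time = 0;
}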
From: Byungchul Park
* change from v3 to v4
- optimize: force rmb() to be used only once when doing a migration
- optimize: do not add an additional variable to task_struct
- change the order of migrate_task_rq() and __set_task_cpu()
* change from v2 to v3
- consider optimization in the case of migration
- spli
From: Byungchul Park
In the case that rq->lock may not be held, care must be taken to get a
cfs_rq's last_update_time instead of just reading the variable. Since
this can happen at several places in the code in the future, this patch
factors it out into a helper function.
Signed-off-by: Byungchul Park
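On 32-bit, reading a u64 without the lock needs the kernel's usual
copy/smp_rmb() retry pattern; a sketch of such a helper, modeled on the
code that landed in kernel/sched/fair.c:

#ifndef CONFIG_64BIT
static u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
{
	u64 last_update_time_copy;
	u64 last_update_time;

	/*
	 * The updater writes last_update_time first and the copy after a
	 * barrier; reading in the opposite order and retrying on mismatch
	 * yields a consistent value without holding rq->lock.
	 */
	do {
		last_update_time_copy = cfs_rq->load_last_update_time_copy;
		smp_rmb();
		last_update_time = cfs_rq->avg.last_update_time;
	} while (last_update_time != last_update_time_copy);

	return last_update_time;
}
#else
static u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
{
	return cfs_rq->avg.last_update_time;
}
#endif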
From: Byungchul Park
This patch removes a weird coupling between se->avg.last_update_time and
the condition checking for migration, and introduces a new migration flag.
Now the scheduler can use the flag instead of se->avg.last_update_time to
check whether a migration has already happened or not.
Signed-off-by
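A minimal before/after sketch (the flag name is an assumption; the
changelog above does not show it):

	/* before: last_update_time doubles as the "migrated" signal */
	if (!se->avg.last_update_time)
		attach_entity_load_avg(cfs_rq, se);

	/* after: an explicit flag, set in the migration path */
	if (se->migrated) {			/* assumed flag name */
		attach_entity_load_avg(cfs_rq, se);
		se->migrated = 0;
	}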
From: Byungchul Park
* change from v2 to v3
- consider optimization in the case of migration
- split the patch into 3 so they can be reviewed easily
* change from v1 to v2
- make set_task_rq() take over that role from the migration callback
- make set_task_rq() take over that role from the move-group callback
- remove
From: Byungchul Park
Current code can account the fair class load average for the time the
task was absent from the fair class, thanks to ATTACH_AGE_LOAD. However,
it doesn't work in the cases where a migration or group change happened
in the other sched classes.
This patch introduces more genera
From: Byungchul Park
There are some cases where the distance between ticks is more than one
tick while the cpu is not idle, e.g. full NOHZ.
However, __update_cpu_load() assumes it is the idle tickless case if the
distance between ticks is more than one tick, even though it can be the
active tickless case.
From: Byungchul Park
Even though the cpu is non-idle when its tick is stopped in full NOHZ,
the current "update_cpu_load" code unconditionally treats it as if the
cpu had been idle. That's wrong. This patch makes the "update_cpu_load"
code aware of whether the calling path comes from full NOHZ or idle NOHZ.
Sig
From: Byungchul Park
change from v3 to v4
- focus the problem on full NOHZ
change from v2 to v3
- add a patch which makes __update_cpu_load() handle active tickless
change from v1 to v2
- add some additional commit message (logic is same exactly)
i will try to fix other stuff caused by full NO
From: Byungchul Park
Current code can account the fair class load avg for the time the task
was in another class, e.g. rt or dl, thanks to ATTACH_AGE_LOAD. However,
it does not work in the case where a migration or group change happened
in the other classes.
This patch introduces more general solution s
From: Byungchul Park
set_task_rq(), which is a commonly used function regardless of sched
class, currently assigns both the cfs_rq for the fair class and the
rt_rq for the rt class to a task. But it would be better for a class-
related operation to be done by its own class. additionally, this patch
From: Byungchul Park
* change from v1 to v2
- make set_task_rq() take over that role from the migration callback
- make set_task_rq() take over that role from the move-group callback
- remove the dependency between last_update_time and the check for migration
Byungchul Park (2):
sched: make each sched class
From: Byungchul Park
The current fair sched class handles neither a cgroup change nor an rq
migration that occurred within another sched class, e.g. the rt class.
This patch makes it able to do that.
Byungchul Park (2):
sched: make fair sched class can handle the cgroup change by other
class
sched: make fair sche
From: Byungchul Park
The original fair sched class can handle a migration within its own
class with migrate_task_rq_fair(), but there is no way for it to know
about a migration that happened outside. This patch makes the fair sched
class able to handle a migration that happened even in another sched
class. And care
From: Byungchul Park
The original fair sched class can handle a cgroup change that occurred
within its own class with task_move_group_fair(), but there is no way
for it to know about a change that happened outside. This patch makes
the fair sched class able to handle a cgroup change that happened even
in another sched
From: Byungchul Park
There are some cases where the distance between ticks is more than one tick,
while the cpu is not idle, e.g.
- full NOHZ
- tracing
- long lasting callbacks
- being scheduled away when running in a VM
However, __update_cpu_load() assumes it is the idle tickless case if the
di
From: Byungchul Park
in hrtimer_interrupt(), the first tick_program_event() can fail because
the next timer could already be expired, due to
(see the comment in hrtimer_interrupt()):
- tracing
- long lasting callbacks
- being scheduled away when running in a VM
in the case that the first ti
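For reference, the retry logic around that first reprogramming attempt
in kernel/time/hrtimer.c looks roughly like this (paraphrased from
upstream; the truncated changelog above concerns the case where the
reprogramming fails):

	/* Reprogramming necessary, check whether we can do that now */
	if (!tick_program_event(expires_next, 0)) {
		cpu_base->hang_detected = 0;
		return;
	}

	/*
	 * The next timer was already expired due to:
	 * - tracing
	 * - long lasting callbacks
	 * - being scheduled away when running in a VM
	 *
	 * Update the base and retry a limited number of times before
	 * giving up and reporting a hang.
	 */
	raw_spin_lock(&cpu_base->lock);
	now = hrtimer_update_base(cpu_base);
	cpu_base->nr_retries++;
	if (++retries < 3)
		goto retry;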
From: Byungchul Park
change from v2 to v3
- add a patch which makes __update_cpu_load() handle active tickless
change from v1 to v2
- add some additional commit message (logic is same exactly)
Byungchul Park (2):
sched: make __update_cpu_load() handle active tickless case
sched: consider mis
From: Byungchul Park
hello,
i have already sent this patch about a month ago
(see https://lkml.org/lkml/2015/8/13/160).
now, i am resending the same patch with some additional commit
message added.
thank you,
byungchul
-- >8 --
From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 0
> -Original Message-
> From: Wanpeng Li [mailto:wanpeng...@hotmail.com]
> Sent: Tuesday, September 08, 2015 5:46 PM
> To: Byungchul Park
> Cc: Peter Zijlstra; Ingo Molnar; linux-kernel@vger.kernel.org;
> yuyang...@intel.com
> Subject: Re: [PATCH] sched: fix lose fair sleeper bonus in swit
> -Original Message-
> From: Wanpeng Li [mailto:wanpeng...@hotmail.com]
> Sent: Tuesday, September 08, 2015 5:39 PM
> To: Byungchul Park
> Cc: Peter Zijlstra; Ingo Molnar; linux-kernel@vger.kernel.org;
> yuyang...@intel.com
> Subject: Re: [PATCH] sched: fix lose fair sleeper bonus in swit