Commit-ID: e846d13958066828a9483d862cc8370a72fadbb6
Gitweb: https://git.kernel.org/tip/e846d13958066828a9483d862cc8370a72fadbb6
Author: Zhou Chengming
AuthorDate: Thu, 2 Nov 2017 09:18:21 +0800
Committer: Ingo Molnar
CommitDate: Tue, 7 Nov 2017 12:20:09 +0100
kprobes, x86/alternatives: Use text_mutex to protect smp_alt_modules
Reviewed-by: Masami Hiramatsu
Acked-by: Steven Rostedt (VMware)
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 26 +-
kernel/extable.c | 2 ++
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 3344
…callback to solve this.
But there is a simpler way to handle this problem: we can reuse the
text_mutex to protect smp_alt_modules instead of using another mutex.
All of the arch-dependent kprobes checks are already done under the
text_mutex, so it is safe now (see the sketch below).
Reviewed-by: Masami Hiramatsu
Signed-off-by: Zhou Chengming
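A minimal sketch of the idea, not the verbatim patch: the list update
simply takes the existing text_mutex (declared in <linux/memory.h>)
instead of a private smp_alt mutex. The struct layout follows
arch/x86/kernel/alternative.c, but the argument list is trimmed here
for brevity.

	#include <linux/list.h>
	#include <linux/memory.h>	/* text_mutex */
	#include <linux/mutex.h>
	#include <linux/slab.h>

	struct smp_alt_module {
		struct module		*mod;
		char			*name;
		struct list_head	next;
	};

	static LIST_HEAD(smp_alt_modules);

	void alternatives_smp_module_add(struct module *mod, char *name)
	{
		struct smp_alt_module *smp;

		smp = kzalloc(sizeof(*smp), GFP_KERNEL);
		if (!smp)
			return;

		smp->mod  = mod;
		smp->name = name;

		mutex_lock(&text_mutex);	/* was: mutex_lock(&smp_alt) */
		list_add_tail(&smp->next, &smp_alt_modules);
		mutex_unlock(&text_mutex);
	}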
alternatives_smp_lock()/unlock() are only used while the system is still
uniprocessor, so we don't need to hold the text_mutex around text_poke()
there (see the sketch below). The next patch can then remove the outer
smp_alt mutex as well.
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 4
1 file changed, 4 deletions(-)
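For reference, this is roughly what the function looks like once the
locking is dropped; it mirrors the upstream alternatives_smp_unlock() in
arch/x86/kernel/alternative.c minus the mutex_lock(&text_mutex)/
mutex_unlock(&text_mutex) pair that the patch removes (an illustrative
sketch of the result, not the exact diff):

	static void alternatives_smp_unlock(const s32 *start, const s32 *end,
					    u8 *text, u8 *text_end)
	{
		const s32 *poff;

		/* only reached while still UP, so no text_mutex needed here */
		for (poff = start; poff < end; poff++) {
			u8 *ptr = (u8 *)poff + *poff;

			if (!*poff || ptr < text || ptr >= text_end)
				continue;
			/* turn lock prefix into DS segment override prefix */
			if (*ptr == 0xf0)
				text_poke(ptr, ((unsigned char []){0x3E}), 1);
		}
	}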
list check, it's safe now.
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 7eab6f6..b278cad 100644
--- a/arch/x86/kernel/alternative.c
…abled, so we don't need a mutex to protect
the list; using preempt_disable() is enough.
We can make sure smp_alt_modules becomes useless once SMP is enabled,
so free the whole list at that point. And alternatives_smp_module_del()
can return directly when !uniproc_patched, avoiding a list traversal.
Signed-off-by: Zhou Chengming
The previous two patches make sure smp_alt_modules is only used while
the system is uniprocessor, so we don't need a mutex to protect the list;
we only need preempt_disable() around the traversal (sketched below).
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 31 +++
1
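Assuming the series behaves as described, the list walk then needs
nothing heavier than preempt_disable(). The sketch below only illustrates
the idea; uniproc_patched and the struct come from alternative.c, and the
real diff may differ:

	void alternatives_smp_module_del(struct module *mod)
	{
		struct smp_alt_module *item;

		if (!uniproc_patched)	/* list already freed once SMP came up */
			return;

		preempt_disable();	/* list only touched while UP, no mutex needed */
		list_for_each_entry(item, &smp_alt_modules, next) {
			if (item->mod != mod)
				continue;
			list_del(&item->next);
			kfree(item);
			break;
		}
		preempt_enable();
	}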
…problem. We can reuse the
text_mutex to protect smp_alt_modules instead of using another mutex.
All of the arch-dependent kprobes checks are already done under the
text_mutex, so it is safe now.
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 24 +++-
1 file chan
When check_kprobe_address_safe() fails, probed_mod should be set to
NULL, because no module refcount is held. We initialize probed_mod to
NULL in register_kprobe() for the same reason (see the sketch below).
Signed-off-by: Zhou Chengming
---
kernel/kprobes.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
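The point is simply that the out-parameter must never leak a stale module
pointer when the checks fail. A hedged sketch of the intent; the single
kernel_text_address() test stands in for the full set of checks done by
the real check_kprobe_address_safe() in kernel/kprobes.c:

	static int check_kprobe_address_safe(struct kprobe *p,
					     struct module **probed_mod)
	{
		*probed_mod = NULL;		/* nothing pinned yet */

		/* stand-in for the full set of address checks */
		if (!kernel_text_address((unsigned long)p->addr))
			return -EINVAL;		/* fail: *probed_mod stays NULL */

		/* if the address is in a module, pin that module */
		*probed_mod = __module_text_address((unsigned long)p->addr);
		if (*probed_mod && !try_module_get(*probed_mod)) {
			*probed_mod = NULL;	/* no refcount held on failure */
			return -ENOENT;
		}
		return 0;
	}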
…and
register the same kprobe. This patch puts the check inside the mutex.
Suggested-by: Masami Hiramatsu
Signed-off-by: Zhou Chengming
---
kernel/kprobes.c | 27 ---
1 file changed, 8 insertions(+), 19 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index
. And alternatives_smp_module_del() can return directly
when !uniproc_patched to avoid a list traversal.
Signed-off-by: Zhou Chengming
---
arch/x86/kernel/alternative.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
…been
registered already, but check_kprobe_rereg() then releases the
kprobe_mutex, so two paths may both pass the check and
register the same kprobe. This patch puts the check inside the mutex.
Signed-off-by: Zhou Chengming
---
kernel/kprobes.c | 28 +---
1 file
The old code uses check_kprobe_rereg() to check whether the kprobe has
already been registered, but check_kprobe_rereg() then releases the
kprobe_mutex, so two paths may both pass the check and register the same
kprobe. This patch puts the check inside the mutex (see the sketch below).
Signed-off-by: Zhou Chengming
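In other words, the "already registered?" lookup has to happen while
kprobe_mutex is held, so two racing register_kprobe() calls cannot both
see "not registered". A rough sketch of the shape only; the real function
does considerably more (per-CPU disarming, ftrace checks, error paths):

	int register_kprobe(struct kprobe *p)
	{
		struct kprobe *old_p;
		int ret = 0;

		mutex_lock(&kprobe_mutex);

		old_p = get_kprobe(p->addr);	/* lookup now done under the mutex */
		if (old_p) {
			/* re-registration: fold into the aggregate kprobe */
			ret = register_aggr_kprobe(old_p, p);
			goto out;
		}

		/* ... normal insertion into kprobe_table and arming ... */

	out:
		mutex_unlock(&kprobe_mutex);
		return ret;
	}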
…task_A to make sure task_A is
still on rq1, even though we hold rq1->lock. This patch re-picks the
first pushable task to be sure the task is still on the rq.
Signed-off-by: Zhou Chengming
---
kernel/sched/rt.c | 49 +++--
1 file c
Commit-ID: 75e8387685f6c65feb195a4556110b58f852b848
Gitweb: http://git.kernel.org/tip/75e8387685f6c65feb195a4556110b58f852b848
Author: Zhou Chengming
AuthorDate: Fri, 25 Aug 2017 21:49:37 +0800
Committer: Ingo Molnar
CommitDate: Tue, 29 Aug 2017 13:29:29 +0200
perf/ftrace: Fix double traces of perf on ftrace:function
Obviously, trace_events defined statically in trace.h won't use
__TRACE_LAST_TYPE, so let dynamically allocated types use it. Also make
some minor changes to trace_search_list() to clarify the code.
Signed-off-by: Zhou Chengming
---
kernel/trace/trace_output.c | 12 ++--
1 file chang
's not NULL.
Signed-off-by: Zhou Chengming
---
include/linux/perf_event.h      |  2 +-
include/linux/trace_events.h    |  4 ++--
kernel/events/core.c            | 13 +
kernel/trace/trace_event_perf.c |  4 +++-
kernel/trace/trace_kprobe.c     |  4 ++--
kernel/trace/trace_s
…is special: it may contain _ddebugs of other
modules, whose modname differs from the name of the livepatch
module. So ddebug_remove_module() can't use mod->name to find the
right ddebug_table and remove it, which can cause a kernel crash when we
cat the file /dynamic_debug/control.
Signed-off-by: Zhou Chengming
The else branch is broken for task contexts: two events can be on the
same task context but on different CPUs. This patch fixes it; we don't
need to check move_group. We first make sure we're on the same task (or
that both are per-CPU events), and then make sure the events are for the
same CPU (see the sketch below).
Signed-off-by: Zhou Chengming
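A sketch of the described ordering of checks. The field names follow
struct perf_event / struct perf_event_context (event->ctx->task is NULL
for per-CPU events), but the helper name is made up for illustration and
is not the function the patch actually touches:

	/* Hypothetical helper: may these two events share a context/group? */
	static bool events_on_compatible_ctx(struct perf_event *event,
					     struct perf_event *group_leader)
	{
		/* first: same task, or both per-CPU (ctx->task == NULL) events */
		if (event->ctx->task != group_leader->ctx->task)
			return false;

		/* then: per-CPU events must also target the same CPU */
		if (!event->ctx->task && event->cpu != group_leader->cpu)
			return false;

		return true;
	}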
…patch changes it to use module_kallsyms_on_each_symbol() for module
symbols. After applying this patch, the sys time drops dramatically:
~ time sudo insmod klp.ko
real	0m1.007s
user	0m0.032s
sys	0m0.924s
Signed-off-by: Zhou Chengming
---
kernel/livepatch/core.c | 5 -
1 file chang
…module symbols, and so wastes a lot of time. This patch changes it to
use module_kallsyms_on_each_symbol() for module symbols (sketched below).
After applying this patch, the sys time drops dramatically:
~ time sudo insmod klp.ko
real	0m1.007s
user	0m0.032s
sys	0m0.924s
Signed-off-by: Zhou Chengming
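The change boils down to picking the narrower kallsyms iterator when the
symbol belongs to a module. The sketch below follows the general shape of
klp_find_object_symbol() in kernel/livepatch/core.c, but is simplified
(the real klp_find_arg/klp_find_callback also count matches):

	static int klp_find_object_symbol(const char *objname, const char *name,
					  unsigned long *addr)
	{
		struct klp_find_arg args = {
			.objname = objname,
			.name	 = name,
			.addr	 = 0,
		};

		if (objname)
			/* module symbol: only walk module symbol tables */
			module_kallsyms_on_each_symbol(klp_find_callback, &args);
		else
			/* vmlinux symbol: walk the core kernel symbols */
			kallsyms_on_each_symbol(klp_find_callback, &args);

		if (!args.addr)
			return -EINVAL;

		*addr = args.addr;
		return 0;
	}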
When we activate a policy on the request_queue, we create policy_data
for all the existing blkgs of the request_queue, so we should call
pd_init_fn() and pd_online_fn() on this newly created policy_data.
Signed-off-by: Zhou Chengming
---
block/blk-cgroup.c | 6 ++
1 file changed, 6 insertions(+)
From: z00354408
Signed-off-by: z00354408
---
block/blk-cgroup.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 8ba0af7..0dd9e76 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1254,6 +1254,12 @@ int blkcg_activate_policy(struct request_queue *q,
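The hunk presumably ends up along these lines: after the per-blkg policy
data is installed, run the init/online callbacks on it. The callback and
field names come from struct blkcg_policy and struct blkcg_gq; the
surrounding blkcg_activate_policy() code is abbreviated, so this is a
sketch rather than the actual hunk:

	/* inside blkcg_activate_policy(), with the queue locked and the
	 * policy_data for every existing blkg freshly installed */
	list_for_each_entry(blkg, &q->blkg_list, q_node) {
		struct blkg_policy_data *pd = blkg->pd[pol->plid];

		if (pol->pd_init_fn)
			pol->pd_init_fn(pd);
		if (pol->pd_online_fn)
			pol->pd_online_fn(pd);
	}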
Commit-ID: 3a09b8d45b3c05d49e581831de626927c37599f8
Gitweb: http://git.kernel.org/tip/3a09b8d45b3c05d49e581831de626927c37599f8
Author: Zhou Chengming
AuthorDate: Sun, 22 Jan 2017 15:22:35 +0800
Committer: Ingo Molnar
CommitDate: Sun, 22 Jan 2017 10:34:17 +0100
sched/Documentation
group A should be 5us, then the period and
runtime of group B should be 5us and 25000us.
Signed-off-by: Zhou Chengming
---
Documentation/scheduler/sched-rt-group.txt | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/Documentation/scheduler/sched-rt-group.txt b/Documentation/scheduler/sched-rt-group.txt
Commit-ID: 4e71de7986386d5fd3765458f27d612931f27f5e
Gitweb: http://git.kernel.org/tip/4e71de7986386d5fd3765458f27d612931f27f5e
Author: Zhou Chengming
AuthorDate: Mon, 16 Jan 2017 11:21:11 +0800
Committer: Thomas Gleixner
CommitDate: Tue, 17 Jan 2017 11:08:36 +0100
perf/x86/intel
…sched_started)
	spin_lock		// not executed
intel_stop_scheduling()
	state->sched_started = false
	if (!state->sched_started)
		spin_unlock	// executed
Signed-off-by: NuoHan Qiao
Signed-off-by: Zhou Chengming
m/lists/oss-security/2016/11/04/13
Reported-by: CAI Qian
Tested-by: Yang Shukui
Signed-off-by: Zhou Chengming
---
fs/proc/proc_sysctl.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 5d931bf..c4c90bd 100644
--- a/fs/proc/proc_sysctl.c
Fixes CVE-2016-9191.
Reported-by: CAI Qian
Tested-by: Yang Shukui
Signed-off-by: Zhou Chengming
---
fs/proc/proc_sysctl.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 5d931bf..c4c90bd 100644
--- a/fs/proc/proc_sysctl.c
Allow wakeup_dl tracer to be used by instances, like wakeup tracer
and wakeup_rt tracer.
Signed-off-by: Zhou Chengming
---
kernel/trace/trace_sched_wakeup.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
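The one-liner is presumably adding .allow_instances to the wakeup_dl
tracer definition, the same struct tracer flag that the wakeup and
wakeup_rt tracers already set. The other field values below are
illustrative, not the full upstream initializer:

	static struct tracer wakeup_dl_tracer __read_mostly = {
		.name			= "wakeup_dl",
		.init			= wakeup_dl_tracer_init,
		.reset			= wakeup_tracer_reset,
		.start			= wakeup_tracer_start,
		.stop			= wakeup_tracer_stop,
		.print_max		= true,
		.allow_instances	= true,		/* the added line */
	};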
In the !global_reclaim(sc) case, we should update sc->nr_reclaimed after
each shrink_slab() call in the loop, because we need the correct
sc->nr_reclaimed value to decide whether we can break out (see the sketch
below).
Signed-off-by: Zhou Chengming
---
mm/vmscan.c |5 +
1 files changed, 5 insertions(+), 0 deletions(-)
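Presumably the added lines fold the slab reclaim counters back into
sc->nr_reclaimed right after each shrink_slab() call, following the
pattern mm/vmscan.c already uses elsewhere (reclaim_state is
current->reclaim_state); a sketch of the added block, not the exact hunk:

		/* right after each shrink_slab() call in the loop:
		 * fold slab reclaim into sc->nr_reclaimed so the
		 * break-out test sees an up-to-date value */
		if (reclaim_state) {
			sc->nr_reclaimed += reclaim_state->reclaimed_slab;
			reclaim_state->reclaimed_slab = 0;
		}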
When CONFIG_SPARSEMEM_EXTREME is disabled, __section_nr() can get
the section number with a direct subtraction (sketched below).
Signed-off-by: Zhou Chengming
---
mm/sparse.c | 12 +++-
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index 5d0cf45..36d7bbb
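With CONFIG_SPARSEMEM_EXTREME disabled, mem_section is one statically
sized, contiguous array, so the section number falls out of pointer
arithmetic. A sketch of the idea, not necessarily the exact hunk:

	#ifndef CONFIG_SPARSEMEM_EXTREME
	static int __section_nr(struct mem_section *ms)
	{
		/* mem_section[][] is a flat static array, so the section
		 * number is just the distance from the first entry */
		return (int)(ms - mem_section[0]);
	}
	#endif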
…nr_running to calculate the whole __sched_period value.
Signed-off-by: Zhou Chengming
---
kernel/sched/fair.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fe30e6..59c9378 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
ert the up_read/spin_unlock order as Andrea Arcangeli said.
Signed-off-by: Zhou Chengming
Suggested-by: Andrea Arcangeli
Reviewed-by: Andrea Arcangeli
---
mm/ksm.c | 15 ++-
1 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index ca6d2a0..1d4c0e8 100644
ert the up_read/spin_unlock order as Andrea Arcangeli said.
Signed-off-by: Zhou Chengming
Suggested-by: Andrea Arcangeli
Reviewed-by: Andrea Arcangeli
---
mm/ksm.c | 16 ++--
1 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index ca6d2a0..b6dc387 100644
ert the up_read/spin_unlock order as Andrea Arcangeli said.
Signed-off-by: Zhou Chengming
Suggested-by: Andrea Arcangeli
---
mm/ksm.c | 17 ++---
1 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index ca6d2a0..d87bafc 100644
--
p;mm->mmap_sem), will cause mmap_sem.count to become -1.
I changed the scan_get_next_rmap_item function refered to the khugepaged
scan function.
Signed-off-by: Zhou Chengming
---
mm/ksm.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
inde
When KASLR is enabled, livepatch adjusts the old_addr of each changed
function accordingly. So do the same thing for relocs (see the sketch
below).
[PATCH v1] https://lkml.org/lkml/2015/11/4/91
Reported-by: Cyril B.
Signed-off-by: Zhou Chengming
---
kernel/livepatch/core.c |6 ++
1 files changed, 6 insertions(+), 0 deletions(-)
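The v1 approach apparently shifts the user-supplied address by the
randomization offset before verifying it. A hedged sketch on x86:
kaslr_enabled() and kaslr_offset() are real helpers from <asm/setup.h>,
but the surrounding livepatch code is simplified here and reloc->val
refers to the old struct klp_reloc field, so this is only an illustration:

	unsigned long old_addr = reloc->val;	/* address the patch author saw */

	#ifdef CONFIG_RANDOMIZE_BASE
		/* the running kernel is relocated by KASLR, so shift the
		 * expected address by the same offset before comparing */
		if (kaslr_enabled())
			old_addr += kaslr_offset();
	#endif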
When enable KASLR, func->old_addr will be set to zero
and livepatch will find the right old address.
But for reloc, livepatch just verify it using reloc->val
(old addr from user), so verify failed and report
"kernel mismatch" error.
Reported-by: Cyril B.
Signed-off-by: Zhou Chengming