Simple check to prevent kernel panic when initcall does not exit
Signed-off-by: Abderrahmane Benbachir
---
init/main.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/init/main.c b/init/main.c
index 0ee9c6866ada..220fd2822b61 100644
--- a/init/main.c
+++ b/init/main.c
@@ -817,6 +817,9 @@
David Daney wrote:
On 10/27/2017 11:22 AM, Thomas Gleixner wrote:
On Fri, 27 Oct 2017, David Daney wrote:
On 10/27/2017 09:47 AM, Abderrahmane Benbachir wrote:
Simple check to prevent kernel panic when initcall does not exit
Interesting, under what circumstances do you observe the
trace_bootlevel_start_handler: Start early boot level <
[0.456659] : intel_pmu_init <-init_hw_perf_events
[0.457299] : p6_pmu_init <-intel_pmu_init
[0.457851] : numachip_system_init <-do_one_initcall
...
Signed-off-by: Abderrahmane Benbachir
Cc: Steven Rostedt
Cc:
Steven Rostedt wrote:
On Tue, 20 Mar 2018 14:22:56 -0400
Abderrahmane Benbachir wrote:
Would something like this work for you?
Yes, this works great. I have also instrumented the console and security
initcalls; I used your previous patch, and the changes are below.
Hi Steve,
This is the patch for security & console initcall instrumentation.
Signed-off-by: Abderrahmane Benbachir
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org
---
kernel/printk/printk.c | 7 ++-
security/security.c    | 8 +++-
+Paul, +Eddie and +Feng.
Hi Steven, any update regarding this patch? I'm including some folks
from the server performance analysis team at Microsoft; they are currently
investigating early boot-up latencies using ftrace.
Abderrahmane Benbachir wrote:
Hi Steve,
I believe this pat
emory setup.
Dynamic tracing is not implemented with live patching; we use
ftrace_filter and ftrace_notrace to find which functions should be
filtered (traced / not traced), then during the callback from the
mcount hook we do a binary search lookup to decide which functions
to save and which to discard.
Signed-off-by: Abderrahmane Benbachir
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Mathieu Desnoyers
Cc: linux-kernel@vger.kernel.org
---
arch/x86/Kconfig            | 1 +
arch/x86/kernel/ftrace_32.S | 45 --
arch/x86/kernel/ftrace_64.S | 14 ++
include/linux/ftrace.h      | 16
Hi Pavel,
I'm implementing a feature in ftrace to enable very early function
tracing. I'm using the TSC when the x86_tsc feature is available, but it
seems that you did similar work in your patch "[PATCH v9 0/6] Early boot
time stamps for x86".
I need to record timestamps at the start of start_kernel().
I implemented the ring_buffer_set_clock solution and I have some questions.
void __init ftrace_early_fill_ringbuffer(void *data)
{
	...
	ring_buffer_set_clock(tr->trace_buffer.buffer, early_trace_clock);
	preempt_disable_notrace();
	for (i = 0; i < vearly_entries_count; i++)
Pavel Tatashin wrote:
Hi Abderrahmane,
> I'm implementing a feature in ftrace to enable very early function tracing,
> I'm using tsc when x86_tsc feature is available, but it seems that you did
> similar work in your patch "[PATCH v9 0/6] Early boot time stamps for x86".
> I need to record times
Steven Rostedt wrote:
	ring_buffer_set_clock(tr->trace_buffer.buffer, early_trace_clock);
Then have:
	static u64 early_timestamp __initdata;

	static __init u64 early_trace_clock(void)
	{
		return early_timestamp;
	}
Then we can have:
+ p