On Tue, Sep 12, 2023 at 06:30:21PM +0300, Iuliana Prodan (OSS) wrote:
> From: Iuliana Prodan
>
> Add the reserved-memory nodes used by the DSP when the rpmsg
> feature is enabled.
>
> Signed-off-by: Iuliana Prodan
> ---
> arch/arm64/boot/dts/freescale/imx8mp.dtsi | 13 +
> 1 file changed, 13 insertions(+)
On Wed, Sep 20, 2023 at 02:10:09PM -0700, Luis Chamberlain wrote:
> Use glob include/linux/module*.h to capture all module changes.
>
> Suggested-by: Kees Cook
> Signed-off-by: Luis Chamberlain
Thanks!
Reviewed-by: Kees Cook
--
Kees Cook
---[ end trace a06cced81771bf57 ]---
[ 43.617997][T1] Testing event empty_synth_test:
[ 43.618000][T1] Enabled event during self test!
[ 43.619614][T1] Testing event gen_synth_test:
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20230924/202309242253.a3803da4-oliver.s...@intel.com
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Hi,
On Tue, 5 Sep 2023 09:52:51 +0800
"wuqiang.matt" wrote:
> +/* cleanup all percpu slots of the object pool */
> +static void objpool_fini_percpu_slots(struct objpool_head *head)
> +{
> + int i;
> +
> + if (!head->cpu_slots)
> + return;
> +
> + for (i = 0; i < head->nr
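The quoted function is cut off above. For context, a sketch of how such a
percpu-slot cleanup typically completes; the field name nr_cpus and the
plain kfree() per slot are my assumptions, not the patch's code:

static void objpool_fini_percpu_slots_sketch(struct objpool_head *head)
{
	int i;

	if (!head->cpu_slots)
		return;

	/* free each per-CPU slot, then the slot array itself */
	for (i = 0; i < head->nr_cpus; i++)
		kfree(head->cpu_slots[i]);
	kfree(head->cpu_slots);
	head->cpu_slots = NULL;
}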
As with the legacy LRU, lru_gen needs some trace events for
debugging.
This commit introduces 2 trace events:
trace_mm_vmscan_lru_gen_scan
trace_mm_vmscan_lru_gen_evict
Each event is similar to the following legacy events:
trace_mm_vmscan_lru_isolate,
trace_mm_vmscan_lru_shrink_[i
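A minimal userspace sketch (mine, not from the patch) of enabling the two
proposed events through tracefs, assuming they are registered under the
vmscan system next to the legacy events:

#include <stdio.h>

static int enable_event(const char *path)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputc('1', f);
	return fclose(f);
}

int main(void)
{
	enable_event("/sys/kernel/tracing/events/vmscan/mm_vmscan_lru_gen_scan/enable");
	enable_event("/sys/kernel/tracing/events/vmscan/mm_vmscan_lru_gen_evict/enable");
	return 0;
}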
On Wed, 20 Sep 2023 22:15:37 -0400
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> Using the following code with libtracefs:
>
> int dfd;
>
> // create the directory events/kprobes/kp1
> tracefs_kprobe_raw(NULL, "kp1", "schedule_timeout", "time=$arg1");
>
>
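The reproducer is cut off above; a self-contained sketch built around the
quoted call (everything after tracefs_kprobe_raw() is my illustration):
create the kprobe, then enable it via libtracefs.

#include <stdio.h>
#include <tracefs.h>

int main(void)
{
	/* creates the directory events/kprobes/kp1 */
	if (tracefs_kprobe_raw(NULL, "kp1", "schedule_timeout",
			       "time=$arg1") < 0) {
		perror("tracefs_kprobe_raw");
		return 1;
	}

	/* enable the new event in the top-level trace instance */
	if (tracefs_event_enable(NULL, "kprobes", "kp1") < 0) {
		perror("tracefs_event_enable");
		return 1;
	}
	return 0;
}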
From: Masami Hiramatsu (Google)
Add a note that argument and return value access will be best
effort. Depending on the type, a value is passed via the stack or a
pair of registers, but $argN and $retval only support
single-register access.
Suggested-by: Alexei Starovoitov
Signed-off-b
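To illustrate the documented limitation (the example is mine): $argN and
$retval each fetch one register, so values passed or returned in a register
pair are only partially visible.

#include <stdio.h>

int main(void)
{
	/* append, so existing probes are kept */
	FILE *f = fopen("/sys/kernel/tracing/kprobe_events", "a");

	if (!f)
		return 1;
	/* 3rd argument of vfs_read, fetched from a single register */
	fprintf(f, "p:myentry vfs_read count=$arg3\n");
	/* return value, also a single register */
	fprintf(f, "r:myexit vfs_read ret=$retval\n");
	return fclose(f);
}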
From: Masami Hiramatsu (Google)
Update fprobe document so that the entry/exit handler uses ftrace_regs
instead of pt_regs.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Florent Revest
---
Documentation/trace/fprobe.rst | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-
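For reference, a sketch of the entry handler shape the updated document
describes (the handler name and printed fields are mine):

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

static int my_entry(struct fprobe *fp, unsigned long entry_ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *data)
{
	/* note: struct ftrace_regs *, no longer struct pt_regs * */
	pr_info("entry %pS arg0=%lx\n", (void *)entry_ip,
		ftrace_regs_get_argument(fregs, 0));
	return 0;
}

static struct fprobe my_fprobe = {
	.entry_handler = my_entry,
};

Registration itself is unchanged, e.g. register_fprobe(&my_fprobe, "vfs_read", NULL).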
From: Masami Hiramatsu (Google)
Enable the kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is
converted from ftrace_regs by ftrace_partial_regs(), so some registers
may always return 0. But that should be enough for function entry (accessing
arguments) and exit (accessing the return value).
Sig
From: Masami Hiramatsu (Google)
Allow fprobe events to be enabled with CONFIG_DYNAMIC_FTRACE_WITH_ARGS.
With this change, fprobe events mostly use ftrace_regs instead of pt_regs.
Note that if the arch doesn't enable HAVE_PT_REGS_COMPAT_FTRACE_REGS,
fprobe events cannot be used from p
From: Masami Hiramatsu (Google)
Add ftrace_fill_perf_regs() which should be compatible with the
perf_fetch_caller_regs(). In other words, the pt_regs returned from the
ftrace_fill_perf_regs() must satisfy 'user_mode(regs) == false' and can be
used for stack tracing.
Signed-off-by: Masami Hiramat
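A sketch of the described contract (the surrounding function is mine, and
I assume the helper returns the filled pt_regs):

#include <linux/ftrace.h>
#include <linux/ptrace.h>

static void my_perf_hook(struct ftrace_regs *fregs)
{
	struct pt_regs storage;
	struct pt_regs *regs = ftrace_fill_perf_regs(fregs, &storage);

	/* per the description, this must never look like user mode */
	WARN_ON_ONCE(user_mode(regs));
}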
From: Masami Hiramatsu (Google)
Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs.
If the architecture defines its own ftrace_regs, this copies partial
registers to pt_regs and returns it. If not, ftrace_regs is the same as
pt_regs and ftrace_partial_regs() will return ftrace_re
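A sketch of the conversion as described (the surrounding handler is mine):

#include <linux/ftrace.h>
#include <linux/printk.h>
#include <linux/ptrace.h>

static void my_handler(struct ftrace_regs *fregs)
{
	struct pt_regs storage;
	struct pt_regs *regs = ftrace_partial_regs(fregs, &storage);

	/* on arches with their own ftrace_regs, unsaved fields may be 0 */
	pr_info("ip=%lx\n", instruction_pointer(regs));
}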
From: Masami Hiramatsu (Google)
Change the fprobe exit handler and rethook to use the ftrace_regs structure
instead of pt_regs. This also introduces HAVE_PT_REGS_TO_FTRACE_REGS_CAST,
which means the ftrace_regs memory layout is identical to pt_regs, so one
can be cast to the other. Only if it is enable
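A sketch of the exit handler side (names are mine); the return value is
read through the ftrace_regs accessor instead of pt_regs:

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

static void my_exit(struct fprobe *fp, unsigned long entry_ip,
		    unsigned long ret_ip, struct ftrace_regs *fregs,
		    void *data)
{
	pr_info("exit ret=%lx\n", ftrace_regs_get_return_value(fregs));
}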
From: Masami Hiramatsu (Google)
In order to be able to use ftrace_regs even from features unrelated to the
function tracer (e.g. kretprobe), expose the ftrace_regs structure and
APIs even if CONFIG_FUNCTION_TRACER=n.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Florent Revest
---
Changes i
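The generic fallback layout, for reference (this is the pre-existing
definition; the point of the patch is to keep it visible even when
CONFIG_FUNCTION_TRACER=n):

#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
struct ftrace_regs {
	struct pt_regs		regs;
};
#define arch_ftrace_get_regs(fregs) (&(fregs)->regs)
#endif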
From: Masami Hiramatsu (Google)
This allows fprobes to be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, so that fprobe can be enabled
on arm64.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Florent Revest
---
Changes in v3:
- Use FTRACE_OPS_FL_S
From: Masami Hiramatsu (Google)
Add a comment about the requirements on ftrace_regs when it is
implemented in arch-dependent code with
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS.
Signed-off-by: Masami Hiramatsu (Google)
---
include/linux/ftrace.h | 8 ++++++++
1 file changed, 8 insertions(+
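Restating the documented requirement as code (wording and the helper are
mine): an arch-defined partial ftrace_regs must carry at least enough
state for these accessors.

#include <linux/ftrace.h>
#include <linux/printk.h>

static void show_minimum_regs(struct ftrace_regs *fregs)
{
	pr_info("ip=%lx sp=%lx arg0=%lx ret=%lx\n",
		ftrace_regs_get_instruction_pointer(fregs),
		ftrace_regs_get_stack_pointer(fregs),
		ftrace_regs_get_argument(fregs, 0),
		ftrace_regs_get_return_value(fregs));
}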
From: Masami Hiramatsu (Google)
Add a new ret_ip callback parameter description.
Fixes: cb16330d1274 ("fprobe: Pass return address to the handlers")
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Florent Revest
---
Changes in v4:
- Update ret_ip description (Thanks Florent!)
---
Docume
From: Masami Hiramatsu (Google)
Since ftrace_func_t requires passing 'struct ftrace_regs *' as the 4th
argument even if FTRACE_OPS_FL_SAVE_REGS is not set, ftrace_caller must
pass 'struct ftrace_regs *', which is a partial pt_regs, on the stack
to the ftrace_func_t functions, so that the ftrace_f
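The prototype in question, for reference:

typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *op,
			      struct ftrace_regs *fregs);

When FTRACE_OPS_FL_SAVE_REGS is not set, only a partial register set is
valid behind fregs.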
Hi,
Here is the 5th version of the series to use ftrace_regs instead of pt_regs
in fprobe.
The previous version is here:
https://lore.kernel.org/all/169280372795.282662.9784422934484459769.stgit@devnote2/
In this version, I decided to use perf's own per-cpu pt_regs array to
copy the required reg
From: Zheng Yejian
[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]
When a user resizes all trace ring buffers through the file 'buffer_size_kb',
ring_buffer_resize() allocates buffer pages for each
CPU in a loop.
If the kernel preemption model is PREEMPT_NONE and there are
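The shape of the fix as described (a sketch, not the upstream diff):

#include <linux/cpumask.h>
#include <linux/sched.h>

static void resize_all_cpu_buffers(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		/* ... allocate this CPU's buffer pages ... */
		cond_resched();	/* let other tasks run on PREEMPT_NONE */
	}
}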
From: "Steven Rostedt (Google)"
[ Upstream commit 95a404bd60af6c4d9d8db01ad14fe8957ece31ca ]
When iterating over the ring buffer while the ring buffer is active, the
writer can corrupt the reader. There are barriers to help detect and
handle this, but that code missed the case where the last ev
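Not the ring buffer code, but a generic sketch of the detect-and-retry
pattern the description refers to, where a reader notices a concurrent
writer and restarts instead of consuming torn data:

#include <stdatomic.h>

struct slot {
	_Atomic unsigned int seq;	/* odd while a writer is active */
	int data;
};

static int read_slot(struct slot *s)
{
	unsigned int start;
	int val;

	do {
		/* wait out an active writer */
		while ((start = atomic_load_explicit(&s->seq,
				memory_order_acquire)) & 1)
			;
		val = s->data;
		atomic_thread_fence(memory_order_acquire);
		/* retry if the sequence moved underneath us */
	} while (atomic_load_explicit(&s->seq,
				      memory_order_relaxed) != start);
	return val;
}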