Add a bounds check on values read from shared memory in the tx path. In
cases where the VM is misbehaving, the transport should print a warning
and exit when bogus values could cause an out-of-bounds read.
Link:
https://git.codelinaro.org/clo/la/kernel/msm-5.10/-/commit/32d9c3a2f2b6a4d1fc48d687119
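The patch body is not quoted above, so purely as an illustration of the
pattern being described (all names below are made up, not taken from the
commit), a tx-side helper would validate any VM-controlled offset/length
before touching the shared buffer:

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Hypothetical descriptor layout; both fields are written by the VM. */
struct shm_tx_desc {
	u32 offset;	/* offset of the payload in the shared buffer */
	u32 len;	/* payload length */
};

/*
 * Reject VM-provided offset/len before dereferencing the shared buffer.
 * buf_size is the size of the mapping owned by this side.
 */
static int tx_validate_desc(const struct shm_tx_desc *desc, size_t buf_size)
{
	if (desc->len > buf_size || desc->offset > buf_size - desc->len) {
		pr_warn("tx: bogus descriptor (off=%u len=%u buf=%zu), bailing out\n",
			desc->offset, desc->len, buf_size);
		return -EINVAL;
	}
	return 0;
}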
The pull request you sent on Wed, 10 Jan 2024 23:26:32 -0800:
> git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git
> tags/libnvdimm-for-6.8
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/a3cc31e75185f9b1ad8dc45eac77f8de788dc410
Thank you!
Implement the port for a given CID as an input argument instead of using
the hardcoded value '1234'. This allows running different test instances
on a single CID. The port argument is not a required parameter; if it is
not set, the default value '1234' is used - thus we preserve the previous
behaviour.
Signed-off-by:
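A minimal userspace sketch of that behaviour (the '-p' option name and the
printout are assumptions; the real test plumbing is not shown in this
preview):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DEFAULT_PORT 1234

int main(int argc, char **argv)
{
	unsigned int port = DEFAULT_PORT;	/* fall back to 1234 */
	int opt;

	while ((opt = getopt(argc, argv, "p:")) != -1) {
		switch (opt) {
		case 'p':
			port = strtoul(optarg, NULL, 0);
			break;
		default:
			fprintf(stderr, "usage: %s [-p port]\n", argv[0]);
			return 1;
		}
	}

	printf("using port %u\n", port);
	/* ... set up the vsock listener/connection on (cid, port) ... */
	return 0;
}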
On Sun, 17 Dec 2023 19:44:56 -0600, Huang, Kai wrote:
>
> The point is, with or w/o this patch, you can only reclaim 16 EPC pages
> in one function call (as you have said you are going to remove
> SGX_NR_TO_SCAN_MAX, which is a cipher to both of us). The only
> difference I can see is,
On Fri, 12 Jan 2024 10:06:41 -0500
Steven Rostedt wrote:
> I'm thinking both may be good, as the number of dropped events is not
> added if there's no room to put it at the end of the ring buffer. When
> there's no room, it just sets a flag that there were missed events but
> doesn't give how man
On 2023-12-18 09:52:05 [+0100], Daniel Borkmann wrote:
> Hi Sebastian,
Hi Daniel,
> Please exclude netkit from this set given it does not support XDP, but
> instead only accepts tc BPF typed programs.
okay, thank you.
> Thanks,
> Daniel
Sebastian
On Fri, 12 Jan 2024 09:13:02 +
Vincent Donnefort wrote:
> > > +
> > > + unsigned long subbufs_touched;
> > > + unsigned long subbufs_lost;
> > > + unsigned long subbufs_read;
> >
> > Now I'm thinking we may not want this exported, as I'm not sure it's
> > useful.
>
> touched and
On Fri, 12 Jan 2024 08:53:44 -0500
Steven Rostedt wrote:
> > // We managed to open the directory so we have permission to list
> > // directory entries in "xfs".
> > fd = open("/sys/kernel/tracing/events/xfs");
> >
> > // Remove ownership so we can't open the directory anymore
> > chown("/sys/ke
On Fri, 12 Jan 2024 09:27:24 +0100
Christian Brauner wrote:
> On Thu, Jan 11, 2024 at 04:53:19PM -0500, Steven Rostedt wrote:
> > On Thu, 11 Jan 2024 22:01:32 +0100
> > Christian Brauner wrote:
> >
> > > What I'm pointing out in the current logic is that the caller is
> > > taxed twice:
> > >
On Mon, 8 Jan 2024 at 09:44, Daniel Baluta wrote:
>
> On Fri, Jan 5, 2024 at 6:02 PM Ulf Hansson wrote:
> >
> > Updates in v2:
> > - Ccing Daniel Baluta and Iuliana Prodan on the NXP remoteproc patches
> >   to request help with testing.
> > - Fixed NULL pointer bug in pa
From: Masami Hiramatsu (Google)
Update the fprobe documentation for the new fprobe on function-graph
tracer. This includes some behavior changes and the pt_regs to
ftrace_regs interface change.
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v2:
- Update @fregs parameter explanation.
---
Doc
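For digest readers, a rough sketch of what a handler looks like on the
ftrace_regs interface the updated documentation describes; the prototype
and accessor use below are assumptions based on this series, not quoted
from it:

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

/* Arguments are read through ftrace_regs accessors instead of pt_regs. */
static int sample_entry_handler(struct fprobe *fp, unsigned long entry_ip,
				unsigned long ret_ip,
				struct ftrace_regs *fregs, void *entry_data)
{
	unsigned long arg0 = ftrace_regs_get_argument(fregs, 0);

	pr_info("entered %pS, arg0=0x%lx\n", (void *)entry_ip, arg0);
	return 0;
}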
From: Masami Hiramatsu (Google)
This test case repeatedly defines and undefines the fprobe dynamic event
to ensure that fprobe does not cause any issues with such operations.
Signed-off-by: Masami Hiramatsu (Google)
---
.../test.d/dynevent/add_remove_fprobe_repeat.tc| 19 ++
From: Masami Hiramatsu (Google)
Since the fprobe event does not support maxactive anymore, stop
testing the maxactive syntax error checking.
Signed-off-by: Masami Hiramatsu (Google)
---
.../ftrace/test.d/dynevent/fprobe_syntax_errors.tc |4 +---
1 file changed, 1 insertion(+), 3 deletions(
From: Masami Hiramatsu (Google)
Remove the deprecated fprobe::nr_maxactive. This also makes fprobe events
reject the maxactive number.
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v2:
- Newly added.
---
include/linux/fprobe.h |2 --
kernel/trace/trace_fprobe.c | 44 +
From: Masami Hiramatsu (Google)
Rewrite fprobe implementation on function-graph tracer.
Major API changes are:
- 'nr_maxactive' field is deprecated.
- This depends on CONFIG_DYNAMIC_FTRACE_WITH_ARGS or
!CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and
CONFIG_HAVE_FUNCTION_GRAPH_FREGS. So cur
From: Masami Hiramatsu (Google)
Enable the kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is
converted from ftrace_regs by ftrace_partial_regs(), thus some registers
may always return 0. But it should be enough for function entry (accessing
arguments) and exit (accessing the return value).
Sig
From: Masami Hiramatsu (Google)
Allow fprobe events to be enabled with CONFIG_DYNAMIC_FTRACE_WITH_ARGS.
With this change, fprobe events mostly use ftrace_regs instead of pt_regs.
Note that if the arch doesn't enable HAVE_PT_REGS_COMPAT_FTRACE_REGS,
fprobe events will not be able to be used from p
From: Masami Hiramatsu (Google)
Add ftrace_fill_perf_regs() which should be compatible with
perf_fetch_caller_regs(). In other words, the pt_regs returned from
ftrace_fill_perf_regs() must satisfy 'user_mode(regs) == false' and can be
used for stack tracing.
Signed-off-by: Masami Hiramat
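An illustrative x86-64 flavour of such a helper, inferred only from the
description above (it assumes the x86 ftrace_regs wraps a pt_regs, as
elsewhere in this series); not the series' actual code:

#include <asm/ptrace.h>
#include <asm/segment.h>
#include <asm/ftrace.h>

/*
 * Copy just enough state from ftrace_regs into pt_regs that
 * user_mode(regs) is false and the unwinder has an ip/sp to start from.
 */
#define ftrace_fill_perf_regs(fregs, _regs) do {		\
		(_regs)->ip	= (fregs)->regs.ip;		\
		(_regs)->sp	= (fregs)->regs.sp;		\
		(_regs)->cs	= __KERNEL_CS;	/* kernel mode */ \
		(_regs)->flags	= 0;				\
	} while (0)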
From: Masami Hiramatsu (Google)
Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs.
If the architecture defines its own ftrace_regs, this copies partial
registers to pt_regs and returns it. If not, ftrace_regs is the same as
pt_regs and ftrace_partial_regs() will return ftrace_re
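A usage sketch under the semantics described above: a callback that only
receives ftrace_regs hands a pt_regs to older consumers; on arches with
their own ftrace_regs, registers that ftrace did not save may simply read
as zero.

#include <linux/ftrace.h>
#include <linux/ptrace.h>
#include <linux/printk.h>

static void example_callback(unsigned long ip, struct ftrace_regs *fregs)
{
	struct pt_regs storage;
	/* Either a cast of fregs or a partial copy into 'storage'. */
	struct pt_regs *regs = ftrace_partial_regs(fregs, &storage);

	pr_info("traced ip=%pS, regs ip=%lx\n",
		(void *)ip, instruction_pointer(regs));
}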
From: Masami Hiramatsu (Google)
Change the fprobe exit handler to use the ftrace_regs structure instead of
pt_regs. This also introduces HAVE_PT_REGS_TO_FTRACE_REGS_CAST, which means
the ftrace_regs's memory layout is equal to pt_regs so that one can be cast
to the other. Fprobe introduces a new dependen
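A sketch of an exit handler on the ftrace_regs-based interface; the
parameter list is an assumption based on this series, and the point is only
that the return value now comes from an ftrace_regs accessor rather than
from pt_regs:

#include <linux/fprobe.h>
#include <linux/ftrace.h>
#include <linux/printk.h>

static void sample_exit_handler(struct fprobe *fp, unsigned long entry_ip,
				unsigned long ret_ip,
				struct ftrace_regs *fregs, void *entry_data)
{
	unsigned long retval = ftrace_regs_get_return_value(fregs);

	pr_info("%pS returned 0x%lx\n", (void *)entry_ip, retval);
}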
From: Masami Hiramatsu (Google)
This allows fprobes to be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, so that we can enable fprobe
on arm64.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Florent Revest
---
Changes in v6:
- Keep using SAVE_REG
From: Masami Hiramatsu (Google)
Enable CONFIG_HAVE_FUNCTION_GRAPH_FREGS on arm64. Note that this
depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS, which is enabled if the
compiler supports "-fpatchable-function-entry=2". If not, it
continues to use ftrace_ret_regs.
Signed-off-by: Masami Hiramatsu (Google)
From: Masami Hiramatsu (Google)
Support HAVE_FUNCTION_GRAPH_FREGS on x86-64, which saves ftrace_regs
on the stack in the ftrace_graph return trampoline so that the callbacks
can access registers via the ftrace_regs APIs.
Note that this only recovers the 'rax' and 'rdx' registers because other
registers are
From: Masami Hiramatsu (Google)
Add a new return handler to fgraph_ops as 'retregfunc' which takes
parent_ip and ftrace_regs instead of ftrace_graph_ret. This handler
is available only if the arch supports CONFIG_HAVE_FUNCTION_GRAPH_FREGS.
Note that 'retfunc' and 'retregfunc' are mutually exclus
From: Masami Hiramatsu (Google)
Add a new entry handler to fgraph_ops as 'entryregfunc' which takes
parent_ip and ftrace_regs. Note that 'entryfunc' and 'entryregfunc'
are mutually exclusive; you can set only one of them.
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
- Updat
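A combined sketch of how the 'entryregfunc'/'retregfunc' callbacks from
the last two patches might be wired up. The prototypes are inferred from
the descriptions (parent_ip plus ftrace_regs) and are assumptions, not the
series' definitive API:

#include <linux/ftrace.h>
#include <linux/printk.h>

static int sample_entryregfunc(unsigned long func, unsigned long parent_ip,
			       struct ftrace_regs *fregs,
			       struct fgraph_ops *gops)
{
	pr_info("enter %pS (from %pS), arg0=0x%lx\n", (void *)func,
		(void *)parent_ip, ftrace_regs_get_argument(fregs, 0));
	return 1;	/* trace this function */
}

static void sample_retregfunc(unsigned long func, unsigned long parent_ip,
			      struct ftrace_regs *fregs,
			      struct fgraph_ops *gops)
{
	pr_info("exit  %pS, ret=0x%lx\n", (void *)func,
		ftrace_regs_get_return_value(fregs));
}

static struct fgraph_ops sample_gops = {
	.entryregfunc	= sample_entryregfunc,	/* instead of .entryfunc */
	.retregfunc	= sample_retregfunc,	/* instead of .retfunc */
};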
From: Steven Rostedt (VMware)
Add a boot-up selftest that passes variables from a function entry to a
function exit, and make sure that they do get passed around.
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v2:
- Add reserved size test.
- U
From: Masami Hiramatsu (Google)
Improve the push and data reserve operations on the shadow stack for
several sequential interrupts.
To push a ret_stack or data entry on the shadow stack, we need to
prepare an index (offset) entry before updating the stack pointer
(curr_ret_stack) so that the unwinder fro
From: Steven Rostedt (VMware)
Added functions that can be called by a fgraph_ops entryfunc and retfunc to
store state between the entry of the function being traced and the exit of
the same function. The fgraph_ops entryfunc() may call
fgraph_reserve_data() to store up to 32 words onto the task's
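The intended flow, per the description above: reserve a few words at entry,
fill them in, and read them back when the same function returns. The exact
prototypes of fgraph_reserve_data()/fgraph_retrieve_data() (and the
gops->idx handle) are assumptions here; only the flow is the point.

#include <linux/ftrace.h>
#include <linux/trace_clock.h>
#include <linux/printk.h>

static int sample_entry(struct ftrace_graph_ent *trace,
			struct fgraph_ops *gops)
{
	u64 *ts = fgraph_reserve_data(gops->idx, sizeof(*ts));

	if (!ts)
		return 0;	/* no room on the shadow stack, skip */
	*ts = trace_clock_local();	/* state carried to the exit */
	return 1;
}

static void sample_return(struct ftrace_graph_ret *trace,
			  struct fgraph_ops *gops)
{
	int size;
	u64 *ts = fgraph_retrieve_data(gops->idx, &size);

	if (ts)
		pr_info("%pS took %llu ns\n", (void *)trace->func,
			trace_clock_local() - *ts);
}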
From: Steven Rostedt (VMware)
The use of the task->trace_recursion for the logic used for the function
graph no-trace was a bit of an abuse of that variable. Now that there
exists global vars that are per stack for registered graph traces, use
that instead.
Signed-off-by: Steven Rostedt (VMware)
From: Steven Rostedt (VMware)
The use of the task->trace_recursion for the logic used for the function
graph depth was a bit of an abuse of that variable. Now that there
exists global vars that are per stack for registered graph traces, use that
instead.
Signed-off-by: Steven Rostedt (VMware)
S
From: Steven Rostedt (VMware)
The use of the task->trace_recursion for the logic used for the
set_graph_function was a bit of an abuse of that variable. Now that there
exists global vars that are per stack for registered graph traces, use that
instead.
Signed-off-by: Steven Rostedt (VMware)
Si
From: Steven Rostedt (VMware)
Add a "task variables" array on the task's shadow ret_stack that holds
one long for each possible registered fgraph_ops. That's a total
of 16 longs, taking up 8 * 16 = 128 bytes (out of a 4k page).
This will allow for fgraph_ops to do specific features on a per
From: Masami Hiramatsu (Google)
Since the fgraph_array index is used for the bitmap on the shadow
stack, it may leave some entries after a function_graph instance is
removed. Thus if another instance reuses the fgraph_array index soon
after releasing it, the fgraph may get confused and call the newer c
From: Steven Rostedt (VMware)
Allow instances to have their own ftrace_ops as part of the fgraph_ops,
which makes the function_graph tracer filter on the set_ftrace_filter file
of the instance and not the top instance.
This also changes how the function_graph handles multiple instances on the
shado
From: Steven Rostedt (VMware)
Some of the flags for ftrace_startup() may be exposed even when
CONFIG_DYNAMIC_FTRACE is not configured in. This is fine as the difference
between dynamic ftrace and static ftrace is done within the internals of
ftrace itself. No need to have use cases fail to compil
From: Steven Rostedt (VMware)
Now that function graph tracing can handle more than one user, allow it to
be enabled in the ftrace instances. Note, the filtering of the functions is
still joined by the top level set_ftrace_filter and friends, as well as the
graph and nograph files.
Signed-off-by:
From: Steven Rostedt (VMware)
Pass the fgraph_ops structure to the function graph callbacks. This will
allow callbacks to add a descriptor to a fgraph_ops private field that will
be added in the future and use it for the callbacks. This will be useful
when more than one callback can be registered
From: Steven Rostedt (VMware)
The function pointers ftrace_graph_entry and ftrace_graph_return are no
longer called via the function_graph tracer. Instead, an array structure is
now used that will allow for multiple users of the function_graph
infrastructure. The variables are still used by the a
From: Steven Rostedt (VMware)
Allow for multiple users to attach to function graph tracer at the same
time. Only 16 simultaneous users can attach to the tracer. This is because
there's an array that stores the pointers to the attached fgraph_ops. When
a function being traced is entered, each of t
From: Steven Rostedt (VMware)
Add an array structure that will eventually allow the function graph tracer
to have up to 16 simultaneous callbacks attached. It's an array of 16
fgraph_ops pointers, that is assigned when one is registered. On entry of a
function the entry of the first item in the a
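Conceptually (array size and helper name below are illustrative, not the
patch's code), the structure is a fixed array of fgraph_ops pointers that
is filled at registration time and walked on function entry:

#include <linux/ftrace.h>
#include <linux/errno.h>

#define FGRAPH_ARRAY_SIZE 16

static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];

static int fgraph_array_add(struct fgraph_ops *gops)
{
	int i;

	for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
		if (!fgraph_array[i]) {
			fgraph_array[i] = gops;
			return i;	/* slot index identifies this user */
		}
	}
	return -ENOSPC;	/* all 16 slots are taken */
}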
From: Steven Rostedt (VMware)
Instead of using "ALIGN()", use BUILD_BUG_ON() as the structures should
always be divisible by sizeof(long).
Link:
http://lkml.kernel.org/r/2019052444.gi2...@hirez.programming.kicks-ass.net
Suggested-by: Peter Zijlstra
Signed-off-by: Steven Rostedt (VMware)
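The suggested pattern, shown on ftrace_ret_stack as an example: instead of
padding with ALIGN(), fail the build if the structure stored on the shadow
stack is not already a multiple of sizeof(long).

#include <linux/build_bug.h>
#include <linux/ftrace.h>

static inline void fgraph_check_ret_stack_size(void)
{
	/* Fails the build if the layout stops being long-aligned. */
	BUILD_BUG_ON(sizeof(struct ftrace_ret_stack) % sizeof(long) != 0);
}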
From: Steven Rostedt (VMware)
In order to make it possible to have multiple callbacks registered with the
function_graph tracer, the retstack needs to be converted from an array of
ftrace_ret_stack structures to an array of longs. This will allow storing
the list of callbacks on the stack for th
From: Masami Hiramatsu (Google)
Add the ftrace_regs definition for x86_64 in the ftrace header to
clarify which registers will be accessible from ftrace_regs.
Signed-off-by: Masami Hiramatsu (Google)
---
Changes in v3:
- Add rip to be saved.
Changes in v2:
- Newly added.
---
arch/x86/include/a
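The shape such a definition takes (the real one lives in
arch/x86/include/asm/ftrace.h): ftrace_regs simply wraps pt_regs, with a
comment spelling out which members the trampoline actually saves. The
register list below is an assumption based on the x86-64 calling
convention, not a quote from the patch.

#include <asm/ptrace.h>

struct ftrace_regs {
	/*
	 * On function entry, only the argument registers (rdi, rsi, rdx,
	 * rcx, r8, r9), rsp, rbp, orig_rax and rip are expected to be
	 * valid; the rest of pt_regs may be stale.
	 */
	struct pt_regs regs;
};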
From: Masami Hiramatsu (Google)
Rename ftrace_regs_return_value to ftrace_regs_get_return_value to match
the other ftrace_regs_get/set_* APIs.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Mark Rutland
---
Changes in v6:
- Moved to top of the series.
Changes in v3:
- Newly added.
---
From: Masami Hiramatsu (Google)
To clarify what will be expected of ftrace_regs, add a comment to the
architecture-independent definition of ftrace_regs.
Signed-off-by: Masami Hiramatsu (Google)
Acked-by: Mark Rutland
---
Changes in v3:
- Add instruction pointer
Changes in v2:
- newl
From: Masami Hiramatsu (Google)
The commit 60c8971899f3 ("ftrace: Make DIRECT_CALLS work WITH_ARGS
and !WITH_REGS") changed DIRECT_CALLS to use SAVE_ARGS when there
are multiple ftrace_ops at the same function, but since x86 only
supports jumping to the direct_call from ftrace_regs_caller, when we
Hi,
Here is the 6th version of the series to re-implement the fprobe on the
function-graph tracer. The previous version is:
https://lore.kernel.org/all/170290509018.220107.1347127510564358608.stgit@devnote2/
This version fixes a use-after-unregister bug and an arm64 stack unwinding
bug [13/36], and adds an im
On Thu, Jan 11, 2024 at 06:23:20PM -0500, Steven Rostedt wrote:
> On Thu, 11 Jan 2024 11:34:58 -0500
> Mathieu Desnoyers wrote:
>
>
> > The LTTng kernel tracer has supported mmap'd buffers for nearly 15 years
> > [1],
> > and has a lot of similarities with this patch series.
> >
> > LTTng has
On Thu, Jan 11, 2024 at 04:53:19PM -0500, Steven Rostedt wrote:
> On Thu, 11 Jan 2024 22:01:32 +0100
> Christian Brauner wrote:
>
> > What I'm pointing out in the current logic is that the caller is
> > taxed twice:
> >
> > (1) Once when the VFS has done inode_permission(MAY_EXEC, "xfs")
> > (2)